Code coverage in pipelines

Code coverage per feature is displayed in pipelines, aiding pre-merge quality gates

Written by Toby Newman
Updated over 2 weeks ago

Gearset can run your Apex unit tests as part of the validation of features in your pipeline. Within these checks, Gearset will report back the code coverage result of any tests run.

In the example below, feature TN-4059[...] is queued and ready to merge into my Pre-Prod environment. The coverage from the validation reports back at 95.6% 🎉

Which classes are being checked?

This depends on the testing level specified in the underlying CI job. That level dictates which test classes run, which in turn determines which Apex classes have their code coverage checked. Consider these two examples:

Example 1

  • The CI job's validation is set to: "Run tests > Specify tests to run > Tests that match naming conventions for Apex changes in the deployment"


    You can configure it in the Test section of your CI job.

  • In my feature branch I commit ClassA and ClassATest, as I want to deploy these classes to Pre-Prod. No other changes are committed.

  • Due to the testing level in my CI job, Gearset automatically picks up ClassATest among the committed components, and runs only that test class.

  • Code coverage on ClassA comes back at 89%.
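The naming-convention selection described above can be sketched roughly as follows. This is a minimal illustration only: Gearset's actual matching rules are internal to the product, and the function name and candidate suffixes here are hypothetical.

```python
# Illustrative sketch only, not Gearset's implementation.
# Models the convention of pairing ClassA with a test class like ClassATest
# when selecting which tests to run for the Apex classes in a deployment.

def select_tests(changed_classes, all_classes):
    """Return test classes whose names match a changed class by convention."""
    selected = set()
    for cls in changed_classes:
        # Hypothetical naming patterns a convention-based matcher might use
        for candidate in (f"{cls}Test", f"Test{cls}", f"{cls}_Test"):
            if candidate in all_classes:
                selected.add(candidate)
    return sorted(selected)

org_classes = {"ClassA", "ClassATest", "ClassB", "ClassBTest"}
print(select_tests(["ClassA"], org_classes))  # only ClassATest is picked up
```

Because only ClassATest is selected, only ClassA's coverage is exercised by this validation, matching the behaviour described in Example 1.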

Example 2

  • The CI job's validation is set to "Run tests: All your tests" (as in the image below).

  • As in Example 1, I commit ClassA and ClassATest to my feature branch.

  • Gearset runs ClassATest along with every other test class in the target org.

  • Code coverage comes back at 95.6% (the 89% from ClassATest combined with the results of all the other test classes in the org).
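The arithmetic behind a combined figure like this can be sketched with hypothetical line counts. Salesforce reports org-wide coverage as total covered lines divided by total coverable lines across all classes, not as an average of per-class percentages; the class names and numbers below are invented for illustration.

```python
# Hypothetical line counts; org-wide coverage is total covered lines
# over total coverable lines, not an average of per-class percentages.
classes = {
    "ClassA": {"covered": 89,  "total": 100},  # 89% from ClassATest
    "ClassB": {"covered": 100, "total": 100},  # covered by other org tests
    "ClassC": {"covered": 50,  "total": 50},   # covered by other org tests
}

covered = sum(c["covered"] for c in classes.values())
coverable = sum(c["total"] for c in classes.values())
print(f"{100 * covered / coverable:.1f}%")  # 95.6%
```

This is why a class at 89% on its own can sit inside a higher combined figure once well-covered classes elsewhere in the org are included.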

Why?

Teams may choose to use this information as part of their review process. For example, while Salesforce mandates a minimum of 75% coverage for classes deploying to Production (and will fail validation for any class not reaching this minimum), we may decide that leaving up to a quarter of our codebase potentially uncovered by tests is still too big a gap.

Our team implements a "minimum 85%" rule. When our Team Lead reviews a PR in pipelines, they use this coverage percentage as part of their decision to approve the deployment.
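A threshold rule like this can be automated as a simple gate in a review script. The sketch below is hypothetical, not a Gearset feature, and assumes per-class percentages have already been pulled from the validation report.

```python
# Hypothetical review helper, not part of Gearset: fail the gate when any
# class in the PR falls below the team's agreed minimum (85% here, which
# is stricter than Salesforce's 75% deployment requirement).

TEAM_MINIMUM = 85.0

def coverage_gate(coverage_by_class):
    """Return (passed, failures) for a dict of class name -> coverage %."""
    failures = {name: pct for name, pct in coverage_by_class.items()
                if pct < TEAM_MINIMUM}
    return not failures, failures

passed, failures = coverage_gate({"ClassA": 89.0, "ClassB": 82.5})
print(passed, failures)  # False {'ClassB': 82.5}
```

A reviewer could run this against the reported percentages and block approval until every class clears the team's floor.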

Considerations

  • Apex tests will kick off automatically as soon as a feature targets a pipelines environment, provided the 'Validate pull requests...' option is enabled in your CI job.

    The following checks must also complete and pass before the code coverage check triggers:

  • The total code coverage in the Salesforce org is a cumulative total since the last time the org's results were reset. This reset occurs automatically during each deployment, and can also be done manually. Resetting your tests ensures that coverage metrics accurately reflect the most recent changes and gives a clean slate for evaluating current code quality.

  • Classes for which a corresponding test class isn't run will return 0% coverage, which can in turn lower the overall coverage reported for the feature. It's important that Gearset runs an appropriate testing level for your sandbox's / org's requirements.*

    A best practice is to mandate that every Apex class included in a PR is accompanied by a test class. This can be checked manually at the review stage by the person(s) reviewing and approving the work.

    • *If your CI job is set to "Run tests: All your tests", Gearset will check all classes, and any class not covered by a test in the sandbox will return 0%.

      Using the "Specify tests to run" option gives more flexibility over which classes are checked, avoiding checks on classes that aren't relevant to the deployment in question.
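The cumulative behaviour of org-wide coverage described in the considerations above can be sketched as a small model. This is illustrative only; Salesforce stores real coverage results internally, and the class below is invented.

```python
class OrgCoverageModel:
    """Toy model: per-class results accumulate across runs until a reset."""

    def __init__(self):
        self.results = {}  # class name -> (covered lines, coverable lines)

    def record_run(self, run_results):
        self.results.update(run_results)  # newer runs overwrite per-class data

    def reset(self):
        self.results.clear()  # the clean slate a deployment or manual reset gives

    def org_coverage(self):
        covered = sum(c for c, _ in self.results.values())
        total = sum(t for _, t in self.results.values())
        return 100 * covered / total if total else 0.0

org = OrgCoverageModel()
org.record_run({"ClassA": (89, 100)})
org.record_run({"ClassB": (100, 100)})
print(org.org_coverage())  # earlier ClassB results still count until a reset
org.reset()
```

The point of the model: results recorded in earlier runs keep contributing to the org total until a reset clears them, which is why resetting before evaluating current code quality matters.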
