
The Gearset reporting API gives you the information you need to measure your DevOps performance. It exposes a wide range of data for calculating DORA metrics for pipelines. In this walkthrough we'll use the reporting API's dynamic documentation to query it.

The DORA metrics

DORA (DevOps Research and Assessment) is a set of four metrics, pioneered by the team at Google, which have become the universal standard by which we measure DevOps performance:

  • Deployment frequency - how often an organization successfully releases to production

  • Lead time for changes - the amount of time it takes a pull request to get into production

  • Time to restore - the time it takes to recover from a failure in production

  • Change failure rate - the percentage of deployments causing a failure in production

The first two metrics measure the speed of our release pipeline, while the other two focus on its stability. Together they give team leads and managers a crucial view into their DevOps process.
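To make the four definitions concrete, here is a minimal sketch that computes each metric from a small set of hypothetical deployment records (the record shape and values are invented for illustration, not the reporting API's response format):

```python
from datetime import datetime, timedelta

# Hypothetical deployment records for a 14-day window. Each has the time the
# pull request was merged, the time it reached production, and whether the
# changes caused a production failure (with the time the fix landed).
deployments = [
    {"merged": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 1, 17), "failed": False},
    {"merged": datetime(2024, 1, 3, 10), "deployed": datetime(2024, 1, 4, 12), "failed": True,
     "restored": datetime(2024, 1, 4, 15)},
    {"merged": datetime(2024, 1, 8, 9), "deployed": datetime(2024, 1, 8, 11), "failed": False},
    {"merged": datetime(2024, 1, 10, 14), "deployed": datetime(2024, 1, 11, 9), "failed": False},
]
days_in_window = 14

# Deployment frequency: successful production releases per day
deployment_frequency = len(deployments) / days_in_window

# Lead time for changes: mean time from merge to production
lead_times = [d["deployed"] - d["merged"] for d in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments causing a production failure
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# Time to restore: mean time from a failed deployment to its fix
restore_times = [d["restored"] - d["deployed"] for d in failures]
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)
```

With this sample data, the change failure rate is 0.25 (one failure in four deployments) and the time to restore is three hours.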

Our aim with the Gearset reporting API is to provide a summary of the DORA metrics, but also to go further and allow you to explore your own data to dig into the resilience and velocity of your DevOps process. With this in mind, for each metric we expose two API endpoints, one providing the raw data which can be used to calculate the metric, and another which presents aggregate information required to directly plot the DORA metric on a line graph.

Querying the reporting API - lead time for changes

To start, you'll need an API access token, which you can generate by following our tutorial Creating a Gearset API access token.

First, we need the ID of the pipeline. To get it, go to the pipelines page and copy the required ID from the browser's address bar. For Lead time for changes we need the pipeline ID; some other metrics require the environment ID instead.

Once you have copied the pipeline ID, we can create a request using the Gearset API dynamic documentation. First, we log in with our access token.

Enter the access token we generated earlier, prefixed with the word "token".
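As a quick illustration (with a placeholder token value), the resulting Authorization header looks like this:

```python
# Placeholder value; substitute the access token you generated earlier.
API_TOKEN = "your-gearset-api-token"

# Gearset expects the token value prefixed with the word "token".
headers = {"Authorization": f"token {API_TOKEN}"}

print(headers["Authorization"])  # token your-gearset-api-token
```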

Now we can go to the Lead time for changes section, and click on the "Try it out" button.

We then fill in the ID and the time frame to query, and click "Execute".

This sends a request to the reporting API. Once executed, the page shows the curl command that was used (so you can reuse it elsewhere) along with the response, which can be copied or downloaded.
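If you'd rather build the same request in code, here is a sketch using Python's standard library. The endpoint path and parameter names below are illustrative assumptions only; copy the exact URL and parameters from the curl command shown by the dynamic documentation.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Illustrative endpoint path; the dynamic documentation shows the real one.
BASE_URL = "https://api.gearset.com/public/lead-time"

# Illustrative parameter names and placeholder values.
params = {
    "PipelineId": "your-pipeline-id",   # copied from the pipelines page URL
    "StartDate": "2024-01-01T00:00:00Z",
    "EndDate": "2024-01-31T23:59:59Z",
}

request = Request(
    f"{BASE_URL}?{urlencode(params)}",
    headers={"Authorization": "token your-api-token"},
)

# Sending it with urllib.request.urlopen(request) would return the same
# response the dynamic documentation displays.
print(request.full_url)
```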

A similar process is used to query the aggregate endpoint for lead time. Instead of returning a list of pull requests, it returns a dictionary of "end" environments in the pipeline, each of which contains groups of data for the mean, maximum and minimum lead time. These can be plotted directly on a graph using other tools.
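As a rough sketch of what working with that aggregate response looks like, here is a hypothetical payload shaped per the description above (one entry per "end" environment, each holding mean/maximum/minimum data points; the field names are illustrative, not the API's), reduced to plottable series:

```python
# Hypothetical aggregate response; field names are assumptions for
# illustration - check the real response in the dynamic documentation.
aggregate_response = {
    "Production": {
        "items": [
            {"date": "2024-01-01", "mean": 12.5, "maximum": 30.0, "minimum": 2.0},
            {"date": "2024-01-08", "mean": 8.0, "maximum": 20.0, "minimum": 1.5},
        ]
    }
}

# Extract plottable series per environment: x = date, y = lead time
series = {}
for environment, data in aggregate_response.items():
    series[environment] = {
        "dates": [point["date"] for point in data["items"]],
        "means": [point["mean"] for point in data["items"]],
    }
```

The resulting lists can be handed straight to a charting tool of your choice.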

The same process we used to query the API for Lead time for changes can be used to get the other three DORA metrics.

How does the Gearset Reporting API identify failures?

It's important to note that we present two different types of failed deployments, depending on which endpoints are being used.

Under deployment frequency, the status of a deployment reflects whether it completed - that is, whether the changes were applied to the target. This is the same status you'll see on the deployment history page.

The reliability endpoints use a different kind of failure: the deployment itself completed successfully (i.e. the changes were applied to the org as expected), but those changes caused an issue in the Salesforce org, and a fix was needed to restore the correct behaviour. So when we talk about change failure rate and time to restore, we're not asking whether the deployment failed, but whether the changes it contained were correct or caused a failure.

The Gearset reporting API uses a set of heuristics to detect whether some changes were a failure:

  • Hotfix: if a pull request is merged directly into the production branch, we treat it as a hotfix. When a hotfix is identified, the API presumes that the previous deployment to production was a failure, marks it as such, and marks the hotfix as the success/fix.

  • Revert: if a pull request is a revert pull request (as generated by default by most git providers), the API identifies the pull request it reverts. If the original and the revert are in different deployments, Gearset marks the deployment containing the original as a failure and the revert as the success/fix.

  • Rollback: if a deployment is rolled back using Gearset's rollback functionality, the original deployment is marked as a failure and the rollback as the success/fix.

Using these heuristics, the reporting API can identify failures for the purpose of measuring a team's change failure rate and time to restore.
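The hotfix and revert heuristics above can be sketched as a simple classifier. This is a simplified illustration of the logic, not Gearset's implementation; the field names (target_branch, reverts) are invented for the example:

```python
def classify_pull_request(pr, production_branch="production"):
    """Simplified sketch of the failure heuristics described above."""
    if pr["target_branch"] == production_branch:
        # Hotfix: merged straight into the production branch. The previous
        # production deployment is presumed to have been a failure, and this
        # hotfix is the success/fix.
        return "hotfix"
    if pr.get("reverts") is not None:
        # Revert: the deployment containing the original pull request is
        # marked as a failure, and this revert is the success/fix.
        return "revert"
    return "normal"

# Hypothetical pull requests exercising each branch of the classifier
prs = [
    {"id": 101, "target_branch": "main", "reverts": None},
    {"id": 102, "target_branch": "production", "reverts": None},
    {"id": 103, "target_branch": "main", "reverts": 101},
]
labels = [classify_pull_request(pr) for pr in prs]
print(labels)  # ['normal', 'hotfix', 'revert']
```

The rollback heuristic isn't shown here because it keys off Gearset's own roll back records rather than pull request metadata.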


If you have thoughts or feedback on this document or the reporting API functionality, please get in touch with us via Intercom. We would love to hear your suggestions. To look at some reporting API commands, you can access the API reference here.
