Considerations and requirements
You'll need either a Teams or an Enterprise Automation Platform license to use this feature.
Three of the four metrics require the Gearset pipelines feature with the promotion branch strategy.
The Gearset Reporting API allows you to get the information you need to measure your DevOps performance. It presents a wide range of data to enable the calculation of DORA metrics for pipelines. In this walkthrough, we'll use the Reporting API's dynamic documentation to query it.
The DORA metrics
DORA (DevOps Research and Assessment) metrics are a group of four metrics pioneered by the DORA research team, now part of Google. They have become the industry standard for measuring DevOps performance:
Deployment frequency — how often an organization successfully releases to production.
Lead time for changes — the amount of time it takes a pull request to get into production.
Time to restore — the time it takes to recover from a failure in production.
Change failure rate — the percentage of deployments causing a failure in production.
The first two metrics measure the speed of your release pipeline, while the last two focus on its stability. Together, they're a crucial way for team leads and managers to understand their DevOps process.
Our aim with the Gearset Reporting API is to provide a summary of the DORA metrics, but also to go further and allow you to explore your own data to dig into the resilience and velocity of your DevOps process. With this in mind, for each metric, we expose two API endpoints — one providing the raw data that can be used to calculate the metric, and another that presents aggregate information required to directly plot the DORA metric on a line graph.
Querying the Reporting API — lead time for changes
To start with, you need an API access token, which you can generate by following our tutorial Creating a Gearset API access token.
Next, we need to get the ID of the pipeline. To do this, go to the pipelines page and copy the required ID from the browser's URL bar. For lead time for changes we need the pipeline ID; some of the other metrics need the environment ID instead.
Once we've copied the pipeline ID, we can create a request using the Reporting API's dynamic documentation. First, we log in with our access token.
Enter the access token we generated earlier, prefixed with the word "token".
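Outside the dynamic documentation, the same authorization works from any HTTP client. Here's a minimal sketch in Python using the requests library; the token value is a placeholder, and the header format follows the "token" prefix described above:

```python
import requests

# Placeholder: use the access token generated earlier.
ACCESS_TOKEN = "your-access-token"

# The Reporting API expects the token prefixed with the word "token".
headers = {"Authorization": f"token {ACCESS_TOKEN}"}
```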
Now we can go to the Lead time for changes section, and click on the "Try it out" button.
We then fill in the ID and the time frame to query, and click "Execute". Note that the StartDate and EndDate fields must be in UTC and end with a capital Z to signify this (e.g. 2024-01-01T00:00:00Z).
This sends a request to the Reporting API. Once executed, the page presents the curl command that was used (so you can reuse it elsewhere) alongside the response, which can be copied or downloaded.
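If you'd prefer to script the call rather than use the documentation page, the same request can be made from any HTTP client. The sketch below is illustrative only: the endpoint URL is an assumption, so copy the exact one from the curl command the documentation shows after you click "Execute".

```python
import requests

ACCESS_TOKEN = "your-access-token"  # generated earlier
PIPELINE_ID = "your-pipeline-id"    # copied from the URL bar

# Assumed endpoint path, for illustration only; take the real URL from
# the curl command shown by the dynamic documentation.
url = f"https://api.gearset.com/public/reporting/lead-time/{PIPELINE_ID}"

params = {
    # Both dates must be UTC and end with a capital Z.
    "StartDate": "2024-01-01T00:00:00Z",
    "EndDate": "2024-01-31T23:59:59Z",
}

response = requests.get(url, params=params,
                        headers={"Authorization": f"token {ACCESS_TOKEN}"})
response.raise_for_status()
print(response.json())
```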
A similar process is used to query the aggregate endpoint for lead time. This time, instead of returning a list of pull requests, it returns a dictionary of "end" environments in the pipeline, each containing groups of data for the mean, maximum, and minimum lead time. These can be plotted directly onto a graph using other tools.
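As a sketch of how that aggregate response might be consumed, the snippet below assumes the structure described above; the exact key names used here (Mean, Maximum, Minimum) are assumptions, so verify them against a real response before building anything on top of them:

```python
# Assumed response shape: a dictionary keyed by "end" environment,
# each entry holding mean/maximum/minimum lead-time data.
data = response.json()

for environment, lead_times in data.items():
    # Key names below are assumptions; check them against a real response.
    print(environment)
    print("  mean:   ", lead_times["Mean"])
    print("  maximum:", lead_times["Maximum"])
    print("  minimum:", lead_times["Minimum"])
```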
The process we used to query the API for lead time for changes is the same one used to get the other three DORA metrics.
How does the Gearset Reporting API identify failures?
First, it's important to note that the API presents two different types of failed deployment, depending on which endpoints are being used.
For the deployment frequency endpoints, a deployment's status reflects whether the deployment was completed, i.e. whether its changes were applied to the target. This is the same status you'll see on the deployment history page.
By contrast, a failure for the reliability endpoints is one where the deployment itself completed successfully (i.e. changes were made to the org as expected), but those changes caused an issue in the Salesforce org, such that a fix was needed to restore the correct behaviour. So when we talk about change failure rate and time to restore, we're not asking whether the deployment failed, but whether the changes in the deployment were correct or failed in some way.
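To make that concrete, change failure rate is simply the share of completed deployments whose changes later turned out to be faulty. A worked example with made-up numbers:

```python
# Made-up numbers for illustration.
completed_deployments = 40  # deployments that successfully applied changes
faulty_deployments = 3      # completed deployments whose changes caused an issue

change_failure_rate = faulty_deployments / completed_deployments * 100
print(f"Change failure rate: {change_failure_rate:.1f}%")  # 7.5%
```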
The Gearset Reporting API uses a few heuristics to detect whether a set of changes was a failure. These heuristics are as follows:
Hotfix: if a pull request is merged directly into the production branch, we treat it as a hotfix. When a hotfix is identified, the API presumes the previous deployment to production was a failure, marking that deployment as the failure and the hotfix as the success/fix.
Revert: if a pull request is a revert pull request (as generated by default by most Git providers), the API identifies the pull request it reverts. If the original pull request and the revert pull request are in different deployments, Gearset marks the deployment containing the original as the failure and the revert as the success/fix.
Rollback: if a deployment is rolled back using Gearset's rollback functionality, the original deployment that was rolled back is marked as the failure and the rollback as the success/fix.
Using these heuristics, the Reporting API is able to identify failures for the purpose of measuring a team's change failure rate and time to restore.
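As an illustration only (this is a simplified sketch, not Gearset's actual implementation), the three heuristics can be thought of as a classification step over deployments; all field and function names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Deployment:
    """Hypothetical deployment record; field names are for illustration only."""
    id: str
    is_hotfix: bool                           # PR merged directly into production
    reverted_deployment_id: Optional[str]     # deployment containing the reverted PR
    rolled_back_deployment_id: Optional[str]  # deployment undone by a Gearset rollback

def inferred_failure(fix: Deployment,
                     previous_production_id: Optional[str]) -> Optional[str]:
    """Return the ID of the deployment this one implies was a failure, if any."""
    if fix.is_hotfix:
        # Hotfix heuristic: presume the previous production deployment failed.
        return previous_production_id
    if fix.reverted_deployment_id is not None:
        # Revert heuristic: only applies when the original PR and the revert
        # landed in different deployments.
        return fix.reverted_deployment_id
    if fix.rolled_back_deployment_id is not None:
        # Rollback heuristic: the rolled-back deployment is the failure.
        return fix.rolled_back_deployment_id
    return None  # not a fix, so no failure is inferred
```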
Feedback
If you have thoughts or feedback on this document or the Reporting API functionality, please get in touch with us via Intercom. We'd love to hear your suggestions. To explore the available Reporting API commands, you can access the API reference here.