Getting started with the Gearset Reporting API
Written by Julian Wreford

Considerations and requirements

  • You'll need either a Teams Automation Platform license or an Enterprise Automation Platform license to use this feature.

  • Three of the four metrics rely on using the Gearset pipelines feature with the promotion branch strategy.

  • The heuristics (detailed below) use data from the underlying version control system (VCS) to calculate the values that are returned. The Gearset user whose token is used to call the endpoints must therefore have an active, authenticated connection to the relevant VCS configured in Gearset.

The Gearset Reporting API allows you to get information to measure your DevOps performance. It presents a wide range of information to enable the calculation of DORA metrics for pipelines. In this walkthrough, we'll be using the Reporting API's dynamic documentation to query it.

The DORA metrics

DORA (DevOps Research and Assessment) is a group of four metrics, originally pioneered by the team at Google, that has become the standard way to measure DevOps performance:

  • Deployment frequency — how often an organization successfully releases to production.

  • Lead time for changes — the amount of time it takes a pull request to get into production.

  • Time to restore — the time it takes to recover from a failure in production.

  • Change failure rate — the percentage of deployments causing a failure in production.

The first two metrics measure the speed of our release pipeline, while the other two focus on its stability. They're a crucial way for team leads and managers to understand their DevOps process.

Our aim with the Gearset Reporting API is to provide a summary of the DORA metrics, but also to go further and allow you to explore your own data to dig into the resilience and velocity of your DevOps process. With this in mind, for each metric, we expose two API endpoints — one providing the raw data that can be used to calculate the metric, and another that presents aggregate information required to directly plot the DORA metric on a line graph.

Querying the Reporting API — lead time for changes

To start with, you need an API access token, which you can generate by following our tutorial Creating a Gearset API access token.

First, we need to get the ID of the pipeline. To do this, go to the pipelines page and copy the required IDs from the browser's URL bar. For lead time for changes we need the pipeline ID, but some other metrics need the environment ID instead.

Once we've copied the pipeline ID, we can create a request using the Reporting API dynamic documentation. First, we log in with our access token.

Enter the authorization token that we got earlier, prefixed with the word "token".
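Outside the dynamic documentation, this same value is sent as an HTTP Authorization header. A minimal Python sketch, with a placeholder standing in for your real token:

    # The value entered in the dialog: the word "token", a space, then your access token
    headers = {"Authorization": "token <your-access-token>"}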

Now we can go to the Lead time for changes section, and click on the "Try it out" button.

We then fill in the ID and the time frame to query, and click "Execute". Please note that the StartDate and EndDate fields should be in UTC, and end with a capital Z to signify this.

This will send off a request to the Reporting API. Once executed, it presents the curl command that was used (so you can use it elsewhere) and the response. This can be copied or downloaded.
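If you'd like to reproduce the request from your own code instead, here is a minimal Python sketch using the requests library. The URL below is an assumption for illustration only; copy the exact URL and parameter names from the curl command the dynamic documentation generates.

    import requests

    # Assumption: illustrative endpoint; take the real URL from the generated curl command.
    url = "https://api.gearset.com/public/reporting/lead-time"

    response = requests.get(
        url,
        headers={"Authorization": "token <your-access-token>"},
        params={
            "PipelineId": "<your-pipeline-id>",
            # StartDate and EndDate must be in UTC and end with a capital Z
            "StartDate": "2024-01-01T00:00:00Z",
            "EndDate": "2024-03-31T23:59:59Z",
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json())  # the raw pull request data behind the metric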

A similar process is used to query the aggregate endpoint for lead time. This time, instead of returning a list of pull requests, it returns a dictionary of "end" environments in the pipeline that each contain groups of data for the mean, maximum, and minimum lead time. They can be directly plotted onto a graph using other tools.
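For illustration only (the field names here are assumptions, not the documented schema), an aggregate response is shaped roughly like this, with one entry per "end" environment:

    {
      "Production": {
        "Mean": "...",
        "Maximum": "...",
        "Minimum": "..."
      }
    }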

The process we used to query the API for Lead time for changes is the same as the one used to get the other three DORA metrics.

How does the Gearset Reporting API identify failures?

First, it's important to note that we present two different types of failed deployment, depending on which endpoints are being used.

Under deployment frequency, the status of a deployment reflects whether that deployment was completed, i.e. whether its changes were applied to the target. This is the same status you'll see on the deployment history page.

By contrast, a failure for the reliability endpoints is one where the deployment itself completed successfully (i.e. a change was made to the org as expected), but the changes caused an issue in the Salesforce org, such that a fix was needed to restore the correct behavior. So when we talk about change failure rate and time to restore, we're not asking whether the deployment failed, but whether the changes in the deployment were correct or failed in some way.

The Gearset Reporting API uses heuristics to detect whether a set of changes was a failure. These heuristics are as follows:

  • Hotfix: if a pull request is merged directly into the production branch, we treat it as a hotfix. When a hotfix is identified, the API presumes that the previous deployment to production was a failure: we mark that deployment as a failure, and the hotfix as the success/fix.

  • Revert: if a pull request is a revert pull request (as generated by default by most git providers), the API identifies the pull request it reverts. If the original pull request and the revert pull request are in different deployments, Gearset marks the deployment containing the original as a failure and the revert as the success/fix.

  • Rollback: if a deployment is rolled back using Gearset's rollback functionality, then the original deployment that was rolled back is marked as a failure and the rollback is the success/fix.

Using these heuristics, the Reporting API is able to identify failures for the purpose of measuring a team's change failure rate and time to restore.
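To make the logic concrete, here is a rough sketch of how such heuristics could be expressed in Python. This is a mental model only, not Gearset's actual implementation; the PullRequest and Deployment records, their field names, and the assumption that deployments arrive in chronological order are all hypothetical.

    from dataclasses import dataclass

    @dataclass
    class PullRequest:
        number: int
        target_branch: str
        came_via_pipeline: bool        # True if promoted through earlier pipeline environments
        reverts_pr: int | None = None  # set when this PR is a git revert of another PR

    @dataclass
    class Deployment:
        id: str
        pr_numbers: list[int]
        rolled_back: bool = False  # True if undone via Gearset's rollback feature

    def classify_failures(deployments: list[Deployment],
                          prs: dict[int, PullRequest],
                          production_branch: str) -> set[str]:
        """Return the ids of deployments the heuristics would mark as failures."""
        failures: set[str] = set()
        deploy_of_pr = {n: d.id for d in deployments for n in d.pr_numbers}
        for i, d in enumerate(deployments):
            # Rollback heuristic: a rolled-back deployment is a failure;
            # the rollback deployment itself is the success/fix.
            if d.rolled_back:
                failures.add(d.id)
            for n in d.pr_numbers:
                pr = prs[n]
                # Hotfix heuristic: a PR merged straight into production marks the
                # previous production deployment as a failure; the hotfix is the fix.
                if (pr.target_branch == production_branch
                        and not pr.came_via_pipeline and i > 0):
                    failures.add(deployments[i - 1].id)
                # Revert heuristic: if the reverted PR shipped in a different
                # deployment, that deployment is a failure; the revert is the fix.
                if pr.reverts_pr is not None:
                    original = deploy_of_pr.get(pr.reverts_pr)
                    if original is not None and original != d.id:
                        failures.add(original)
        return failures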

Reporting API v2

The Reporting API v2 was introduced to support querying larger amounts of data, spanning longer durations, than the Reporting API v1 supports. If you find that Reporting API v1 requests fail with timeout errors, the Reporting API v2 can be used instead. The same data can be retrieved with both versions of the Reporting API, but their usage differs.

The Reporting API v2 uses the concept of operations: an asynchronous API that allows you to request some data and retrieve it later, once it's ready. It works as follows (see the Python sketch after these steps):

  1. Make a POST request to one of the Reporting API v2 endpoints (e.g. Lead time for changes) to start the operation. This endpoint returns an object with an OperationStatusId property; make a note of it.

  2. Make a GET request to the Operation status endpoint with the above OperationStatusId to check the progress of the operation. The endpoint can be polled every 5 seconds to check the current status. When the operation completes, the endpoint returns an object with an OperationResultId property; make a note of it.

  3. Make a GET request to the Operation result endpoint with the above OperationResultId to retrieve the result of the operation.

To switch to the Reporting API v2, follow the link to the Reporting API dynamic documentation and select Reporting API v2 from the Select a definition dropdown in the top-right of the page.

The dynamic documentation automatically sets the necessary api-version: 2 header name and value. If you're using the Reporting API v2 from a third-party application or in code, you'll need to set this header explicitly in order to use the correct API version.
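Putting the three steps together, here is a minimal Python sketch of the whole v2 flow. The endpoint paths and response field names below are assumptions for illustration; copy the real URLs from the curl commands the dynamic documentation generates.

    import time
    import requests

    BASE = "https://api.gearset.com"  # assumption: take real URLs from the docs
    HEADERS = {
        "Authorization": "token <your-access-token>",
        "api-version": "2",  # required when calling the v2 API from code
    }

    # 1. Start the operation (illustrative path).
    start = requests.post(
        f"{BASE}/public/reporting/lead-time",
        headers=HEADERS,
        json={
            "PipelineId": "<your-pipeline-id>",
            "StartDate": "2024-01-01T00:00:00Z",  # UTC, ending with a capital Z
            "EndDate": "2024-03-31T23:59:59Z",
        },
        timeout=60,
    )
    start.raise_for_status()
    status_id = start.json()["OperationStatusId"]

    # 2. Poll the operation status, no more than once every 5 seconds.
    while True:
        status = requests.get(f"{BASE}/public/operations/{status_id}/status",
                              headers=HEADERS, timeout=60).json()
        if status["Status"] in ("Succeeded", "Failed"):
            break
        time.sleep(5)

    if status["Status"] == "Failed":
        raise RuntimeError(status["Error"])

    # 3. Retrieve the result of the operation.
    result_id = status["OperationResultId"]
    result = requests.get(f"{BASE}/public/operations/{result_id}/result",
                          headers=HEADERS, timeout=60)
    result.raise_for_status()
    print(result.json())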

Starting an operation — Lead time for changes

To start with, you need an API access token, which is used to authorize the request. See the Querying the Reporting API section above for how to do this, as well as where to find your pipeline ID.

Now we can go to the Lead time for changes section, and click on the Try it out button.

We then fill in the pipelineId and the time frame to query, and click Execute. Please note that the StartDate and EndDate fields should be in UTC, and end with a capital Z to signify this.

This will send off a request to the Reporting API v2 to start the Lead time for changes operation. Once executed, it presents the curl command that was used (so you can use it elsewhere) and the response. This can be copied or downloaded.

Take note of the OperationStatusId, as this is the value we'll use to poll the status of the operation.
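Illustratively (the exact shape may differ), the response body is an object like:

    {
      "OperationStatusId": "..."
    }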

Checking the status of an operation

We can now use the Operation status endpoint to poll for progress and check the status of the operation.

Click the Try it out button, enter the OperationStatusId from the above response, and then click Execute.

The response we receive back indicates the status of the operation. While the status is Running, we continue polling this status endpoint until it becomes either Succeeded or Failed.

To avoid being rate limited, don't poll this endpoint more than once every 5 seconds.

Once the operation has completed, the endpoint returns an object with a status of Succeeded and an OperationResultId property. Note this value, as it's what we'll use to retrieve the results of the operation.

If the endpoint returns a status of Failed, it will include an Error property containing a message describing the error.
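For illustration, using the property names described above (the exact schema may differ), a completed operation returns something like:

    {
      "Status": "Succeeded",
      "OperationResultId": "..."
    }

while a failed operation returns something like:

    {
      "Status": "Failed",
      "Error": "A message describing the error"
    }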

Retrieving the result of an operation

We can now use the Operation result endpoint to retrieve the result of the operation.

Click the Try it out button, enter the OperationResultId from the above response, and then click Execute.

The response we receive back contains data for the Lead time for changes operation that was started in the first request we made. The results can be copied to the clipboard or downloaded using the buttons in the bottom-right of the response area.

Feedback

If you have thoughts or feedback on this document or the Reporting API functionality, please get in touch with us via Intercom. We'd love to hear your suggestions. To look at the available Reporting API commands, you can access the API reference here.
