
Measuring your DevOps performance

Leverage the DORA metrics to track your DevOps performance in Pipelines

Written by Brandon Chin Loy
Updated over 3 weeks ago

Understanding your DevOps performance is crucial for continuous improvement. This dashboard brings key DORA (DevOps Research and Assessment) metrics directly into Gearset, offering insights into your team's performance.

The DORA metrics

DORA is a set of metrics originally pioneered by the team at Google, and has become the industry standard for measuring DevOps performance:

  • Deployment frequency β€” how often an organization successfully releases to production.

  • Lead time for changes β€” the amount of time it takes a pull request (PR) to get into production.

  • Time to restore β€” the time it takes to recover from a failure in production.

  • Change failure rate β€” the percentage of deployments causing a failure in production.

The first two metrics measure the speed of your release pipeline, while the other two focus on its stability. They're a crucial way for team leads and managers to understand their DevOps process. You can read more information on our blog.
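If it helps to see those definitions concretely, below is a minimal sketch (in Python) of how the four metrics could be derived from a period's deployment and PR records. The record fields and values are illustrative assumptions, not Gearset's actual data model:

```python
from datetime import datetime
from statistics import mean

# Illustrative records only - these field names are assumptions,
# not Gearset's actual data model.
deployments = [
    {"finished": datetime(2024, 5, 1, 10), "succeeded": True,  "caused_failure": False},
    {"finished": datetime(2024, 5, 3, 15), "succeeded": True,  "caused_failure": True},
    {"finished": datetime(2024, 5, 6, 9),  "succeeded": True,  "caused_failure": False},
]
pull_requests = [
    {"created": datetime(2024, 4, 28, 9), "deployed_to_prod": datetime(2024, 5, 1, 10)},
    {"created": datetime(2024, 5, 2, 11), "deployed_to_prod": datetime(2024, 5, 3, 15)},
]
incidents = [
    {"started": datetime(2024, 5, 3, 15), "restored": datetime(2024, 5, 4, 9)},
]
period_days = 7

# Deployment frequency: successful production deployments per day.
successful = [d for d in deployments if d["succeeded"]]
deployment_frequency = len(successful) / period_days

# Lead time for changes: average hours from PR creation to production.
lead_time_hours = mean(
    (pr["deployed_to_prod"] - pr["created"]).total_seconds() / 3600
    for pr in pull_requests
)

# Time to restore: average hours from a production failure to recovery.
time_to_restore_hours = mean(
    (i["restored"] - i["started"]).total_seconds() / 3600 for i in incidents
)

# Change failure rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

print(deployment_frequency, lead_time_hours, time_to_restore_hours, change_failure_rate)
```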

Getting started

  • You'll need either a Teams Automation Platform licence or an Enterprise Automation Platform licence to use this feature.

  • The dashboard can only be accessed by team owners.

  • These metrics rely on the usage of the Gearset pipelines feature with the promotion branch strategy.

  • For more in-depth information about the underlying data that powers this dashboard, please refer to the Gearset Reporting API documentation.

To access the dashboard:

1) Navigate to your account settings by clicking the Gearset icon in the top right and clicking My account.

2) Under Reporting, navigate to the DevOps performance page.

Navigating the dashboard

Filters

There are three options to customise your view of the data:

  • Pipeline: Select the pipeline you wish to report on. This is particularly useful for teams with multiple pipelines for different projects or teams, allowing you to focus on the relevant data.

  • Date Range: Choose the time period you want the data to cover. This filter impacts all metrics displayed on the dashboard.

  • Step: This filter specifically controls the granularity of the data presented in the graphs (the horizontal axis). You can choose to visualise the data split by daily, weekly or monthly intervals.

When you first open the dashboard, it automatically loads with some default selections to give you an immediate view of your performance. The first pipeline in your list will be chosen (or your only pipeline, if you just have one). You'll see data from the last 7 days, up to and including today, with the graph split into daily intervals. You are free to adjust these filters thereafter.
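As a rough sketch of those defaults (assuming a 7-day window that includes today; the names below are purely illustrative, not part of Gearset's API):

```python
from datetime import date, timedelta

# Sketch of the default filter selections described above.
today = date.today()
default_filters = {
    "pipeline": "first pipeline in your list",
    "start_date": today - timedelta(days=6),  # last 7 days, including today
    "end_date": today,
    "step": "daily",
}
print(default_filters)
```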

Once you've adjusted your filters, click "Apply" to retrieve new data.

Deployment overview

This section gives you a quick snapshot of your team's performance within the selected date range.

Here you'll find:

  • Total deployments: The total number of successful deployments that occurred within your selected date range (including the start and end dates).

  • Average deployments per day: Calculated as the total number of successful deployments divided by the number of days in your chosen date range.

  • Deployment success rate: This is the percentage of successful deployments out of the total number of deployments.

  • Lead time for changes: This is the average time it takes for a PR to get into production, based on all PRs merged into their final environment during the selected time period.

It's important to note that deployment metrics are based on production deployments. Also, a deployment is counted as "successful" only if all intended metadata (and data, if applicable) was deployed.

A deployment is considered 'partially successful' when metadata deploys to the target, but the configuration data (like CPQ) or Vlocity fails to deploy. For the purpose of this dashboard, all partially successful deployments are counted as failures.
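As a minimal sketch of how these overview figures fit together, assuming a simple list of deployment records (the field names and statuses below are illustrative, not Gearset's data model) and treating partially successful deployments as failures:

```python
from datetime import date

# Hypothetical deployment records for a 7-day date range.
deployments = [
    {"date": date(2024, 5, 1), "status": "successful"},
    {"date": date(2024, 5, 2), "status": "partially successful"},  # counted as a failure
    {"date": date(2024, 5, 4), "status": "successful"},
    {"date": date(2024, 5, 6), "status": "failed"},
]
days_in_range = 7  # includes the start and end dates

successful = [d for d in deployments if d["status"] == "successful"]

total_deployments = len(successful)                  # 2
average_per_day = total_deployments / days_in_range  # about 0.29
success_rate = len(successful) / len(deployments)    # 0.5

print(total_deployments, round(average_per_day, 2), f"{success_rate:.0%}")
```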

For each metric in this section, you'll see a percentage change.

This comparison shows how your current values (for the selected date range) compare against the immediately preceding period of the same length:

  • If you select a two-week date range, the dashboard will show your values for that period and compare them to the previous two-week period.

  • Similarly, if you select a 10-day period, the comparison will be against the preceding 10-day period.
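As a sketch of how this comparison window could be worked out, the preceding period is the same length as your selection and ends the day before it starts (the dates and values below are illustrative):

```python
from datetime import date, timedelta

start, end = date(2024, 5, 8), date(2024, 5, 21)  # a two-week selection
length = (end - start).days + 1                   # 14 days, inclusive

# The preceding period ends the day before the selection starts.
prev_end = start - timedelta(days=1)
prev_start = prev_end - timedelta(days=length - 1)

def percentage_change(current: float, previous: float) -> float:
    """Change of the current period relative to the preceding one."""
    return (current - previous) / previous * 100

print(prev_start, prev_end)       # 2024-04-24 to 2024-05-07
print(percentage_change(12, 10))  # 20.0, i.e. 20% more than the previous period
```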

Deployment frequency

Deployment frequency metrics provide a clear overview of how often your team is releasing changes to production:

This graph displays the number of successful deployments (on the vertical axis) against the chosen time period (on the horizontal axis).

The Step filter at the top controls how the time is broken down on the horizontal axis for all graphs on the dashboard.
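As an illustrative sketch (not Gearset's implementation), grouping deployments into the chosen interval might look something like this:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical dates of successful production deployments.
deployment_dates = [date(2024, 5, 1), date(2024, 5, 1), date(2024, 5, 9), date(2024, 5, 20)]

def bucket(d: date, step: str) -> date:
    """Map a date onto the start of its daily, weekly or monthly interval."""
    if step == "daily":
        return d
    if step == "weekly":
        return d - timedelta(days=d.weekday())  # start of the week
    if step == "monthly":
        return d.replace(day=1)                 # start of the month
    raise ValueError(f"unknown step: {step}")

counts = Counter(bucket(d, "weekly") for d in deployment_dates)
print(sorted(counts.items()))  # deployments per interval, i.e. the bars on the graph
```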

Lead time for changes

Lead time for changes measures the amount of time it takes a pull request to get into production from when it is first created in your pipeline. You will see a graph illustrating your lead time trend over time:

This graph displays the average, minimum and maximum lead time (on the vertical axis) against the chosen time period (on the horizontal axis).

The "Breakdown across static environments" section provides the average times PRs spend waiting against each static environment in your pipeline, allowing you to identify potential bottlenecks in your pipeline. As of now, this is only available for production, but will be expanded to all static environments in the future.

Bugs overview

This section gives an overview of the number of bugs reported by your team within Pipeline promotions.

This is different from the Change failure rate and Mean time to restore metrics currently available in the Reporting API, which are calculated using our own heuristics (you can read more about them here).

While those heuristics fit certain cases, different teams have different views on what makes a deployment to production a "failure".

You can read more about how your team can utilise change failure management in this article.

Change failure rate

The percentage of deployments that introduced a bug.

Mean time to restore

Tracks how long it takes to recover from a deployment that introduced a bug. The timer starts when the affected deployment is released (not when the deployment was marked as a "failure") and ends when the corrective deployment is completed.
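As a minimal sketch of these two definitions, assuming hypothetical promotion records that note whether a bug was reported against a deployment and when the corrective deployment completed (this is not Gearset's data model):

```python
from datetime import datetime
from statistics import mean

deployments = [
    {"released": datetime(2024, 5, 1, 10), "bug_reported": False, "restored": None},
    {"released": datetime(2024, 5, 3, 15), "bug_reported": True,
     "restored": datetime(2024, 5, 4, 9)},
    {"released": datetime(2024, 5, 6, 12), "bug_reported": False, "restored": None},
]

# Change failure rate: percentage of deployments that introduced a bug.
failures = [d for d in deployments if d["bug_reported"]]
change_failure_rate = len(failures) / len(deployments) * 100  # about 33.3%

# Mean time to restore: from the affected deployment's release (not from when it
# was marked as a failure) to completion of the corrective deployment.
mttr_hours = mean(
    (d["restored"] - d["released"]).total_seconds() / 3600 for d in failures
)

print(f"{change_failure_rate:.1f}%, {mttr_hours:.1f} hours to restore")
```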

Caveats and limitations

There are a few things worth keeping in mind when viewing DevOps performance metrics.

Upstream back-propagated PRs

Upstream back-propagated PRs that are deployed to neighbouring production environments are included in DORA metrics.

Multiple Pipelines with shared repository

This is less common, but if you set up multiple pipelines and webhooks for the same repository, duplicate PRs will be created, which, when deployed, will count towards your DORA metrics.

Layered modules

Layered modules pipelines are currently unsupported by the performance page.

Pipelines with disabled CI jobs

DORA metrics rely on activity within CI jobs. Since disabled CI jobs don't do anything, you won't see any metrics for them.

Paused pipelines

As with disabled CI jobs, DORA metrics don't track paused pipelines.

Feedback

We are actively collecting feedback around the metrics you wish to see. If you have any thoughts or suggestions, please get in touch with us via Intercom.
