Pipeline setup - implications of in-progress work

Key considerations when deciding whether to refresh all environments prior to setting up Gearset Pipelines

Written by George Marino

When initially setting up a pipeline, one major consideration is in-progress (in-flight) work in your org environments. There are a couple of methods to resolve this, and this article explores each of them so you can understand which best applies to you.

As an example, let's say you have three environments in your upcoming pipeline: Dev, QA, and Production. These environments aren't currently in sync, and there are 10 user stories and their components sitting in your long-standing QA environment. They've yet to make their way to Production, and you're wondering how these components will be affected once you set up a pipeline.

A Gearset Pipeline is an automated workflow. It's designed to be an end-to-end solution for getting metadata all the way from Dev to Production. Each user story is split into its own feature branch, and promoted through each environment in turn. Considering our example above, if you're mid-sprint, and there are metadata changes at QA, these likely won't be segregated into their own feature branches, nor will they be considered by the pipeline.

To add to this, once a pipeline is set up, we recommend that no changes happen outside of it (no more org-to-org deployments), as they would push your environments out of sync with your repository. Because of this, we need to pay careful attention to those user story components and consider how to incorporate them into the upcoming pipeline.

Method 1 - Refresh all environments (recommended)

Release any in-progress changes to Production and refresh all lower environments

This is the recommended method for a successful pipeline setup. It removes the need to consider in-progress changes, as there won't be any remaining: everything has been released, and the lower environments have been refreshed. Everything is in sync, which is the ideal position for pipeline implementation.

Method 2 - No refresh possible

Manually sync all environments and carve out in-progress work into separate features

This method isn't recommended because, depending on the amount of in-progress work, it can become a significant effort for your team. If you're currently not in a position to follow method 1, our first suggestion would be to push back your pipeline implementation until you are. But if you want to get your pipeline set up and can't wait for method 1 to become viable, you can consider this method.

There are two major considerations to make in this case:

  1. In-flight work needs to be accounted for - Any work that is currently in flight (hasn't been pushed through to Production) will need to be manually accounted for before you set up the Pipeline.

  2. Out-of-sync environments causing unpredictable behaviour - The Pipeline is built on Production being perfectly in sync with the Main branch, in accordance with your metadata filter. All subsequent environment branches are spun off the Main branch, meaning everything is in sync from the start. Not refreshing means there will be unique differences at each environment/stage of the Pipeline. It's highly possible that these differences cause problematic validations/deployments, which then need to be troubleshot individually. You may not hit these errors early on in the process but could hit them towards the end (Prod), as Prod differs from those earlier environments.

The rest of this article provides our recommendations for both of these considerations.

Problem 1: In-flight work needs to be accounted for

This section details your options for managing in-flight work, using the example above: 10 user stories sitting at QA. You'll need a repository in place and seeded before following the steps below. This document walks you through creating a repository, with supplemental pipeline-specific information in this document.

If you're using a ticketing system to track individual user stories, you should already have a clear picture of the 10 features currently sitting in QA and which components make up each feature. If you don't, you'll need to examine your QA environment and determine this manually. Once you have a good idea of these features, you can proceed with one of the following options:

Option 1 - Feature separation (recommended)

This option is recommended as it maintains the option for individual features to progress at their own pace throughout the pipeline, which is fundamentally how Pipelines is designed to work.

  1. Create a feature branch from Main.

  2. Run a metadata comparison between QA (source) and your new feature branch (target).

  3. Select the components that make up a single feature's work.

  4. Proceed and commit those components to your feature branch.

  5. Repeat steps 1 through 4 until all in-progress work is contained within separate feature branches.

  6. (Post-setup of Pipelines) Open a PR from each feature branch to the first environment in your Pipeline, then propagate it through each stage until the feature reaches the environment it sat at previously (QA). As each PR is merged, the next PR is automatically opened against the subsequent environment, ready for further propagation when you're ready.
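Under the hood, the branch-per-feature flow in steps 1 to 5 corresponds to ordinary git operations. The sketch below is illustrative only: Gearset drives these steps through its UI, and the branch, class, and commit names here are hypothetical stand-ins. It runs in a throwaway local repository so you can try it safely.

```shell
set -e

# Work in a throwaway repo so the sketch is self-contained
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "seed Main"

# 1. Create a feature branch from Main (branch name is hypothetical)
git checkout -q -b feature/user-story-1

# 2-4. After comparing QA (source) against the branch (target), the
#      selected components are committed; this file is a stand-in
mkdir -p force-app/main/default/classes
echo "public class QuoteService {}" \
    > force-app/main/default/classes/QuoteService.cls
git add .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "User story 1: QuoteService changes"

# 5. Repeat from step 1 for each remaining user story
# 6. After pipeline setup, open a PR from each feature branch into the
#    first environment branch and propagate it stage by stage
git branch --list "feature/*"
```

In a real setup you'd also push each feature branch to your remote (e.g. `git push -u origin feature/user-story-1`) so Gearset can open pull requests against your environment branches.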

Option 2 - Feature combination

This option is simpler than option 1; however, you'll lose the feature separation. The entirety of the in-flight work is treated as a single "release" user story that needs to propagate as a whole.

  1. Create a feature branch from Main.

  2. Run a metadata comparison between QA (source) and your new feature branch (target).

  3. Select the components that make up all 10 features' work.

  4. Proceed and commit those components to your feature branch.

  5. (Post-setup of Pipelines) Open a PR from the feature branch to the first environment in your Pipeline, then propagate it through each stage until it reaches the environment it sat at previously (QA). As each PR is merged, the next PR is automatically opened against the subsequent environment, ready for further propagation when you're ready.

Following either option, you'll have your new Pipeline set up with all features accounted for within it.

Important to keep in mind:
When following the final step of either option (opening the PR against the first environment), you may see that your pull request validation contains 0 components. That's because these components already exist in that environment, so there are no differences to deploy. This exercise brings the environment branch into sync and enters these features into the pipeline for further propagation.

Problem 2: Out of sync environments causing unpredictable behaviour

As a refresh isn't possible, we just need to ensure that as much as possible is in sync. This will minimise the impact caused by environments being out of sync.

Using the metadata compare & deploy functionality, you can compare all the metadata in each environment to Production and see which components are out of sync. You can then select and deploy this metadata. As there may be a significant amount of metadata out of sync, this will likely take several attempts; consider splitting your deployments into multiple stages to improve their chances of success.

You need to take care not to overwrite any work in progress. Syncing as much as possible is the best step to take prior to Pipeline setup.

Questions?

We hope this article helps you understand the considerations involved in refreshing (or not refreshing) your environments before setting up a pipeline. If you have any questions, please get in touch via the in-app chat.
