Public Cloud Group
Posted on October 14, 2022
Written by Jakub Grabalski
Introduction
One of our customers asked us for support in migrating their CI/CD processes and pipelines. The project involved moving all their Azure DevOps pipelines to GitLab CI, sharing knowledge and best practices, and providing a seamless transition to the new solution without impacting the tech team's capacity and development ability.
As the Azure DevOps pipelines had been built more than a year earlier, the customer was aware of their technical debt. For this reason, they also asked us to review and optimize the pipelines, implementing all the CI/CD best practices.
They wanted to switch completely to GitLab CI and stop using Azure DevOps for their pipelines, with as little interruption as possible. Their team continued to develop new features during the sprints! The plan was to switch tools over a single cutover night: a code freeze at the end of day one, changing the repository endpoints overnight, and starting to use GitLab CI on day two.
Leading the project as an experienced Azure DevOps and GitLab CI consultant, I also had the opportunity to dive deeper into the GitLab CI world. In this article, I'd like to share some highlights and learnings from this successful migration.
What about the tech behind the project?
The original pipeline design included a set of instructions that compiled, tested and built a monolithic dotNET application with a nodeJS frontend. Another pipeline then packed all components into a single Docker image with all required dependencies. The resulting image was pushed to a private Docker repository. Lastly, the Docker image was deployed by Ansible scripts, via an external pipeline, onto an environment-specific VM instance.
The pipeline was designed to include build, test and deploy steps. There was also an additional pipeline with automated nightly QA tests. Test pipelines generated a coverage report. Each pipeline result (failure or success) was forwarded to a Slack channel.
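In GitLab CI terms, a build-and-package flow like this can be sketched roughly as follows. This is an illustrative sketch, not the customer's actual configuration: the job names, stage names and dotnet commands are assumptions, while the CI_REGISTRY* variables are the ones GitLab predefines for its built-in container registry.

```yaml
stages:
  - build
  - package

# Compile and test the application; the dotnet commands are illustrative.
build-app:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - dotnet restore
    - dotnet test
    - dotnet publish -c Release -o publish/
  artifacts:
    paths:
      - publish/

# Pack everything into a single Docker image and push it to a private registry.
package-image:
  stage: package
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
```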
Within the project there were the following environment configurations:
- 1 development environment
- 16 feature branch environments for development purposes
- 3 staging environments
Deployments to production were triggered manually by the operations team, not relying on any CI/CD process.
But… What is GitLab CI? What is Azure DevOps?
In short, they are CI/CD tools for automating build, test and delivery or deployment processes.
And… What does the CI/CD acronym stand for?
The "CI" refers to continuous integration, which is an automation process for developers. With "CI", new code changes to an app are regularly built, tested and merged to a shared repository. The "CD" in CI/CD is an acronym for continuous delivery and/or continuous deployment.
Continuous delivery ("CD") means that changes to an application code are automatically tested and uploaded to a repository where they can then be manually deployed to a live environment by the operations team.
Continuous deployment (also "CD") refers to an automatic release of the application code from the repository to a live environment. This process does not require an operations team to perform a manual action.
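As a minimal illustration, a GitLab CI pipeline covering these stages might look like this (the job names and script commands are placeholders of my own, not from any real project):

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."

test-job:
  stage: test
  script:
    - echo "Running the test suite..."

# Continuous delivery: the deploy job exists in the pipeline but waits
# for a manual action; drop `when: manual` for continuous deployment.
deploy-job:
  stage: deploy
  script:
    - echo "Deploying to the live environment..."
  when: manual
```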
GitLab CI and Azure DevOps are CI/CD solutions from different vendors, and offer different capabilities regarding syntax, internal tools (code repository, testing tools) and external integrations. Other well-established CI/CD tools on the market are GitHub Actions, Jenkins, Bitbucket Pipelines, and various solutions in the AWS Marketplace.
Why exactly migrate to GitLab CI?
In this case, the decision had already been made by the customer. The main reason they wanted to migrate was to leverage the benefits of the GitLab CI including integrated CVE and license scanning. Two smaller repositories with CI/CD pipelines had already been migrated by the customer.
Personally, I would choose this CI/CD tool too (and I am not the only one at kreuzwerker - see this blog post)! GitLab is open-core software (open source with some paid options) and offers many functionalities that make it possible to collaboratively plan, build, secure and deploy software as a complete DevOps platform. It can store code in repositories, find vulnerabilities and secrets in the code, run tests and eventually build applications following all the CI/CD principles. If you would like to try GitLab CI, the cloud-hosted GitLab instance is free for individual users, with some restrictions (https://about.gitlab.com/pricing).
For additional security, you can deploy your own GitLab runners within your AWS infrastructure using simple tools provided by GitLab! If you want to build your own self-managed GitLab instance, you can deploy it with predefined Docker images, Helm charts or Linux packages.
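A self-hosted runner is configured through the gitlab-runner agent; after registering it against your instance, its config.toml looks roughly like this (a sketch: the name, token and concurrency values are placeholders):

```toml
# /etc/gitlab-runner/config.toml (sketch; values are placeholders)
concurrent = 4

[[runners]]
  name = "aws-docker-runner"
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"   # obtained during `gitlab-runner register`
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    privileged = false
```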
The customer's configuration was a cloud-hosted GitLab instance on the "Ultimate" plan, with private, AWS-hosted GitLab runners.
Journey started!
After a quick onboarding and introduction to the customer’s team, I was granted access to their GitLab CI and Azure DevOps repository and pipelines. We shared a common Slack channel for better team communication, knowledge sharing, status and budget updates.
I started by investigating the current solution: understanding the pipelines, how they are triggered, what the outcomes are, and what artefacts and reports are generated. Next came a few short sync-up and Q&A sessions with their development team. Once I had a complete understanding of the Azure DevOps pipelines, I started rewriting them on a mirrored repository in GitLab, preventing any disruption to the Azure DevOps workload.
Migration itself…
As the original Azure DevOps build and deployment pipelines were complex, with many conditions and checks, I prepared the main pipeline file (.gitlab-ci.yml), implemented the conditional logic and added child pipelines. This was slightly different from the Azure DevOps approach: there, a build or deployment pipeline could be triggered with conditions selected manually (by ticking boxes) before invoking a new run. In GitLab CI there is no such option by design. With child pipelines in place, I was able to optimize the build and deployment pipelines by avoiding unnecessary conditions, replicate the pipeline behaviour in GitLab CI and keep everything simple (KISS principle).
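A parent pipeline can replace those manual checkboxes with variable-driven rules that trigger child pipelines. A sketch of the pattern follows; the file paths and variable names (RUN_BUILD, RUN_DEPLOY) are illustrative, not the customer's real configuration:

```yaml
# .gitlab-ci.yml (parent pipeline) - sketch with illustrative names
build-pipeline:
  trigger:
    include: ci/build.gitlab-ci.yml   # child pipeline definition
    strategy: depend                  # parent reflects the child's status
  rules:
    - if: '$RUN_BUILD == "true"'      # variable set when running the pipeline

deploy-pipeline:
  trigger:
    include: ci/deploy.gitlab-ci.yml
    strategy: depend
  rules:
    - if: '$RUN_DEPLOY == "true"'
```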
Mirroring every step of the original pipelines into GitLab CI syntax and features was a challenging process, and we had to agree on some minor compromises. For example, the test coverage report of the nodeJS frontend was printed only in the CI/CD console, and the Slack integration came with different notification styles. As said, minor compromises.
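Printing coverage only to the console is less of a compromise than it sounds, because a GitLab CI job can extract the value from its own log with a coverage regex. A sketch (the yarn commands and the regex, which matches a typical coverage summary line, are illustrative):

```yaml
frontend-tests:
  image: node:16.13.1
  script:
    - yarn install --frozen-lockfile
    - yarn test --coverage
  # GitLab parses the job log with this regex and shows the matched
  # percentage next to the job and pipeline; the pattern is illustrative.
  coverage: '/Lines\s*:\s*(\d+\.?\d*)%/'
```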
As the application was built using dotNET libraries (Microsoft), using Azure DevOps (again, Microsoft) offered a smoother integration with its CI/CD pipelines, notably with test reports and dependency resolution. Resolving all the dotNET and nodeJS dependencies within the GitLab CI environment was indeed the most challenging part for me.
Additionally, their development team was using the Azure DevOps pipelines on a daily basis. They knew its UI and results, so I was expecting that a new UI and CI/CD solution might make the acceptance and adoption of the new tool take longer.
I had to perform a lot of pipeline runs and tests to ensure that everything was in good working order. If a run ended with an error or warning, I had to check the pipeline's raw output, review the artifacts produced and verify the build results; test and deploy reports needed to be checked too. All of that was done by hand, as at that point it was the only reliable way to debug the new pipelines and make them comparable to the Azure DevOps ones.
Why do I say it was sometimes complex?
First let me show you the examples below.
Azure DevOps pipeline
```yaml
- job: build_publish
  variables:
    yarn_config_cache: $(Pipeline.Workspace)/.yarn
  steps:
    - task: NodeTool
      inputs:
        versionSpec: "16.13.1"
      displayName: "Install Node.js"
    - task: Cache
      inputs:
        key: 'cache-2022-01-28 | yarn | "$(Agent.OS)" | app/yarn.lock'
        restoreKeys: |
          cache-2022-01-28 | yarn | "$(Agent.OS)"
        path: $(yarn_config_cache)
    - script: yarn install --frozen-lockfile
      workingDirectory: "$(Build.SourcesDirectory)/app"
    - script: yarn run build
      workingDirectory: "$(Build.SourcesDirectory)/app"
    - task: PublishPipelineArtifact
      inputs:
        targetPath: "$(Build.SourcesDirectory)/app_name/apps/build"
        artifactName: "app"
        artifactType: "pipeline"
```
GitLab CI pipeline
```yaml
image: node:16.13.1

variables:
  yarnCacheConfig: '.yarn/'

Build:
  script:
    - yarn set version v1.22.17
    - yarn install --frozen-lockfile --cache-folder $yarnCacheConfig
    - yarn run build
  artifacts:
    paths:
      - $CI_PROJECT_DIR/apps/build/app_name
```
The Azure DevOps pipeline sets the yarn version, installs dependencies, builds and publishes the app_name artefact in [n] lines and [m] steps. In contrast, in GitLab CI I could achieve the same result in just one step, by setting the correct nodeJS Docker image (image: node:16.13.1) and installing the yarn dependencies in the build step.
The additional yarn set version v1.22.17 line was added during the pipeline debugging process. The initial pipeline run ended with an error; I checked the raw pipeline output and realized that the chosen Docker image was automatically using the newest version of yarn.
Sounds easy... But there were a lot of environment and system variables specific to Azure DevOps, which required finding an equivalent in GitLab to preserve the pipeline's logic. To name one, replacing $(Build.SourcesDirectory) with $CI_PROJECT_DIR.
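A few of the approximate equivalents I relied on are listed below. The mappings are indicative only; the two platforms' variables do not always match one-to-one in semantics:

```yaml
# Azure DevOps                 ->  GitLab CI (approximate equivalents)
# $(Build.SourcesDirectory)    ->  $CI_PROJECT_DIR
# $(Build.BuildId)             ->  $CI_PIPELINE_ID
# $(Build.SourceBranchName)    ->  $CI_COMMIT_REF_NAME
# $(Build.SourceVersion)       ->  $CI_COMMIT_SHA
# $(Agent.OS)                  ->  (no direct equivalent; depends on the runner image)
```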
In total there were more than 4000 lines of the Azure DevOps pipeline code to review, optimize and rewrite to the GitLab CI.
As a bonus, I enabled a new feature which added rollback and redeploy options for all the deployed environments. By entering a variable input for a manually triggered deployment, the pipeline created a new (or updated an existing) development environment in the GitLab web console under the "Environments" section.
From there it was possible to check the environment status, open the environment via its URL, perform manual tests, redeploy the last commit or roll back to the previous release.
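In GitLab CI this is driven by the environment keyword on deployment jobs. A sketch of the pattern, with illustrative job names, scripts and URL (not the customer's actual setup):

```yaml
deploy-review:
  stage: deploy
  script:
    - ./deploy.sh "$CI_COMMIT_REF_SLUG"   # illustrative deploy script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.example.com  # illustrative URL
    on_stop: stop-review
  when: manual

# Stopping the environment from the "Environments" page runs this job.
stop-review:
  stage: deploy
  script:
    - ./teardown.sh "$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
```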
How did it end?
With success!
On handover day, I organised a live demo and a Q&A session with the developers to present the new CI/CD integration and clear up any doubts. When the switch was made a few days later and Azure DevOps was shut down for good, the developers seamlessly transitioned to the new solution, with an acceptance level - in the words of the DevOps Team Lead - beyond any expectation.
The cutover itself was performed by the customer's DevOps team alone. I was on standby to support them in case of any issues, but everything went smoothly!
(Not) Last words
I have written this blog post to share my experience and thoughts about this and similar CI/CD migrations. I enjoyed the project a lot, the migration process itself and the team I had the opportunity to work with. The customer trusted us; we delivered!
Of course, there were some bumps along the road, such as missing or no longer needed credentials, and legacy code chunks within the Azure DevOps pipelines that required refactoring or had simply been left behind. However, good communication and mutual understanding resolved all our challenges, and I'm sure there will be more projects like this to come!
For such migrations, a good understanding of the CI/CD process is required; the rest is only YAML syntax :)