How We Sped Up Rubocop Linting in our CI by 22x
Jc
Posted on May 12, 2023
At Jobber, we have been utilizing the GitHub merge queue as a way to run additional checks on code that is about to be merged - and we want this merge queue step to be fast (the target is under five minutes).
We realized it would be very useful to have our Rubocop linting run in the merge queue, particularly when there were rule changes or new custom rules added. The problem is that the linting step takes nearly 7 minutes to run on our largest codebase - much too long for our merge queue target.
Investigating Caching
The way Rubocop was being invoked in CI was with the following command:
bundle exec rubocop
But what about caching? Without explicit management of data from previous jobs, Rubocop would be starting from scratch on every CI run. Does it support caching, and could we leverage that?
It turns out that Rubocop actually has a solid caching implementation that takes care of all the heavy lifting, including cache invalidation (quoting the RuboCop documentation):
Later runs will be able to retrieve this information and present the stored information instead of inspecting the file again. This will be done if the cache for the file is still valid, which it is if there are no changes in:
- the contents of the inspected file
- RuboCop configuration for the file
- the options given to rubocop, with some exceptions that have no bearing on which offenses are reported
- the Ruby version used to invoke rubocop
- version of the rubocop program (or to be precise, anything in the source code of the invoked rubocop program)
The cache is automatically pruned based on file count:
Each time a file has changed, its offenses will be stored under a new key in the cache. This means that the cache will continue to grow until we do something to stop it. The configuration parameter AllCops: MaxFilesInCache sets a limit, and when the number of files in the cache exceeds that limit, the oldest files will be automatically removed from the cache.
This is amazing - a well thought-out cache invalidation strategy! The second quote, about changed files being stored under a new key, doesn’t really help us though - our CI’s cache mechanism is immutable.
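For reference, RuboCop’s caching is on by default and can be controlled from the command line. A quick local sketch (these flags exist in recent RuboCop versions; the default cache directory on our CI images is ~/.cache/rubocop_cache, though it can vary with XDG settings):

bundle exec rubocop --cache true              # explicit, but caching is already the default
bundle exec rubocop --cache false             # bypass the cache entirely
bundle exec rubocop --cache-root tmp/rubocop  # write the cache to a custom location
ls ~/.cache/rubocop_cache                     # inspect what has been cached so far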
Leveraging Rubocop Caching in CI
We can’t directly ask Rubocop what it’s going to do ahead of time (there’s no API for its caching behavior), so how do we deterministically generate a cache key for our immutable cross-workflow cache that changes in lock-step with Rubocop’s cache invalidation logic?
Periodic Invalidation
Can we side-step that problem and just re-generate the cache periodically? Maybe daily, or weekly, and re-use it across all CI runs? Sure! That would certainly help (see the sketch after this list) - but it has the following limitations:
- Cache hits decrease over time as files are modified. Probably not a problem unless a large swathe of the codebase is modified within the cache period (something like a linting autofix, or a refactor / rename).
- If Rubocop decides to invalidate the cache, you’ll be right back to full-length linting durations until the next cache period occurs. The most common trigger for this is a change to Rubocop configuration.
- The first run after each cache period will be full-length.
- Shortening the cache period to mitigate some of the above issues has the side effect of increasing the amount of cache storage consumed by your project.
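For illustration, a periodic key in CircleCI could look something like this sketch (not our actual config - the week number is written to a file because CircleCI cache keys can only checksum files):

- run:
    name: Generate cache period
    command: date +%G-%V > cache_period   # ISO year and week number, e.g. 2023-19
- restore_cache:
    keys:
      - &weekly_rubocop_cache_key rubocop-weekly-v1-{{ arch }}-{{ checksum "cache_period" }}
# ... run the full lint here ...
- save_cache:
    key: *weekly_rubocop_cache_key
    paths:
      - ~/.cache/rubocop_cache

The checksum changes once a week, so the first run of each week rebuilds the cache and every later run that week reuses it.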
Intelligent Dynamic Invalidation
What if we could integrate Rubocop’s internal cache invalidation logic with the CI’s cache invalidation logic? The limitations turn into a single bullet point:
- Cache hits decrease over time as files are modified. Probably not a problem unless a large swathe of the codebase is modified within the cache period (something like a linting autofix, or a refactor / rename).
Note that the CI service will typically expire a cache after a maximum number of days. In our case this happens every 15 days, and so there is a natural “reset” that catches the slow cache hit decline over time as files are modified.
Here's how Jobber is powering our CI cache invalidation with Rubocop’s logic!
Before you restore the rubocop cache directory (~/.cache/rubocop_cache), lint a single dedicated file using the exact same command and configuration that the full linting step uses. Inspect what Rubocop wrote into the cache directory, and generate your cache key as a hash of that information - at this point, proceed with the normal restore, run, persist pattern.
Here’s how you get the text you want to hash - assuming the dedicated file you lint for detection is highly unlikely to change, this output essentially represents a Rubocop cache key:
$ find ~/.cache/rubocop_cache -type f
/home/circleci/.cache/rubocop_cache/c21eac4b5c1ceb0445943396a341eadb756f46cf/7a1221dfb74d1bb683162bcc22951148cd32f1c9
Output that to a file (rubocop_cache_key) and hash it, combine it with other environment keys, and you get a robust cache key!
Example cache key:
rubocop-v1-{{ arch }}-ruby_<< pipeline.parameters.ruby_version >>-{{ checksum "rubocop_cache_key" }}
| Cache Key Part | Description |
|---|---|
| rubocop | The descriptor of the cache key - this one is intended to be unique for rubocop purposes. |
| v1 | A manual version number - bump this up when there are unexpected issues and you want a straightforward way to explicitly invalidate the cache. |
| {{ arch }} | CircleCI notation for the architecture, such as arch1-linux-amd64-6_85. |
| ruby_<< pipeline.parameters.ruby_version >> | The Ruby version - don’t try to share caches across Ruby versions. Rubocop would almost certainly invalidate the cache in this case as well, but our setup workflow detects the Ruby version and passes it onwards as a pipeline parameter, so we might as well bake it in. |
| {{ checksum "rubocop_cache_key" }} | This is both the “intelligent” and the “dynamic” part - it builds on the intelligent Rubocop invalidation logic, and is dynamic because it isn’t hashing text directly under source control. See the examples below for how to generate the rubocop_cache_key file. |
Putting It All Together
So now we have a suitable cache key - what does it look like in use in a CircleCI workflow? The following is a partial example of a CircleCI configuration file:
references:
  detect_rubocop_cache_key: &detect_rubocop_cache_key
    run:
      name: Detect rubocop cache key
      command: bundle exec rubocop example.rb >/dev/null 2>&1 && find ~/.cache/rubocop_cache -type f > rubocop_cache_key && cat rubocop_cache_key

  restore_rubocop_cache: &restore_rubocop_cache
    restore_cache:
      name: Restore rubocop cache
      keys:
        - &rubocop_cache_key rubocop-v1-{{ arch }}-ruby_<< pipeline.parameters.ruby_version >>-{{ checksum "rubocop_cache_key" }}

jobs:
  lint_rubocop:
    # executor configuration and the bundle_install reference are defined elsewhere in our config
    steps:
      - *bundle_install
      - *detect_rubocop_cache_key
      - *restore_rubocop_cache
      - run:
          name: Rubocop linting
          command: bundle exec rubocop
      - save_cache:
          name: Save rubocop cache
          key: *rubocop_cache_key
          paths:
            - ~/.cache/rubocop_cache
Note for Very Large Projects
If your file count is close to 20k, you’ll want to tune MaxFilesInCache to be your max file count plus a percentage to accommodate cache misses (files changing over time, between cache invalidations).
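A sketch of that tuning in .rubocop.yml (the 30,000 below is illustrative - say, a project with roughly 25k Ruby files plus headroom; 20,000 is RuboCop’s default):

# .rubocop.yml
AllCops:
  MaxFilesInCache: 30000   # default is 20000 - leave headroom above your actual file count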
Further Improvement Potential
Once you’ve optimized the amount of work your CI is doing for linting, you can get further gains by parallelizing that work - either the multi-threading kind or the horizontal scaling kind (both involve the same amount of work, but leverage more hardware to complete it faster - usually at a monetary cost).
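Neither of these is wired up in the config above, but as a rough sketch: RuboCop ships a --parallel flag for the single-machine kind, and CircleCI’s parallelism setting combined with its test-splitting CLI covers the horizontal kind (the glob patterns below are illustrative):

# Multiple processes on one machine:
bundle exec rubocop --parallel

# Horizontal scaling across containers (with parallelism: N set on the job):
bundle exec rubocop $(circleci tests glob "app/**/*.rb" "lib/**/*.rb" | circleci tests split)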
Performance Improvement Results
Before caching, linting took 476 seconds.
After caching, linting takes 22 seconds.
The result (476 / 22 ≈ 21.6): 22x faster - easily fast enough to run a full linting check in our merge queue!
About Jobber
Our awesome Jobber technology teams span across Payments, Infrastructure, AI/ML, Business Workflows & Communications. We work on cutting edge & modern tech stacks using React, React Native, Ruby on Rails, & GraphQL.
If you want to be a part of a collaborative work culture, help small home service businesses scale and create a positive impact on our communities, then visit our careers site to learn more!