Testing Windows installers with Jest
Quentin Ménoret
Posted on January 11, 2021
At Doctolib, we've been building native software for a few years now, which we install directly on doctors' computers to improve their Doctolib experience.
Of course, we do write unit and integration tests to make sure that the software does what it is supposed to. But sometimes, it's not enough.
Once, we made a major change to a feature, and for the new code to work properly, the updater needed to write a new line in the software's config file. Of course, we manually tested that a fresh install would write this line. "That should be enough, right?", we thought, until the support calls started coming in.
As it turned out, this tiny line was not written when the installer ran in update mode. Luckily, we only had a few beta testers at the time, so the error had limited impact.
Today we can't afford to make a mistake like that again.
So what do we do now?
We release new versions of our native software weekly, and we need the updates to be thoroughly tested.
The risk of making a mistake with an installer is that it might completely corrupt your software. For example, an update could cause a crash on startup, which would stop users from even being able to subsequently update to a version with a fix. Imagine this happening on tens of thousands of computers at the same time. It would be impossible to fix, unless you call all 20k users one after another.
On the other hand, testing installers and executables is really hard. Most companies actually test them manually, like we used to. You cannot automate this with unit tests: you have to completely install your software, validate that everything works properly and that all files are copied to the right place. On top of that, you need to do this for every supported operating system and architecture.
This is why we built several tools and processes to allow us to run end-to-end tests for installers and executables on our CI. In the rest of this post I will walk you through the process of creating such a safety net.
Setup
Stateless environment
Before you start, you will need to set up a stateless environment where you can run your tests. We chose Azure Devops because it allows us to run tests on Windows, Linux and Mac but there are other CI providers that offer the same service.
It is important that the environment is stateless because installers tend to persist a lot of things that are a pain to clean up after each run (files, registry entries, permissions…). If you don't isolate the test runs, you might get unexpected behaviours, or worse, false positives.
Just picture a case in which your installer needs to write a critical registry key, but the feature writing it is broken. If your environment does not clean up the registry, the next run will be green no matter how broken your code is, since the key was already written by a previous test run.
Headless testing
Most installers have a tendency to ask you stuff. Unfortunately, it's harder to simulate user inputs on an installer UI compared to a browser. So you'll need to skip that part.
With a standard NSIS installer (which is what we use at Doctolib) this means running the installer with the /S argument (silent mode - runs the installer without a UI). There are operations that can block the installer in silent mode, leaving you waiting forever. For those situations we came up with two solutions:
- Use IfSilent to explicitly skip blocking operations.
- Use registry keys instead of user inputs. Inside your installer you can check for the presence of registry keys (which are never present in production) and use their values in the tests instead of prompting the user.
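For the second approach, the test pre-seeds the "answers" in the registry before launching the installer. A sketch of the test side, where the key name and value names are hypothetical; the actual registry write would use a module such as regedit on the Windows CI agent:

```javascript
// const regedit = require('regedit') // Windows-only, shown for context

// Hypothetical key the installer checks instead of prompting the user.
const TEST_INPUT_KEY = 'HKCU\\SOFTWARE\\MyCompany\\InstallerTestInputs'

// Build the payload for regedit's putValue, pre-seeding one "answer"
// that the installer will read in silent mode.
function buildTestInput(name, value) {
  return { [TEST_INPUT_KEY]: { [name]: { value, type: 'REG_SZ' } } }
}

// On the Windows CI agent you would then run something like (sketch):
// await promisify(regedit.createKey)([TEST_INPUT_KEY])
// await promisify(regedit.putValue)(buildTestInput('InstallDir', 'D:\\MyApp'))
// await promisify(exec)(`"${installerPath}" /S`)
```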
Once your installer is able to run in non-interactive mode, you can start the actual testing.
File system and registry checks
Now we can talk about ensuring the installer works fine. Let's run it:
import { exec } from "child_process"
import { promisify } from "util"

await promisify(exec)(`"${installerPath}" /S`)
What you want to assert is that your files get copied to the right place. This is very easy to do with Jest. Use snapshot testing:
import { promises as fsPromises } from "fs"

try {
  // You can snapshot test the content of all the folders you
  // install files in, such as your AppData folder
  const entries = await fsPromises.readdir(folder)
  expect(entries).toMatchSnapshot('entries in folder')
} catch (err) {
  expect('no folder').toMatchSnapshot('entries in folder')
}
You could also take a snapshot of the content of the registry if you save any important values there:
import { list } from 'regedit'
import { promisify } from 'util'

const values = (await promisify(list)(yourKey))[yourKey]
expect(values).toMatchSnapshot()
Same thing for the content of any text/config files you write. And since values are sometimes dynamic, you'll want to use property matchers on the snapshot's file content after parsing:
import fs from 'fs'
import ini from 'ini'

const config = ini.parse(fs.readFileSync('./config.ini', 'utf-8'))
expect(config).toMatchSnapshot({
  my_section: {
    my_value: expect.stringMatching(/expected_value/)
  }
})
Testing the binary architecture
When building native software for Windows, you are usually targeting either a 32-bit or a 64-bit architecture, and it is critical to get this right. If you ship a 64-bit exe to a 32-bit computer, or mix architectures between your EXE and its DLLs, your program will most likely not work, or crash outright. This is why we built windows-binary-architecture. With this module you can easily assert the architecture of your binaries (exe or dll):
const arch = await getTargetArchitecture(filePath)
expect(arch).toBe('I386')
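Under the hood, detecting the architecture boils down to reading the Machine field of the PE file's COFF header. A minimal sketch of the idea (not the module's actual implementation), assuming a well-formed PE file loaded into a Buffer:

```javascript
// Machine field values from the PE/COFF specification.
const MACHINE_TYPES = { 0x014c: 'I386', 0x8664: 'AMD64', 0xaa64: 'ARM64' }

function getPEArchitecture(buffer) {
  // Every PE file starts with the DOS "MZ" magic number.
  if (buffer.readUInt16LE(0) !== 0x5a4d) throw new Error('Not an MZ executable')
  // e_lfanew (offset 0x3C) points at the PE signature "PE\0\0".
  const peOffset = buffer.readUInt32LE(0x3c)
  if (buffer.readUInt32LE(peOffset) !== 0x00004550) throw new Error('Missing PE signature')
  // The COFF Machine field immediately follows the signature.
  const machine = buffer.readUInt16LE(peOffset + 4)
  return MACHINE_TYPES[machine] || `unknown (0x${machine.toString(16)})`
}
```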
Ensuring binaries are signed
Unless you want your users to see those annoying SmartScreen messages, you will have to sign your binaries.
There are probably a lot of files to sign, and it's a tedious process to check them manually.
Don't worry, we've got your back here too! We wrote a small JavaScript tool to assert that a binary is properly signed: authenticode. It is pretty easy to use:
const signature = await getAuthenticode(filePath)
expect(signature.Status).toBe(SignatureStatus.Valid)
Actually starting the thing
The last thing you might want to do is to actually start your process. This highly depends on the type and size of software you are building. The following test cases might not be worth it if your software takes 15 minutes to boot up.
But if you can afford it, there are three things you can easily get out of this:
Is the process crashing when starting?
return new Promise((resolve, reject) => {
  const process = spawn(yourProcessPath)
  process.on('error', err => reject(err))
  process.stdout.on('data', () => {
    // Maybe if the process starts writing on stdout
    // it means it is working? Depends on your software!
    resolve(process)
  })
})
Is the process writing anything to stdout / stderr that you should be worried about?
const process = spawn(yourProcessPath)
let stdout = ''
let stderr = ''
process.on('error', err => {
  throw err
})
process.stdout.on('data', data => {
  stdout += data
})
process.stderr.on('data', data => {
  stderr += data
})

// You will need to implement custom logic to know when your process
// is "ready"
await processInitOver()

expect(stdout).toMatchSnapshot()
expect(stderr).toMatchSnapshot()
Is the process loading the DLLs you expect it to load?
It's pretty easy to test this using the listDlls executable:
// Using the promisified exec from earlier
const listDllsOutput = (
  await exec(`./Listdlls.exe ${processName} /accepteula`)
).stdout
expect(listDllsOutput).toMatchSnapshot()
The Mac and Linux case
We focused a lot on Windows tests here but you can implement the same thing for Linux and Mac as well! If you want to achieve this, feel free to look at our jest-os-detection module, which allows you to run the same test suite on a different OS.
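The underlying idea is simply conditional test registration based on `process.platform`. A hand-rolled sketch of the pattern (a hypothetical helper, not jest-os-detection's actual API):

```javascript
// Register a test only on the matching platform and skip it everywhere
// else, so the same suite stays green on Windows, Linux and Mac.
// Platform names follow Node's process.platform: 'win32', 'linux', 'darwin'.
function itOn(platform, testFn) {
  return process.platform === platform ? testFn : testFn.skip
}

// Usage inside a Jest suite:
// itOn('win32', it)('writes the registry key', async () => { /* ... */ })
```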
Conclusion
At Doctolib, we enforce that every single piece of software we build is tested. We extend this beyond pure feature testing (e.g. unit or integration tests) and also test the output of our automated build processes, including installers and binaries.
These installer tests have protected us several times from serious issues that could have otherwise reached production and would have had significant consequences. A few months ago, we refactored our build pipelines, and almost published unsigned binaries. Our tests saved us.
If you like tests as much as we do, don't hesitate to subscribe to the docto-tech-life newsletter to receive our weekly selection of technical content!