LGTM Devlog 12: CI/CD with GitHub Actions to run Unit Tests and deploy Firebase Functions
Yuan Gao
Posted on January 1, 2021
Now that we have the code ready, it's time to get GitHub Actions to deploy it. Sure, I could deploy by hand, but what's the fun in that? The commit that completes this task is aacc80d
Manual deploy
Before I do anything else, I'm going to deploy the function manually once. I need to do this because of a Google Cloud recommendation not to let CI enable "public unauthenticated" invocation of a function. This is a security consideration; in our case we do need to enable it, but it should only ever be done through a manual deploy.
So I prepare a deploy script containing the commands needed to do that:
#!/usr/bin/env bash
if [[ -z "$WEBHOOK_SECRET" ]]; then
    echo "WEBHOOK_SECRET not set, run inside pipenv" 1>&2
    exit 1
fi

# Cloud Functions installs from requirements.txt, so generate one from the lockfile
pipenv lock -r > app/requirements.txt

gcloud functions deploy github_webhook_listener \
    --entry-point github_webhook_listener \
    --runtime python39 \
    --trigger-http \
    --allow-unauthenticated \
    --memory=128MB \
    --source app \
    --set-env-vars=WEBHOOK_SECRET=$WEBHOOK_SECRET \
    --service-account=$SERVICE_ACCOUNT
After running this (with an appropriate service account set up in the env var, see my next post), the Google Cloud Function is deployed.
In a more typical Python deployment we'd only need pipenv to manage our dependencies. However, Google Cloud Functions doesn't/can't use pipenv and must install from a requirements.txt, so we generate one from the Pipfile's lockfile. The requirements.txt file has actually been gitignored in case an old one gets checked in by accident; we re-create it from the canonical lockfile each time instead.
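To run the deploy, I execute this script from inside the pipenv environment so that WEBHOOK_SECRET gets picked up. A rough sketch of the invocation, where the script name deploy.sh and the service account value are placeholders rather than the real ones:
export SERVICE_ACCOUNT="functions-runner@my-project.iam.gserviceaccount.com"  # placeholder value
pipenv run ./deploy.sh  # pipenv run also loads WEBHOOK_SECRET from a local .env file if one exists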
The CI script
GitHub Actions works by including YAML files in a folder called .github/workflows. I will be adding a new workflow for each microservice. I'll break down the workflow I wrote (see the commit linked above) step by step; the file is here.
name: Build GitHub Webhook Listener
This line gives GitHub the human-readable name of the workflow, which it uses to build the Workflows menu.
on:
  push:
    branches:
      - main
    paths:
      - 'github_webhook_listener/**'
  pull_request:
    branches:
      - main
    paths:
      - 'github_webhook_listener/**'
The next chunk of YAML describes under what conditions the workflow will run. In my case, I want it to run on pushes to the main branch, but only for changes inside the github_webhook_listener folder, and similarly for pull requests touching the same paths. The intention here is to run the tests during pull requests, and run the deployment when changes are merged into main.
jobs:
  test:
    name: Run tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies and run tests
        working-directory: ./github_webhook_listener
        run: |
          pip install pipenv
          pipenv install --deploy --dev
          pipenv run test
      - name: Send test report
        uses: 5monkeys/cobertura-action@master
        if: github.event_name == 'pull_request'
        with:
          path: ./github_webhook_listener/coverage.xml
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          minimum_coverage: 95
          skip_covered: False
          show_line: True
This big chunk is the test job, which runs the tests that I showed in the last post. The steps look like this:
- Checkout: does the git checkout so the code is available to the job
- Setup Python: sets up Python 3.9
- Install dependencies and run tests: since we manage dependencies with pipenv, running pipenv install --deploy --dev installs exactly the same versions I set up during local development, which maximizes reproducibility. After that, the tests are run. Although the command is just pipenv run test, it does quite a bit more: the exact command is defined inside the Pipfile, and pytest also loads extra arguments from pytest.ini, so the final command actually being run is more like pytest --cov --cov-config=.coveragerc --cov-report xml --cov-report term-missing, which generates our coverage XML files and reports while also specifying which folders to run tests in (see the sketch after this list)
- Send test report: this step uses an Action available on the GitHub Actions marketplace that takes our generated coverage XML file and posts the results as a comment on the PR
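For reference, here's a minimal sketch of what those two config files might contain; the exact contents in the repo may differ, and the tests folder name is an assumption:
# Pipfile (sketch): the [scripts] section is what makes pipenv run test work
[scripts]
test = "pytest"
# pytest.ini (sketch): extra arguments that pytest picks up automatically
[pytest]
addopts = --cov --cov-config=.coveragerc --cov-report xml --cov-report term-missing
testpaths = tests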
deploy:
  name: Deploy
  needs: test
  if: github.event_name == 'push'
  environment: Prod
  runs-on: ubuntu-latest
  steps:
    - name: Checkout
      uses: actions/checkout@v2
    - name: Setup Python
      uses: actions/setup-python@v2
      with:
        python-version: 3.9
    - name: Install pipenv and generate requirements.txt
      working-directory: ./github_webhook_listener
      run: |
        pip install pipenv
        pipenv install --deploy
        pipenv lock -r > ./app/requirements.txt
    # note: does not set "allow unauthenticated" or memory; deploy manually first
    - name: Deploy Cloud Function
      uses: google-github-actions/deploy-cloud-functions@v0.1.2
      with:
        credentials: ${{ secrets.GCP_CREDENTIALS }}
        service_account_email: ${{ secrets.GCP_FUNCTIONS_SERVICE_ACCOUNT }}
        source_dir: ./github_webhook_listener/app
        name: github_webhook_listener
        runtime: python39
        env_vars: "WEBHOOK_SECRET=${{ secrets.WEBHOOK_SECRET }}"
This block deals with deploying the function to Firebase. The top matter of this block states needs: test, so the test job must succeed before this job will run. It also requires that the event name be "push"; this prevents deploying from a PR (where we only want the tests to run), and means we only deploy once code is merged into the main branch. Finally, environment: Prod tells GitHub to pull the secrets out of the Prod environment (which groups secrets together).
The steps in this job are:
- Checkout: We need to checkout again
- Setup Python: same as before, install Python 3.9
- Install pipenv and generate requirements.txt: this generates the requirements.txt as discussed previously; in fact, this is the only reason we need to run pipenv here at all
- Deploy Cloud Function: this step uses an Action published on the Actions marketplace and maintained by Google themselves; it contains the functionality needed to deploy a cloud function
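Note the comment in the workflow above: the CI deploy deliberately leaves the "allow unauthenticated" setting and the memory limit alone, which is why the first deploy has to be done by hand. If that public-invoker IAM binding ever needed to be restored manually, something like the following would do it (the region here is a placeholder):
gcloud functions add-iam-policy-binding github_webhook_listener \
    --region=us-central1 \
    --member="allUsers" \
    --role="roles/cloudfunctions.invoker"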
PRs and Deployment
The result of the CI script: when a Pull Request is made, GitHub Actions is triggered, and you'll see a block like this in the PR saying that checks are running. If the tests error out, the check fails and a message appears here, and I can turn on enforcement so that checks must pass before a merge is allowed.
The coverage results also get posted as a comment. This is useful as a metric for deciding whether to accept a Pull Request, whether more tests need to be written, and which lines of the code to focus on.
And you get a nice little green tick against your PR when the tests complete.
The running action can be viewed in the Actions section of the repo; this is what it looks like while running. Any errors in the run (due to mistakes in the CI scripts, or otherwise) would appear here.
And this is what it looks like when complete.
A real test!
Finally, now that the function is properly deployed, we can do a real test. I simply go over to the GitHub webhook settings, update the URL to the real one, and tell it to redeliver one of the previously received payloads.
Then over in the Cloud Functions logs, I can see the log message from a successful run!
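For future debugging, the deployed endpoint can also be exercised by hand instead of going through GitHub's redelivery UI. A rough sketch, assuming the listener verifies the X-Hub-Signature-256 header that GitHub computes as an HMAC-SHA256 of the body using the webhook secret; the URL and payload here are placeholders, not a real GitHub event:
FUNCTION_URL="https://REGION-PROJECT.cloudfunctions.net/github_webhook_listener"  # placeholder
PAYLOAD='{"zen": "Keep it logically awesome."}'
SIG="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" | sed 's/^.* //')"
curl -s -X POST "$FUNCTION_URL" \
    -H "Content-Type: application/json" \
    -H "X-GitHub-Event: ping" \
    -H "X-Hub-Signature-256: $SIG" \
    -d "$PAYLOAD"
A 2xx response plus a fresh entry in the Cloud Functions logs is a quick confirmation that the deployment is live.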