E2E Test Automation with Autify
koki kitamura
Posted on August 11, 2021
The need for software testing
Software is used all around us: in smartphones, personal computers, home appliances, and automobiles. It has become an integral part of our lives, and we take it for granted that it works properly. When software does not work properly, the company that developed it risks losing trust, time, and money.
The need for automation
Software testing is the act of operating the implemented software and detecting defects. To detect every defect, the software would have to be exercised under every possible condition, but in practice human resources and time are limited, so verifying all conditions is not feasible. Against this background, the need for software test automation keeps increasing.
Software test types
Tests are divided into unit tests, integration tests, and E2E tests (system tests).
Unit tests are tests of program components (functions, classes, modules, packages, etc.) and are usually written and run by developers. Integration tests are tests of system components (API server, microservices, frontend) and are also usually performed by developers. System tests operate the system in the same way an actual user would. Developers also check the behavior, but it is common for a QA engineer other than the developer to handle the more complex test design and to guarantee system quality.
Unit tests and integration tests are automated by many teams, with test code running in CI/CD. On the other hand, although there are tools for automating system tests (Selenium, Puppeteer, Cypress, etc.), few teams have continuous system test automation working well.
Test | Team
---|---
Unit Test | Developer
Integration Test | Developer
System Test | QA Engineer
System test cases
Many teams have a QA team separate from the engineering team, so engineers rarely design test cases or run system tests themselves. For engineers, a system test often remains a vague image of manipulating the UI and touching features to confirm that the system behaves correctly. In reality, for each screen the tester decides on various patterns of input values one by one and carefully verifies whether the results are as expected.
Test cases to automate
The role of automated system testing is not to guarantee the quality of new features but to detect regressions in existing features. Since the purpose of automation is to reduce man-hours, we should automate the test cases for which automation actually saves man-hours. A test case is worth automating when it satisfies the following inequality:
(Manual test cost per run) × (Number of runs) > (Initial automation cost) + (Maintenance cost)
The left side minus the right side of this formula is the man-hours saved. Apart from the number of runs, the terms do not differ much from test case to test case, so they can be treated as constants. In other words, the saving depends mostly on how many times the test is run. Tests for a new feature run intensively before the feature is released but much less often afterwards. Regression tests run regularly at every release, so they are where man-hour savings can be expected. Agile development repeats the develop → test → release cycle, so it fits system test automation well. System testing is not automated entirely; at most about 30% of all test cases are, and manual human testing never disappears.
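As a rough illustration with hypothetical numbers (not taken from any real project): if one manual run of a regression suite costs 4 hours, building the automation costs 20 hours, and maintenance adds 2 hours per release, then after N releases the inequality becomes

4 × N > 20 + 2 × N

which holds once N > 10. A test case that will only ever run a handful of times never pays back its setup cost.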
In addition to regression testing, the following may be considered for automation. Automate little by little while checking whether each case actually reduces man-hours:
- Intensive testing of the core of the product
- Wide and shallow tests of basic functions
- Testing for unstable and buggy features
- Tests that could not be carried out due to insufficient man-hours
Tools used to automate system testing
There are many tools for automating system tests; we use the SaaS Autify, which makes it easy even for non-engineers to create and maintain test cases.
Autify creates test cases by recording browser operations, with no need to write scripts. A Chrome extension records the operations: entering a URL on the recording screen opens a new window for recording.
You can save the test case by performing the operations you want to test in that window and pressing the save button at the bottom left.
Autify has four concepts:
- Scenario
- Step group
- Test plan
- Test Result
A scenario corresponds to one test case and is the smallest unit of test execution: logging in to the application, posting an image, sending a message to another user, and so on. A scenario consists of multiple steps, and it is divided into steps by actions such as pressing a button or a URL transition.
Step groups let multiple scenarios share a common sequence of steps. However, there is a restriction: a step group can only be placed at the beginning of a scenario.
Test plans group scenarios so that multiple tests run together. A test plan can also run in multiple execution environments and can be scheduled to run on a regular basis.
The test result is the execution result of a scenario or test plan; it shows where a run failed.
Working with CI/CD
Suppose the release runs in a GitHub Actions workflow named Deploy. We want the system test workflow to start when the Deploy workflow finishes. To check the test result and notify Slack of success or failure, the workflow has to wait until Autify's test plan has completed, and we use autify-cli for that. Autify also has its own Slack notification feature, but notifying from GitHub Actions lets the notifications be separated per test plan.
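Before wiring it into the workflow, the same command can be run locally to check that the CLI and the credentials work. This is only a quick sketch; the project ID below is a placeholder, while the plan ID matches the one used in the workflow:

# Export the personal access token issued in Autify (placeholder value).
export AUTIFY_PERSONAL_ACCESS_TOKEN=<your token>

# Run the test plan and pretty-print the JSON result with jq.
atf run --project-id=1234 --plan-id=99999 --spinner=false | jq .

The full workflow looks like this: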
name: e2e-test
on:
  # Start this workflow when the Deploy workflow has finished.
  workflow_run:
    workflows:
      - Deploy
    types:
      - completed
jobs:
  autify:
    name: E2E Test
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x]
    # Run only if the Deploy workflow succeeded.
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - name: Check out source code
        uses: actions/checkout@v1
      - name: Install Autify CLI
        run: |
          curl -LSfs https://raw.githubusercontent.com/koukikitamura/autify-cli/main/scripts/install.sh | \
            sudo sh -s -- \
              --git koukikitamura/autify-cli \
              --target autify-cli_linux_x86_64 \
              --to /usr/local/bin
      - name: Run Autify
        # Run the test plan, wait for it to finish, and save the result for the next step.
        run: |
          response=$(atf run --project-id=${AUTIFY_PROJECT_ID} --plan-id=${AUTIFY_TEST_PLAN_ID} --spinner=false)
          test_result_id=$(jq '.id' <<< "${response}")
          status=$(jq -r '.status' <<< "${response}")
          echo "TEST_RESULT_ID=${test_result_id}" >> $GITHUB_ENV
          echo "TEST_RESULT_STATUS=${status}" >> $GITHUB_ENV
        env:
          AUTIFY_PERSONAL_ACCESS_TOKEN: ${{ secrets.AUTIFY_PERSONAL_ACCESS_TOKEN }}
          AUTIFY_PROJECT_ID: ${{ secrets.AUTIFY_PROJECT_ID }}
          AUTIFY_TEST_PLAN_ID: 99999
      - name: Notify Slack
        run: |
          .github/workflows/notify_slack.sh "E2E Test has finished. Result is ${TEST_RESULT_STATUS}. \`https://app.autify.com/projects/${AUTIFY_PROJECT_ID}/results/${TEST_RESULT_ID}\`"
        env:
          AUTIFY_PROJECT_ID: ${{ secrets.AUTIFY_PROJECT_ID }}
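The notify_slack.sh script itself is not shown above. A minimal sketch of what it could look like, assuming a Slack incoming webhook whose URL is supplied through a SLACK_WEBHOOK_URL environment variable (that variable is an assumption and is not defined in the workflow above), is:

#!/usr/bin/env bash
# Minimal sketch: post the first argument as a Slack message via an incoming webhook.
# SLACK_WEBHOOK_URL is assumed here; the real script may obtain the URL differently.
set -euo pipefail

message="$1"

# Build the JSON payload safely with jq and send it to the webhook.
curl -sS -X POST \
  -H 'Content-Type: application/json' \
  -d "$(jq -n --arg text "$message" '{text: $text}')" \
  "${SLACK_WEBHOOK_URL}"

If the script is written this way, SLACK_WEBHOOK_URL also has to be added to the env of the Notify Slack step, for example from a repository secret.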
Cases where system test automation fails
Be aware of the following typical cases where system test automation fails.
- The people doing the automation do not know testing techniques
- Parts that do not reduce man-hours are automated
- Maintaining the automated tests takes too many man-hours
- Automation itself becomes the goal
Automated testing is aimed at catching regressions; manual testing for quality assurance never goes away.