One of the items on my engineering growth bucket list, so to speak, was implementing an automated test suite, and this past week I got to cross that item off the list using GitHub Actions.
First off, GitHub Actions is basically a way to automate workflows directly in a GitHub repository: things like builds, tests, and deployments that run in response to repository events. In my case, the goal was to run a test suite every time a member of our repo pushed to the main branch or opened a pull request.
If you aren’t familiar with this kind of automation, the reason for it is pretty simple. As programmers, we introduce bugs in the code pretty easily. It’s a painful truth but we can’t ignore it.
So the best remedy is to be as preventive as possible, which means writing tests alongside any new code that gets introduced. That gives you the obvious benefit of sanity-checking your code and confirming it works as intended. On top of that, the accumulated tests act as a regression suite, making sure that as new code comes in, existing functionality doesn’t break.
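To make that concrete, here’s a minimal sketch of the kind of test I mean, written with flutter_test. The formatScore helper is hypothetical, not something from our actual codebase; the point is just that any new piece of logic gets a small test checked in alongside it.

import 'package:flutter_test/flutter_test.dart';

// Hypothetical helper, standing in for whatever new code a change introduces.
String formatScore(int score) => score < 0 ? '0' : '$score';

void main() {
  test('formatScore clamps negative values to zero', () {
    expect(formatScore(-5), '0');
  });

  test('formatScore passes non-negative values through', () {
    expect(formatScore(42), '42');
  });
}

Anything under test/ with a filename ending in _test.dart gets picked up automatically when the suite runs.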
Okay, now to wrangle us back to the point of discussion: why use automation? Unfortunately, even the best test suite can be a pain to run on your local machine: it’s time consuming and very easy to forget. That’s where automation comes in handy.
When the test suite is tied to an unavoidable stage of the version control process, like opening a pull request, the engineering team can have more faith that new code doesn’t break any existing features or functionality.
A cherry on top is to add a branch protection rule to the repository (under Settings → Branches, you can require status checks to pass before merging) so that branches cannot be merged to main unless all the tests pass. That way, even a lazy or malicious programmer who ignores the test failures couldn’t actually merge their buggy code.
I can’t take all the credit for the new test.yml file I added, though, because I got most of the setup from this informative article:
While the article goes into a third-party tool that can display the test results in a pretty format, I didn’t include that portion of the code in our repo because it felt unnecessary. After all, this codebase will very likely not be added to or maintained beyond this course, so the test suite is more about sanity testing and getting practice writing tests than anything else.
The end result, after some tinkering with the Flutter and Dart versioning, came out like this:
name: Flutter_tests
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v2
      - name: Install and set Flutter version
        uses: subosito/flutter-action@v2.4.0
        with:
          flutter-version: "2.10.3"
          dart-version: ">=2.16.1 <3.0.0"
      - name: Restore packages
        run: flutter pub get
      - name: Analyze
        run: flutter analyze
      - name: Run tests
        run: flutter test
An added benefit of this script is that in addition to running the tests with flutter test, it also checks for lint errors and static analysis issues through flutter analyze. I honestly hadn’t heard of that command before, but it feels like a great thing to have considering how easy it is for the codebase to end up full of type warnings.
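To give a rough idea of what it catches, here’s a contrived snippet, not from our actual codebase, that compiles and runs fine but still gets flagged when the Analyze step runs:

// Hypothetical function, only here to show the kind of issue the analyzer reports.
int parseCount(String raw) {
  var unused = 'never read'; // flagged as unused_local_variable
  return int.tryParse(raw) ?? 0;
}

Nothing here breaks at runtime, but the issue shows up right in the workflow output, which makes it a lot harder for that kind of clutter to pile up unnoticed.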
Now, as the team builds up a small test suite, we can be confident that those tests will run every time new code is added.