Better Automated Testing
Where I discuss front-end (FED) automated tests and a consistent method to implement them
I’m a stickler for automated testing and for writing consistently good code. When I work in a company environment I always try to instill good testing practices and habits in the colleagues around me.
Part of my approach is to define project-specific automated tests that are configurable.
There is a plethora of tests you may want to run in a project; my test stub starts with the following:
- Mocha, or whatever your preferred test framework is.
- Istanbul for code coverage.
- Sass coding style checks; stylelint is my go-to tool.
- HTML validation tests with Valimate.
- Broken link checks with Broken Link Checker.
- Accessibility checks with Pa11y.
- Screenshot software; I like Pageres.
- Textlint to check for typos and writing style.
I create two bash scripts. One that runs through the tests and one to create screenshots of the site at different sizes.
Screenshots form a focal discussion point with designers, developers and usability people.
Screenshots are helpful for circulating to non-technical stakeholders in the business. This allows stakeholders to see the outcome of new features and prompts them for feedback.
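A screenshot script built on the Pageres CLI might look like the following sketch; the URL and viewport sizes are placeholders, not values from my actual script:

```shell
#!/usr/bin/env bash
# Sketch of a screenshot script using the Pageres CLI
# (npm install -g pageres-cli). The URL and viewport sizes are
# placeholders - swap in your own site and breakpoints.
URL="http://localhost:8080"
SIZES="320x568 768x1024 1366x768"

mkdir -p screenshots
cd screenshots || exit 1

# Guard so the sketch still runs where pageres is not installed.
if command -v pageres >/dev/null 2>&1; then
  # Pageres writes one image per viewport size to the current directory.
  pageres "$URL" $SIZES
else
  echo "pageres not found; skipping screenshots" >&2
fi
cd ..
```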
Run the test script through Jenkins (or your preferred alternative). I also run linting and validation tests hooked up to a task runner, sharing the same configuration rules. This process ensures simple coding style check results get to the coder at the right time. In case a rebel developer doesn’t play by the rules, Jenkins would catch them later.
In environments where a company doesn’t use a build server, add the script to a git hook instead.
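As a sketch, installing the test script as a pre-push hook could look like this; run-tests.sh is a placeholder name for your own script:

```shell
#!/usr/bin/env bash
# Sketch: install the test script as a pre-push hook.
# "run-tests.sh" is a placeholder name for your own test script.
mkdir -p .git/hooks

cat > .git/hooks/pre-push <<'HOOK'
#!/bin/sh
# Run the full test script before every push; abort the push on failure.
./run-tests.sh || {
  echo "Tests failed; push aborted." >&2
  exit 1
}
HOOK

chmod +x .git/hooks/pre-push
```

A pre-push hook runs later than pre-commit, so quick commits stay fast while broken code still cannot reach the shared repository.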
I will step through some of the important points of the test script.
Defining an array in a bash script is a useful technique for creating a single point of truth for common objects to test. I define the key pages to test at the start of the script, and the same list drives the accessibility tests and the broken link checks.
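Here is a minimal sketch of that single point of truth; the URLs are placeholders for your own key pages:

```shell
#!/usr/bin/env bash
# Single point of truth: the key pages to test, defined once at the top.
# These URLs are placeholders for your own key pages.
URLS=(
  "http://localhost:8080/"
  "http://localhost:8080/about/"
  "http://localhost:8080/contact/"
)

# Every later test loops over the same array.
for URL in "${URLS[@]}"; do
  echo "Will test: $URL"
done
```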
When I start Mocha I use an npm run script. The test command code is in my package.json file. The more site-specific code we can move to one place, such as the package.json file, the better: the test script becomes more portable, which is always a good thing.
This is a snippet from a package.json file. Running npm run test calls the Istanbul script, which in turn runs Mocha.
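Something along these lines; the test path and reporter are my own assumptions, and this uses the classic istanbul cover _mocha invocation (newer projects tend to use nyc, Istanbul’s command-line successor):

```json
{
  "scripts": {
    "test": "istanbul cover _mocha -- test/ --reporter spec"
  }
}
```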
The broken link checker runs inside a loop: each item in the array is assigned to a variable and passed in as the URL.
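A sketch of that loop using the blc command from the broken-link-checker npm package; the URLs are placeholders:

```shell
#!/usr/bin/env bash
# Sketch: run broken-link-checker (npm install -g broken-link-checker)
# against each key page. The URLs are placeholders for the array defined
# at the top of the script.
URLS=(
  "http://localhost:8080/"
  "http://localhost:8080/about/"
)

for URL in "${URLS[@]}"; do
  # blc is the broken-link-checker CLI; -r recurses into linked pages
  # and -o keeps the output ordered. Guarded so the sketch still runs
  # where blc is not installed.
  if command -v blc >/dev/null 2>&1; then
    blc "$URL" -ro
  else
    echo "blc not found; skipping $URL" >&2
  fi
done
```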
Loop through the URLs and save each report into a folder named “accessibility”, with each report file named after its URL.
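A sketch of that loop with the Pa11y CLI; the URLs and the file-naming scheme are my own illustration:

```shell
#!/usr/bin/env bash
# Sketch: run Pa11y (npm install -g pa11y) against each key page and save
# one report per URL into an "accessibility" folder. The URLs and the
# file-naming scheme are illustrative.
URLS=(
  "http://localhost:8080/"
  "http://localhost:8080/about/"
)

mkdir -p accessibility

for URL in "${URLS[@]}"; do
  # Turn the URL into a safe file name:
  # http://localhost:8080/about/ becomes http___localhost_8080_about_
  NAME=$(printf '%s' "$URL" | tr -c '[:alnum:]' '_')
  if command -v pa11y >/dev/null 2>&1; then
    pa11y "$URL" > "accessibility/${NAME}.txt"
  else
    echo "pa11y not found; skipping $URL" >&2
  fi
done
```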
Prevent the process from carrying on running after the tests have completed.
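One way to do this, assuming the script starts a local server in the background, is a trap that kills the server whenever the script exits; the Python server here is just a stand-in for whatever serves your site:

```shell
#!/usr/bin/env bash
# Sketch, assuming the test script starts a local server in the background.
# python3 -m http.server is a stand-in for whatever serves your site.
python3 -m http.server 8080 >/dev/null 2>&1 &
SERVER_PID=$!

# The trap fires whenever the script exits, so the server process is
# always cleaned up and cannot keep running after the tests complete.
trap 'kill "$SERVER_PID" 2>/dev/null' EXIT

# ... the tests against http://localhost:8080 would run here ...
echo "Server started as PID $SERVER_PID"
```

Using an EXIT trap rather than a plain kill at the end means the server also dies when a test aborts the script early.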
I run text linting in a separate script. Writing is subjective, so it shouldn’t fail the build.
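A sketch of that separate script; the content path is a placeholder, and the trailing "|| true" is what keeps the lint advisory rather than build-breaking:

```shell
#!/usr/bin/env bash
# Sketch of the separate text-linting script. "content/" is a placeholder
# path; textlint comes from npm install -g textlint. The "|| true" reports
# typos and style issues without ever failing the build.
if command -v textlint >/dev/null 2>&1; then
  textlint "content/**/*.md" || true
else
  echo "textlint not found; skipping text lint" >&2
fi
```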
Running bash scripts such as this will catch the fixable problems, leaving you free to ponder the trickier issues that need attention.