After helping with test automation projects at a number of organizations, I have identified several common pitfalls that come up again and again.
Avoiding these mistakes will accelerate project momentum and make the automation experience more rewarding. The first two suggestions address managerial pitfalls; the other two address technical ones.
Always treat test automation projects as development projects
Test automation is often seen as a “nice-to-have”, so it does not receive the attention required to take full advantage of it. As a result, test automation projects often slip and the company never enjoys the benefits of test automation.
In some cases, the team even becomes frustrated with test automation and hesitates to invest in it again.
To make your test automation project effective, always approach test automation projects as development projects.
- Assign a product owner or project manager
- Depending on the development methodology your company deploys, come up with a project plan or project milestones just like you would for any other development project.
- Dedicate at least one or two people to the test automation team. Since test automation is itself a kind of development project, ideally hire or assign someone with development experience as the lead programmer for the project.
- Create a design for the test framework or test architecture before coding your automated tests. Without a proper design, both development and maintenance will be painful.
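As one illustration of what such a design can look like, here is a minimal sketch of a layered framework using the page-object pattern. The driver and page names (`FakeDriver`, `LoginPage`) are hypothetical stand-ins for illustration, not part of any particular tool:

```python
class FakeDriver:
    """Stand-in for a real browser driver (e.g. Selenium WebDriver)."""

    def __init__(self):
        self.fields = {}

    def type_into(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # Pretend the login button submits the form.
        if locator == "login-button":
            self.fields["logged_in"] = (
                self.fields.get("username") == "alice"
                and self.fields.get("password") == "secret"
            )


class LoginPage:
    """Page object: the only layer that knows the page's locators.

    If a locator changes, only this class needs updating, not the tests.
    """

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.type_into("username", username)
        self.driver.type_into("password", password)
        self.driver.click("login-button")
        return self.driver.fields.get("logged_in", False)


# Test cases now read at the level of user intent, not locators.
page = LoginPage(FakeDriver())
assert page.login("alice", "secret") is True
assert page.login("alice", "wrong") is False
```

The point is the layering: test cases call page objects, page objects call the driver. Maintenance stays localized when the application's UI changes.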
Have the product development team collaborate on test automation
In every organization I have worked with, the test automation team is separated from the product development team, and collaboration between the teams is minimal or non-existent. Test automation should be a joint effort of the product development team and the test automation team.
Collaboration improves the software’s testability and reduces test automation overhead, which in turn eases the architecture design and future maintenance.
Invest time in result reporting
When we talk about test automation, most people focus on test execution and overlook result analysis and reporting. Test automation is not just about performing the test steps; a good result reporting component helps you find bugs more effectively. Reporting only “success” or “fail” does little to help diagnose the actual problem, and testers end up repeating the tests manually to locate the bugs.
In addition, create a scale of error severity so that the team can easily identify critical and major bugs. This also lets the automation tool decide whether an error is severe enough to terminate the test. For example, where data correctness is not critical, the test should continue past data validation errors until it hits a critical bug.
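The severity scale and continue-or-abort decision can be sketched roughly as follows; `Severity` and `TestRun` are hypothetical names used for illustration, assuming a three-level scale:

```python
from enum import IntEnum


class Severity(IntEnum):
    """Hypothetical error-severity scale, lowest to highest."""
    MINOR = 1
    MAJOR = 2
    CRITICAL = 3


class TestRun:
    """Collects reported errors and decides whether the test keeps going."""

    def __init__(self, abort_at=Severity.CRITICAL):
        self.abort_at = abort_at
        self.errors = []

    def report(self, severity, message):
        self.errors.append((severity, message))
        # Continue past minor/major issues; stop only at the abort threshold.
        return severity < self.abort_at


run = TestRun()
run.report(Severity.MINOR, "data validation mismatch on page 3")   # continues
run.report(Severity.CRITICAL, "page failed to load")               # aborts
```

Because every error is recorded with its severity and message, the final report can list all minor findings from a run instead of stopping at the first one.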
Implement test set up and tear down procedures
Test set up and tear down procedures are often overlooked in test automation. People tend to forget about them because, when we execute tests manually, set up and tear down are so seamless and subtle that we don’t see a need to implement them for automation.
In fact, skipping set up and tear down in the automation code has a greater impact than most people expect. For example, the browser cache may preserve the state of the website under test, and without clearing the cache you may get incorrect results. Similarly, if multiple instances of the same application under test are running on the same machine, the tool may have trouble identifying which one to run the tests against.
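Here is a minimal sketch of per-test set up and tear down using Python’s standard `unittest` module. A module-level dictionary stands in for shared state such as a browser cache; the test and variable names are hypothetical:

```python
import unittest

# Stand-in for state that survives between tests,
# such as a browser cache or a running application instance.
BROWSER_CACHE = {}


class CheckoutTest(unittest.TestCase):
    """Each test starts from a known clean state."""

    def setUp(self):
        # Clear the "cache" so one test's state can't skew the next test.
        BROWSER_CACHE.clear()

    def tearDown(self):
        # Remove anything the test left behind (sessions, temp data, ...).
        BROWSER_CACHE.pop("session", None)

    def test_login_caches_session(self):
        BROWSER_CACHE["session"] = "alice"
        self.assertIn("session", BROWSER_CACHE)

    def test_starts_logged_out(self):
        # Passes only because setUp cleared the cache; without it,
        # leftover state from the previous test could leak in here.
        self.assertNotIn("session", BROWSER_CACHE)
```

In a real suite the bodies of `setUp` and `tearDown` would clear the actual browser cache or close stray application instances, but the structure is the same.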