The next question is, how can we assess when and how often something is going to break? This is a tough question to answer and it heavily depends on your specific situation. Luckily, smart people have done some work for us already!
For instance, this paper discusses the most common ways in which automated tests of web applications tend to break (while the authors mostly talk about record/replay tests, the same principles apply to hand-crafted testing scripts). The authors created suites of automated tests for early versions of five different open source web applications and then executed these tests against each subsequent version. Every time a test broke, they recorded the reason for the breakage, fixed the test, and continued testing. This way, they created a unique dataset that makes it possible to explore the most common failure reasons.
Broadly, they describe five main types of test breakage. Thinking about these risks should give you some idea of which areas of your application are suitable for test automation, and which are better served by manual QA.
When your test script validates something, it usually needs to find and inspect a DOM element, whether by its attributes, its text, or its place in the hierarchy. Each of these approaches can break: changing page styles, updating copy, or restructuring your layout can all threaten your ability to find the same element reliably. This is by far the most common cause of breaking automated tests, and one that is very difficult to mitigate. It is also one of the reasons why humans are still much more robust testers than machines!
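As a minimal sketch of this fragility, the snippet below parses two versions of the same (well-formed) markup with Python's standard library and locates a button two ways. The page snippets, the `data-test-id` attribute, and the helper names are all invented for illustration; a real script would use a browser-automation tool, but the failure mode is the same: locating by visible text breaks when the copy changes, while a dedicated test attribute survives it.

```python
import xml.etree.ElementTree as ET

# Two versions of the same form; only the button copy changed between releases.
V1 = '<form><button data-test-id="submit-btn">Sign up</button></form>'
V2 = '<form><button data-test-id="submit-btn">Create account</button></form>'

def find_by_text(markup, text):
    """Locate a button by its visible text (brittle against copy changes)."""
    root = ET.fromstring(markup)
    return next((b for b in root.iter("button") if b.text == text), None)

def find_by_test_id(markup, test_id):
    """Locate a button by a dedicated test attribute (robust to copy changes)."""
    root = ET.fromstring(markup)
    return next((b for b in root.iter("button")
                 if b.get("data-test-id") == test_id), None)

# The text-based locator works on v1 but silently stops finding the button on v2.
assert find_by_text(V1, "Sign up") is not None
assert find_by_text(V2, "Sign up") is None

# The attribute-based locator finds the same button in both versions.
assert find_by_test_id(V1, "submit-btn") is not None
assert find_by_test_id(V2, "submit-btn") is not None
```

Stable test attributes only help if your team commits to maintaining them, which is itself a cost worth weighing when deciding what to automate.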
When testing text input fields, you often want to make sure they accept valid values and raise appropriate errors when the provided values are invalid. Whenever you change the backend validation rules, e.g. the password requirements behind a password input field, you need to update your test to keep it from breaking. Similarly, if you add a new required input field, your old automation script will try to submit the form with missing data unless you update it.
Another common case is asserting that a specific error message appeared; if the message is later reworded, the script breaks too.
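To make the coupling concrete, here is a small sketch. The `validate_password` function and its message are invented stand-ins for a backend validation rule; the point is that an assertion pinned to exact wording or an exact policy breaks the moment either changes, while a looser "did validation fail?" check survives a policy change.

```python
def validate_password(pw, min_length=8):
    """Stand-in for a backend rule: return an error message or None if valid."""
    if len(pw) < min_length:
        return f"Password must be at least {min_length} characters."
    return None

# A test pinned to the exact message and the current policy passes today...
assert validate_password("short") == "Password must be at least 8 characters."

# ...but if the policy tightens to 12 characters (or the copy is reworded),
# that exact-match assertion breaks even though the app still behaves correctly.
assert validate_password("short", min_length=12) != "Password must be at least 8 characters."

# A looser assertion (validation failed, some error was shown) survives the change.
assert validate_password("short", min_length=12) is not None
```

Checking intent rather than exact copy reduces breakage, at the cost of catching fewer copy regressions, another trade-off to make deliberately.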
Depending on your backend, some user flows require page reloads; at the same time, reloading a page at the wrong moment can break the test. Changes to your application often break the reload logic in test scripts because: 1) reloads are now required at different points; 2) reloads no longer happen while the script still expects them; 3) reloads take a different amount of time than before (explicit “sleep” times are often used to let the page reload before continuing, so changing the logic behind the reload can render the original wait time insufficient).
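The third failure mode is the easiest to defend against: replace a fixed sleep with a polling wait. The sketch below is a hypothetical helper (the names and the simulated reload are invented); most browser-automation libraries ship an equivalent, and the idea is simply to wait for a condition rather than for a hard-coded duration.

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulate a "page reload" that finishes after ~0.2s. A script that slept a
# hard-coded 0.1s would proceed too early; polling tolerates the slower reload.
ready_at = time.monotonic() + 0.2
page_loaded = lambda: time.monotonic() >= ready_at

assert wait_until(page_loaded, timeout=2.0) is True
```

This does not help with the first two failure modes (reloads moving or disappearing), which still require updating the script's expectations by hand.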
Many applications log out inactive users after a set amount of time. Test scripts for this behavior deliberately stay inactive for a while and then validate that the expected action (a warning and/or logging the user out) happens on schedule. Such tests are likely to break whenever the allowed session time changes.
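One mitigation is to read the timeout from a single source of truth instead of hard-coding it in the test. The sketch below uses an invented `FakeSession` with a manually advanced clock (real tests would drive the actual application, and often cannot avoid genuinely waiting); the key idea is that when the policy changes, only `SESSION_TIMEOUT_SECONDS` needs updating.

```python
# Single source of truth for the policy, e.g. loaded from the app's config.
SESSION_TIMEOUT_SECONDS = 30 * 60

class FakeSession:
    """Minimal stand-in for a session with an idle-logout policy."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.idle = 0.0

    def advance(self, seconds):
        """Simulate the user being inactive for `seconds`."""
        self.idle += seconds

    def is_logged_out(self):
        return self.idle >= self.timeout

session = FakeSession(SESSION_TIMEOUT_SECONDS)
session.advance(SESSION_TIMEOUT_SECONDS - 1)
assert not session.is_logged_out()  # still active just under the limit

session.advance(2)
assert session.is_logged_out()      # logged out once the limit has passed
```

Because both the application and the test derive the limit from the same value, changing the session policy no longer breaks the assertion logic, only the configured number.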
Pop-ups, whether windows or alerts, are often used in web applications and need to be tested. It’s therefore common for test scripts to assert either the presence or absence of pop-ups, which will break whenever the application is updated to change the relevant behavior. While the paper authors have found this to be one of the least frequent reasons for broken tests, here at Rainforest we have seen our customers being affected by this particular issue often enough to develop a specific set of rules for testers to follow when testing pop-up behavior.
THE TAKEAWAY: LEVERAGE TEST AUTOMATION STRATEGICALLY
Test automation is great for development: it not only gives you peace of mind about being able to serve your customers well and without hiccups, but also allows you to move at a much faster pace, because your team spends less time dealing with QA overhead and the consequences of bugs in production.
When done well, it can truly supercharge your development efforts, but it is not without pitfalls. Thinking carefully about your QA strategy, and about which parts of your test suite can be automated, will pay off.
By: MACIEJ GRYKA
LEAD SCIENTIST, RAINFOREST QA