I think everyone would agree that tests are an important part of the software development lifecycle. As of today, the vast majority of software teams do them - sometimes exclusively as manual validation, sometimes as an automated test suite, and usually as a hybrid of the two. We do it because we all see value in ensuring that whatever we push to production passes through some kind of safety net. After all, nobody wants to stay late because of a broken release.
While most of us have some set of unit, integration or e2e tests in place, most projects still rely on heavy manual verification before hitting production. And manual verification is slow and error-prone, especially once the system is “mature” enough. This not only slows down delivery, making stakeholders unhappy, but also lets more bugs slip through and, in my opinion, has a terrible influence on overall code quality.
A lack of automated tests, or tests that are too weak, discourages people from refactoring - if the feedback on whether the application works is slow and vague, every change carries risk, and developers naturally want to avoid making unnecessary changes to such a system. Then, not refactoring the code introduces more and more entropy and accidental complexity, making the whole process even harder and longer. This, of course, heavily impacts the overall quality of the solution we are building.
So, how can we break the cycle? Why can’t we have a proper automated test suite in place?
1. The manual/automated testing dichotomy
If you have a test management tool where QAs write their test plans, like TestRail or Zephyr, take a look at it and compare it with what you have covered by your unit/integration tests. If you are like most teams, you’ll notice that what you have there and what is in your automated test suite are quite different things. That’s because a developer’s perspective on software quality is vastly different from that of a QA engineer or a stakeholder.
Developers like to think about quality in terms of code quality. We write unit tests to ensure our classes are loosely coupled and follow best practices. TDD really helps there, and I definitely see value in writing unit tests. Then, we write integration tests to check whether the units we created can work together. However, these tests are often too tied to a particular implementation, focusing on things like HTTP status codes, IoC registration or verifying whether “the mediator”/observer works. You get my point - most of them are “technical” tests.
Don’t get me wrong, it’s good to have such tests. Still, they don’t guarantee anything when it comes to checking whether the business process works correctly.
If you are facing this, take a look at the test plans if you have any, or write them down on a piece of paper yourself if you don’t. Then, think a bit about how you could automate these checks using the most lightweight tool available. You can probably refactor the code a bit to decouple business logic from implementation details and automate some of them as unit tests. Then, to ensure the API works as a whole, you can probably write some integration tests.
Make the tests a good investment, so they point out broken pieces quickly. If you have a bug, first ensure you have it covered by automation so nobody needs to check it manually again. And make sure your tests are thorough and reliable, so you can trust them. You should check the logic/algorithm itself with unit tests, but you need integration tests in place as well, so you can verify that e.g. the routes are registered correctly and the necessary components are resolved from the IoC container.
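To make the two layers concrete, here is a minimal sketch in Python. Everything in it is hypothetical (a made-up `apply_discount` rule and a toy WSGI router), but it shows the split: a unit test exercising the pure business rule directly, and an integration-style test calling the app through its routing layer to confirm the handler is actually wired up.

```python
import json
from wsgiref.util import setup_testing_defaults

def apply_discount(total, percent):
    """Pure business rule - unit-testable without any framework."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (100 - percent) / 100, 2)

# Toy routing table standing in for whatever framework you use.
ROUTES = {}

def route(path):
    def register(handler):
        ROUTES[path] = handler
        return handler
    return register

@route("/discount")
def discount_handler(environ):
    # In a real app the inputs would come from the request.
    return {"total": apply_discount(200.0, 25)}

def app(environ, start_response):
    handler = ROUTES.get(environ["PATH_INFO"])
    if handler is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    body = json.dumps(handler(environ)).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# Unit test: the business rule in isolation.
assert apply_discount(200.0, 25) == 150.0

# Integration-style test: is the route registered and wired up end to end?
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/discount"
statuses = []
body = b"".join(app(environ, lambda status, headers: statuses.append(status)))
assert statuses == ["200 OK"]
assert json.loads(body) == {"total": 150.0}
```

Note that the unit test never touches the routing code, so it stays fast and stable, while the integration test would catch a mistyped route or a handler that was never registered.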
If you keep working towards breaking the dichotomy, the team will develop trust in the test suite. And you’ll rely less and less on manual verification, giving a compounding boost to quality - the more automated your tests are, the easier and less risky it is to refactor the code, leading to better quality.
2. E2e-testing everything
To ensure critical business processes keep working as expected, some teams take the path of e2e-testing everything, usually in the form of browser-based Selenium tests. So, they set up an environment and interact with the application the same way a user would - by clicking through the UI.
While this sounds great in theory, there are significant practical problems with this approach:
- Such tests are slow. And by slow, I mean really slow. It’s quite common to see Selenium suites executing for hours. Because of this, they don’t provide feedback fast enough, which discourages refactoring initiatives.
- Dealing with animations is a plain nightmare. Wanted to click a button? You cannot, because the outer element the button sits in hasn’t finished animating yet. So, you make the test wait a second or two to make it pass, but then you run it on a slower machine and it fails again.
- They are fragile. Change one element in the DOM, and you can easily make half of the tests fail.
- Isolation is hard to achieve, as the whole environment needs to be set up for the tests to execute. Your test wanted to add a user, but the previous run didn’t clean up properly and the user was already there? Your two-hour test run just failed because of a false negative.
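The animation problem above is usually tamed by replacing fixed sleeps with an explicit, condition-based wait. Selenium ships this as `WebDriverWait`; the sketch below just illustrates the idea with plain Python (the `state` dict and the timer-driven “animation” are stand-ins for a real page).

```python
import threading
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Unlike a fixed sleep, this returns as soon as the condition holds,
    and only fails if it genuinely never does.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Hypothetical usage: wait for an animated panel to settle before clicking.
state = {"animating": True}

def finish_animation():
    state["animating"] = False

# Simulate the animation finishing shortly after the wait begins.
threading.Timer(0.3, finish_animation).start()

assert wait_until(lambda: not state["animating"], timeout=2.0) is True
```

On a fast machine the wait ends after roughly 0.3 seconds; on a slow one it simply polls a bit longer instead of flaking, which is exactly the property a fixed `sleep(1)` lacks.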
So, while e2e tests can be a really useful tool for e.g. smoke testing, you shouldn’t overuse them. Maybe you can test the same thing with an integration test? Or even a unit test?
3. Tests failing at random
Who hasn’t seen an automated test that passes 9 times out of 10? And what do you do? You run it again, until it passes, to get your changes through the pipeline and finally merge them to master. It doesn’t seem like a big issue either, since the tests mostly pass.
However, if you do it again and again, you’ll get used to it and develop a tolerance for failing tests. Your team will simply accept the fact that tests sometimes fail for no reason. And when something really breaks, you might just dismiss it.
There is actually a well-known idea behind this, called the “broken windows theory”. Once somebody breaks a window, it’s easier for the next person to break another one... and another one... up to the point where nobody cares anymore.
Fix those tests, or remove them. If you just leave them failing at random, not only will they provide no value, but they might also spread bad practices in the team.
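Before deciding whether to fix or remove a suspect test, it helps to measure how flaky it actually is rather than blindly retrying it in CI. A minimal sketch of that idea, using a deterministic fake in place of a real timing-dependent test:

```python
def flakiness_report(test, runs=20):
    """Run `test` repeatedly and return its failure rate (0.0 to 1.0).

    A rate of 0.0 means the test is stable; anything above zero is a
    candidate for fixing or removing, not for retry-until-green.
    """
    failures = 0
    for _ in range(runs):
        try:
            test()
        except AssertionError:
            failures += 1
    return failures / runs

# Deterministic stand-in for a flaky test: fails on every fifth run.
calls = {"n": 0}

def flaky_test():
    calls["n"] += 1
    assert calls["n"] % 5 != 0

rate = flakiness_report(flaky_test, runs=20)
assert rate == 0.2  # fails 4 times out of 20
```

A number like “fails 20% of the time” turns a vague annoyance into a concrete bug report, and re-running the report after a fix verifies the flakiness is actually gone.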
In this article, I wanted to stress the importance of having proper test automation in place. In my opinion, it is a crucial part of any successful software project. Without automated tests, it’s impossible to keep the code maintainable.
Also, while unit tests are great for helping us write loosely coupled code, we should also pay attention to whether we verify what’s important business-wise. Write test plans to ensure business processes work as expected, and then think about how you could automate these checks. If possible, avoid sluggish e2e tests, favouring faster options instead.
Implementing proper test automation will be tough at first for your team, but it’s an investment that pays off really quickly.