This post is not about testing techniques. There is plenty of information about this topic out there, and I feel that repeating the same things would be redundant. If you want to learn how to write automated tests properly, go and read Uncle Bob, Martin Fowler, Kent Beck, Sandro Mancuso and some others. They’ve been testing for decades and share their knowledge in books, videos and articles. Go and read/watch them!
So… what is the point?
If I don’t want to bore you with theory and code, what is this post about, then? I thought it would be more interesting to share my own experience writing (and not writing) automated tests, as it covers real cases and may motivate some of those who aren’t convinced yet. Some of them will never be convinced, but we must try anyway and bring more people onto the path of professionalism; even if it’s just one person, that will make our profession and our world better.
Did you say manual testing is quicker?
I once worked on a project that took about 1 day of focused work to be manually tested (and not even all the possible paths; that would have taken several days). Imagine that project had 0 tests and the code was also a big mess. How could you make any changes with confidence to fix that mess? Would you manually test a module for hours just because there’s nothing that does it for you in milliseconds or seconds? How would you add new features and sleep well if you don’t know whether that new code has broken existing functionality, without spending hours or days testing by hand?
I wish I had had the knowledge to write at least some acceptance tests to get some confidence when I was working on this project, instead of spending hours clicking with the mouse and worrying about fixing that horrible code. The best thing I could achieve at that moment was to write some “tests” in a Main class, with if “assertions” (God save us…) that logged the failures of a critical piece of functionality. No JUnit or similar, but still better than nothing. Testing was already trying to find its way out of my head somehow, even if I didn’t know the right tools yet.
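For those who have never seen this kind of thing, here is a minimal sketch of what those “tests” looked like; the code under test is a made-up stand-in, not the original project:

```java
public class ManualChecks {

    // The code under test; a trivial stand-in for the real logic.
    static int total(int[] prices) {
        int sum = 0;
        for (int price : prices) {
            sum += price;
        }
        return sum;
    }

    public static void main(String[] args) {
        check("an empty order totals zero", total(new int[] {}) == 0);
        check("prices are added up", total(new int[] {10, 5}) == 15);
    }

    // The "assertion": an if statement that logs the failure.
    static void check(String name, boolean passed) {
        if (!passed) {
            System.err.println("FAILED: " + name);
        }
    }
}
```

Crude, but it already gives you a repeatable check that runs in milliseconds instead of another round of clicking.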
Even the smallest thing can make a difference
On a recent project, I had to refactor a prototype with no structure at all. The functionality consisted of reading a JSON file, performing some updates and displaying the data, but the business logic needed to be pulled out, and the persistence of new objects was not implemented either. I decided to write some integration tests (one at a time) before starting the refactoring to create the persistence layer. Just 7 tests on a single class (the one that was going to encapsulate the persistence for that kind of object) helped me catch bugs, not only during the refactoring phase, but also when adding the missing persistence later.
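As an idea of the shape of those tests, here is a hedged sketch with JUnit 4; the tiny on-disk repository and all the names are illustrative assumptions, not the project’s real code:

```java
import static org.junit.Assert.assertEquals;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import org.junit.Before;
import org.junit.Test;

public class FileNoteRepositoryShould {

    // A deliberately tiny on-disk "persistence layer": one note per line,
    // in the form "id|text". A stand-in for the real throw-away class.
    static class FileNoteRepository {
        private final Path file;

        FileNoteRepository(Path file) {
            this.file = file;
        }

        void save(String id, String text) throws IOException {
            String line = id + "|" + text + System.lineSeparator();
            Files.write(file, line.getBytes(), StandardOpenOption.APPEND);
        }

        String findById(String id) throws IOException {
            for (String line : Files.readAllLines(file)) {
                String[] parts = line.split("\\|", 2);
                if (parts[0].equals(id)) {
                    return parts[1];
                }
            }
            return null;
        }
    }

    private FileNoteRepository repository;

    @Before
    public void setUp() throws IOException {
        // A fresh temp file per test keeps the tests independent.
        Path file = Files.createTempFile("notes", ".txt");
        repository = new FileNoteRepository(file);
    }

    @Test
    public void persists_and_reloads_a_note() throws IOException {
        repository.save("42", "write the test first");

        assertEquals("write the test first", repository.findById("42"));
    }
}
```

Nothing fancy: a real file, real reads and writes, and a safety net that runs in milliseconds.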
Even testing a throw-away persistence class saved the day, and that throw-away implementation lasted for months (and sped up the development) until we finally needed to switch to a proper database. Writing those few integration tests definitely saved me more time (and pain) than debugging would have.
So here we have an interesting dilemma: my persistence class was an on-disk implementation and was supposed to be replaced later; would you have left it untested just because you knew you were going to delete it in a few months? I hope not; I think that would have been a big mistake in the form of bugs and debugging time.
The synchronization hell
One of my favorite projects was an Android application that allowed users to create and modify content offline, and then synchronize with the server. The server might have newer updates on the same entities (you could open a session on a different device and update the content), so versioning was required. Also, if you are using a database (SQLite on Android, which is relational), you need to keep track of the database primary keys and also update, in some cases, the external keys created by the server, which are in fact the ones that matter in domain terms. So you need to search sometimes by the primary key and sometimes by the domain key. Putting it all together, this is always a bit complex, or at least requires developers to be careful and methodical.
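To make the double bookkeeping concrete, here is a minimal sketch of such an entity and the two lookups it forces on the data layer; all names are illustrative assumptions, not the app’s real model:

```java
// An offline-editable entity that carries both keys, plus a version for
// conflict detection during sync.
public class SyncableNote {

    private final long localId; // SQLite primary key, assigned on local insert
    private String serverId;    // domain key, null until the first sync
    private long version;       // bumped by the server on every update
    private String content;

    public SyncableNote(long localId, String content) {
        this.localId = localId;
        this.content = content;
    }

    // After the first successful upload, the server's key and version must
    // be written back locally, because later sync responses reference them.
    public void assignServerKey(String serverId, long version) {
        this.serverId = serverId;
        this.version = version;
    }

    public boolean isAlreadySynced() {
        return serverId != null;
    }

    public long localId() { return localId; }
    public String serverId() { return serverId; }
}

// The data layer ends up needing both lookups: local code navigates by the
// primary key, while sync responses reference the domain key.
interface NoteStore {
    SyncableNote findByLocalId(long localId);
    SyncableNote findByServerId(String serverId);
    void update(SyncableNote note);
}
```

Every place where one of those keys is assigned, updated or matched is a place where a bug can hide, which is exactly why this layer begs for integration tests.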
We implemented a good architecture, which encapsulated the data and the synchronization part, but that only isolated the complexity of a functionality that needs special care from developers to get right. The deadline was more than tight and we “didn’t have the time” to investigate how to write integration tests on Android. I wrote some unit tests for the business logic, but most of the problems were in the data layer. The result? HOURS AND HOURS of three developers debugging the bugs (did you say writing the tests was expensive?), sometimes even making queries to the database from the debugger (at least we were using an ORM and didn’t have to write SQL…). The worst part wasn’t that, but seeing old bugs come back when making more changes to fix or add other features. It felt like a never-ending loop where the damn bugs in the data layer never went away. That simply drained our energy.
I take this personally, as I think it was my fault not to draw a red line and do the work that needed to be done in order to build the functionality incrementally and safely. I failed to force myself and the team to step back and rely on engineering instead of rushing to become the heroes, all to save maybe 1 day in an unrealistic schedule. I’m 100% sure we would all have saved more time (I’m going to leave the stress out of it) than the time we spent swearing and using the debugger. I still regret it, but I learned the lesson.
Device fragmentation
Manual testing is really expensive, especially when you have to test on multiple devices. I’m talking about Android (which suffers from the fragmentation problem), although this post is not focused on a particular platform; it is just a clear and common case nowadays. You need to check that the integration with the framework works, as sometimes the same code doesn’t behave the same on different devices.
Let’s try some maths on Android:
A project takes 16 hours to be fully tested by hand. The number of Android versions you want to support is 8 (from API 16 to API 23, which is quite common). That makes 16 * 8 = 128 hours. And this testing needs to be done frequently, at least once per release (this is quite low in my opinion, but it simplifies the example), so multiply 128 hours by the number of releases per year, and also by the number of years you expect the product to be alive. This could get even more complicated if you add RAM constraints to the equation. And, no matter what requirements and constraints you apply to calculate the effort, that number is going to grow as you add more and more features. Do you still think that manual testing is cheaper than writing integration tests?
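To put hypothetical numbers on it (the release cadence and product lifetime below are assumptions, purely for illustration):

```java
public class ManualTestingCost {
    public static void main(String[] args) {
        int hoursPerFullPass = 16; // one full manual pass, from the example above
        int apiVersions = 8;       // API 16..23, from the example above
        int releasesPerYear = 6;   // assumption for illustration
        int years = 3;             // assumption for illustration

        int hoursPerRelease = hoursPerFullPass * apiVersions;        // 128 hours
        int totalHours = hoursPerRelease * releasesPerYear * years;  // 2304 hours

        // At 8 working hours a day, that is 288 days of someone clicking around.
        System.out.println(totalHours + " hours = " + (totalHours / 8) + " working days");
    }
}
```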
This example could be extrapolated to web apps, where different browsers may run JavaScript in different ways, or to other kinds of platforms. I once worked at a company that started to hire more Support and QA people instead of hiring more developers to increase the test coverage. The result was more bugs, more customers complaining and more money spent on salaries: Support people to deal with customer complaints and QA people to do manual testing. And this was a never-ending loop. Does this really make any sense?
Your manager is not your priest
Unless they were really good developers in the past and understand what you are talking about, don’t tell them that you write automated tests. In fact, don’t tell them anything about the inner workings of your job. The more you tell them, the more likely they are to think that you are wasting time doing things “too well”, if they don’t have good judgement and knowledge about programming. They care about quality, budget and schedule, but don’t realize that good, auto-tested code helps to achieve all three in the medium and long term. Most of them won’t understand that, many times, in order to go from A to B, you need to stop at C for a moment. To compare it to climbing: it is like falling off because you rushed, instead of taking safe little steps that don’t make you slide back or die. Closing our eyes won’t make the problem disappear, and the product won’t be perfectly finished on time by the time we pull our heads out of the sand if we don’t do things properly.
We developers must do the work that has to be done in order to make a project succeed, and managers must trust us. If they don’t, try to convince them by converting your reasons into numbers and money, so we all speak the same language. If this still doesn’t work, there are plenty of companies with competent managers screaming for good developers; don’t waste your time.
Professionalism
Every time QA finds a bug, a kitten dies and you should feel bad. As human beings, we make mistakes and that’s ok, but we must do the best we can to minimize them. When there is a bug, it is our fault and there is no excuse if we didn’t try as hard as we could to avoid it.
Some people believe in filling the room with QA guys to “make sure” the project has no bugs. When you start a project and don’t write tests, the first beta is just a big pile of bugs. You can hire some more QA people, or an army of monkeys if you prefer, and maybe the first release will have some quality, but what comes next? A second (or third, or whichever number you want) release built on top of a codebase with a lot of technical debt and no confidence for the programmers to fix it or add new features. The project will eventually collapse. We’ve seen this in the past with so many projects. Let me insist: this will happen. I’ve seen medium-sized codebases explode because the code was a mess and had no tests at all, so there was no chance to refactor it safely; and I’ve seen larger codebases survive for a long time. The difference? Automated tests and the developers’ care. Not taking this into account only means one thing: the people responsible just don’t care, forcing developers to rush and make the pile bigger. This is totally unprofessional.
Maybe you are trying to convince yourself by saying “but I test all my code by hand!”. No, you don’t. Do you test all the error handling? How can you be sure that your error handling works when an external server fails? Maybe by hardcoding error responses before they reach the UI and checking that nice error message? What if you forget to remove the hardcoding before shipping? When you make a change nearby, how do you know that you didn’t break it? Do you hardcode errors AGAIN and remove them AGAIN? And what about the multiple variants? Do you test them all every time you change a line of code in that module? And this is only one example; there are plenty of cases that cannot be tested by hand in 30 seconds (and even that already sounds expensive to me), so nothing can make sure they still work when the code changes. Do you want an automated tool to check it for you in seconds or minutes, or are you planning to change careers and become a QA tester?
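That server-failure case, for instance, can be pinned down once in a test instead of being hardcoded by hand over and over. A minimal sketch with JUnit 4, where the gateway, the service and the message are all made-up names:

```java
import static org.junit.Assert.assertEquals;

import java.io.IOException;
import org.junit.Test;

public class ProfileServiceShould {

    // The collaborator that talks to the external server.
    interface ApiGateway {
        String fetchProfile(String userId) throws IOException;
    }

    // The code under test: turns a server failure into a friendly message.
    static class ProfileService {
        private final ApiGateway gateway;

        ProfileService(ApiGateway gateway) {
            this.gateway = gateway;
        }

        String profileOrError(String userId) {
            try {
                return gateway.fetchProfile(userId);
            } catch (IOException e) {
                return "Something went wrong, please try again";
            }
        }
    }

    @Test
    public void shows_a_friendly_message_when_the_server_fails() {
        // The failing server is simulated by a stub; nothing is hardcoded
        // in production code, and the check runs in milliseconds, forever.
        ApiGateway failingGateway = userId -> {
            throw new IOException("500 Internal Server Error");
        };

        assertEquals("Something went wrong, please try again",
                new ProfileService(failingGateway).profileOrError("user-1"));
    }
}
```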
Happiness and unicorns?
Not everything about automated testing is nice; there are also sad stories. Writing automated tests is a discipline that requires years of practice and a lot of will to get acceptably right. To me, it’s even more difficult than writing good production code, as the same principles apply to both, but test code is also really easy to mess up. At some point, some tests can end up really coupled to the implementation and make maintenance difficult. This happens to me less and less these days, as I gain more experience, but it’s quite common in our first steps. You also have to fight with testing frameworks, especially when testing UI, databases, networking, etc. Don’t let this put you off. Did you give up writing production code because it was difficult to do it right in your first months? The same thing applies to test code.
Conclusion
The old days of cowboy programming are coming to an end. Movements like Software Craftsmanship are opening many eyes. Other engineering disciplines have used automated testing for decades. It’s time for software developers to stop being too smart/brave and do things properly. Untested code will be more difficult to write and extend, and a nightmare to refactor. Experience from the last few decades shows that complex products without good test coverage are sentenced to death, sooner rather than later.
Please, stop writing untested code.