I was talking with Mark McSweeny the other day about a test design I had been mulling over. It involved automating a process that currently relies on a lot of manual testing and visual inspection. The tricky part of automating this kind of testing is the potential for variation in the items under test. Variation is hard for a computer to handle, but a human who understands the context can instantly spot it and tell whether it is permissible or whether it constitutes a test failure. I asked Mark’s advice on how to deal with the variation, and mentioned the tools I was thinking of using. Mark pointed out that I was overcomplicating the test design by thinking about what data structures, xUnit tools, and libraries to use, and not thinking enough about the human tester.
He described “fast failure” tests he writes that are pure change detectors, designed to run quickly. They provide feedback very fast, but they don’t provide much information beyond “Pass” or “Fail”. When a test fails, the human steps in and does the manual inspection. In many cases, the human can tell at a glance whether the failure is a bug. If the change is acceptable and recurring, the test gets updated. If it isn’t, the human recognizes the bug and logs it, or simply adds a new unit test and fixes the problem in their own code.
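To make the idea concrete, here is a minimal sketch of what one of these change detectors might look like, assuming the item under test can be rendered to bytes and compared against a stored baseline. The `render_report` function, module name, and file path are illustrative stand-ins, not Mark’s actual code:

```python
import hashlib
import unittest
from pathlib import Path

# Hypothetical module: stands in for whatever produces the item
# we would otherwise inspect visually.
from report_generator import render_report

BASELINE = Path("baselines/report.golden")


def digest(data: bytes) -> str:
    """Reduce an artifact to a cheap-to-compare fingerprint."""
    return hashlib.sha256(data).hexdigest()


class FastFailureTest(unittest.TestCase):
    def test_report_matches_baseline(self):
        # Pure change detection: any difference from the stored baseline
        # fails the test. The test doesn't know whether the change is a
        # bug; it only flags the item for a human to inspect.
        self.assertEqual(
            digest(render_report()),
            digest(BASELINE.read_bytes()),
            "Output differs from baseline -- inspect manually",
        )


if __name__ == "__main__":
    unittest.main()
```

If the human decides a flagged change is legitimate, updating the baseline file is the whole maintenance cost of the test.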
Fast failure tests have a lot of potential as a testing tool. What Mark described is something I’ve written about before: Computer Assisted Testing. After all, I’m part of a school that believes testing is an intellectual craft, and that skilled tester activities cannot be automated. An issue many have heard me rant about before is that until we can program inference and have some sort of intelligent, thinking bots doing testing, we can’t automate all the tests a human tester can do. Humans handle variation easily and respond well to change. Fast failure tests yield the best of both worlds: the computer does what it is good at, and the human exploits the computer so they can concentrate on what they are good at. “So why didn’t you think of this solution, Jonathan?” you might ask. I guess I got caught up in the technology instead of thinking about a solution that harnesses both the computer and the skills of a human. I forgot to practice what I preach. It also demonstrates how brilliant people like Mark design simple, elegant solutions, and teach me something every time we talk.
There are real benefits to fast failure tests, even if they don’t completely emulate what a human does. For example, say a tester must manually inspect ten items every build. We develop fast failure tests that do a rough approximation of that inspection, and run them every build. Suppose that in a typical build, two of the ten items report failures because of variation. The tester inspects those two items manually and, using their judgement and skill, realizes that these are legal variations. Suppose also that in one build out of five, the fast failure tests catch a genuine bug in one of the items, which the tester can then report. Instead of the burden being completely on the tester to inspect every item each build, they now inspect far fewer. Even though they may chase a couple of red herrings each build due to variation, the tests prove their worth by helping the tester quickly zero in on potential problems. This test design shows how a computer can help the tester work more efficiently.
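Here is a sketch of how that ten-item scenario might be wired up, assuming each build artifact can be compared byte-for-byte against a stored baseline. The directory names are illustrative assumptions:

```python
import hashlib
from pathlib import Path

ITEMS_DIR = Path("build_output")    # illustrative locations
BASELINE_DIR = Path("baselines")


def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def items_needing_inspection() -> list[Path]:
    """Compare each generated item to its stored baseline and return
    only the ones that differ -- the items a human must look at."""
    flagged = []
    for item in sorted(ITEMS_DIR.iterdir()):
        baseline = BASELINE_DIR / item.name
        if not baseline.exists() or fingerprint(item) != fingerprint(baseline):
            flagged.append(item)
    return flagged


if __name__ == "__main__":
    flagged = items_needing_inspection()
    if flagged:
        print(f"{len(flagged)} item(s) changed since baseline; inspect manually:")
        for item in flagged:
            print(f"  {item.name}")
    else:
        print("No changes detected this build.")
```

The script judges nothing; it just shrinks the pile of items the human has to look at from ten down to whatever actually changed.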
If there is a need for a more complex test that can provide more information about the failures, we can develop it over time. In the meantime, we have a solution that is good enough, and we can use it as a baseline for the test under development. However, a complex test automation solution can be dangerous. As Mark warned, test automation is software development, so it is just as prone to bugs, design problems, maintenance issues, and the like as any other software. A simple solution that requires some human intervention may be more efficient than a complex one that requires a lot of time spent in the automation code just to keep it working.
I’m glad I have smart people like Mark around to talk to, who challenge my ideas and let me know when I’m off the mark.