There is often debate about test automation versus manual testing. When I think about testing, I look at an application in three broad layers: the code (on the machine side), the system (where the finished software lives), and the visible layer, or how the software is used from an end user’s perspective. I often call this visible layer the social context because of the environment in which much end-user software is used. When we spend a lot of time in one context, testing starts to specialize because we concentrate on part of the picture.
When we view an application from the source code perspective, the testing is dominated by automation. When we look at the system context (as some of my operations friends do), testing involves integration in a system, and testing hardware, firmware, drivers, etc. to make sure the software gets served up correctly. Automated testing tends to get more complex the further we move from the code toward the user interface. Attempting to emulate user actions is difficult, and high-volume automated functional tests can involve massive amounts of automated test code, which can be problematic to maintain. Sometimes functional tests become so complex that the test code rivals the software it is testing: code to serve up a component, test data generation code, and the code that attempts to emulate user actions.
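As a rough illustration, here is a minimal sketch contrasting a code-level test with a UI-level test of the same behaviour. The login_allowed() function, the page URL, and the element IDs are hypothetical stand-ins, not from any real application; the Selenium calls are real, but the UI test is skipped by default because it needs a browser and a deployed build. The point is simply how much more scripting, and how many more breakable steps, the visible layer demands.

```python
import unittest


def login_allowed(username: str, password: str) -> bool:
    """Component under test: trivially accepts one known credential pair."""
    return username == "admin" and password == "secret"


class CodeLevelTest(unittest.TestCase):
    """Testing at the code layer: one direct call, no environment to emulate."""

    def test_valid_credentials(self):
        self.assertTrue(login_allowed("admin", "secret"))


class UiLevelTest(unittest.TestCase):
    """Testing the same rule through the visible layer: every user action
    (navigate, type, click, read the result) must be scripted, and each
    step is a place the test can break when the UI changes."""

    @unittest.skip("needs a browser driver and a deployed build of the application")
    def test_valid_credentials_through_browser(self):
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Chrome()
        try:
            driver.get("http://localhost:8000/login")  # assumed URL
            driver.find_element(By.ID, "username").send_keys("admin")
            driver.find_element(By.ID, "password").send_keys("secret")
            driver.find_element(By.ID, "submit").click()
            self.assertIn("Welcome", driver.page_source)
        finally:
            driver.quit()


if __name__ == "__main__":
    unittest.main()
```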
Personally, I agree with Cem Kaner and call automated testing “computer-assisted testing”. The computer is a tool I use in conjunction with good manual testing. Until machines are intelligent, we can’t really automate testing. We can automate some aspects of testing to help maximize problem-solving efficiency.
Traditionally, software testers tend to have a handle on the social context, or how the software is used in a business context. As a result, much conventional testing is focused on the visible layer of the application. I tend to prefer testing at various layers in an application. I value testing components in isolation as well as testing software within a system. There are advantages and drawbacks to both. While I value isolation, testing at the visible UI layer, particularly brain-engaged manual testing, has merit. Sometimes testers focus a great deal on testing in this context when component isolation might be more efficient. Often, traditional testers hope for an automated testing tool that can do the work of a tester. I’ve yet to see this occur successfully, but there are still tasks that can be automated that are a big help to testing efforts.
Frequently there are bugs that are difficult to track down that crop up in the visible layer of the application. These are due to the visible application at runtime becoming greater than the sum of its parts. A kind of chaos-theory situation occurs when the application is used in ways the underlying code was not designed to handle. By the time a minor fault at the code level bubbles up to the UI, it may have rippled through the application, causing a catastrophic failure. Unfortunately, these kinds of usage-driven faults are problems automated tests at various layers do not tend to catch. Often it is some sort of strange timing issue, as I note in this article. Other times, it’s due to actions undertaken in a social environment by an unpredictable, cognitive human. These variable actions, motivated by inductive reasoning and driven by tacit knowledge, are difficult for others to repeat.
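To make the timing point concrete, here is a minimal sketch of a lost-update race. The Counter class and the thread counts are illustrative only. A single-user, code-level check always passes, while concurrent, user-like activity may or may not lose updates on any given run, which is exactly what makes this kind of failure so hard to repeat.

```python
import threading


class Counter:
    """A counter whose increment is a read-modify-write with no lock."""

    def __init__(self):
        self.value = 0

    def increment(self):
        current = self.value      # read
        self.value = current + 1  # write; a thread switch in between loses an update


def single_user_check():
    """Code-level check: one caller, deterministic, always passes."""
    c = Counter()
    for _ in range(1000):
        c.increment()
    assert c.value == 1000


def concurrent_usage_check(threads=8, per_thread=100_000):
    """Usage-level check: several 'users' at once; the outcome varies by run."""
    c = Counter()
    workers = [
        threading.Thread(target=lambda: [c.increment() for _ in range(per_thread)])
        for _ in range(threads)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    expected = threads * per_thread
    print(f"expected {expected}, got {c.value}"
          + ("" if c.value == expected else "  <- lost updates"))


if __name__ == "__main__":
    single_user_check()
    concurrent_usage_check()
```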
Focusing too much on one testable interface in an application can skew our view. If we view the application by the code most of the time, we have a much different picture than if we view it through the UI. Try flipping the model of an application on its side instead of viewing it bottom up (from the code) or top down (from the UI). You may discover new areas to test that require different techniques, some of which are great candidates for automation, while others require manual, human testing.