Bret Pettichord has posted his Homebrew Test Automation course notes. If you are thinking at all about test automation on a software project, read these notes. You will get a lot from them. Bret delivers the straight goods, and I respect him for that. My own experiences (though not nearly as extensive as Bret’s) confirm what he says.
The Kick in the Discovery
“Why do you like software testing?” is a question that I get asked frequently. A phrase from Richard Feynman comes to mind. When Feynman was asked about how he felt about the reward of his Nobel Prize, he said one of the real rewards of the work he did was “the kick in the discovery.”1 This has stuck with me. As a software tester, I enjoy discovering bugs. I seem to be one of those people who enjoys seeing how a system works when stressed to its limits. I get a kick out of discovering something new in a system, or being one of the first people to use a new system. Scientists like Feynman fascinate me, and a lot of what they say resonates with my thoughts on testing. Software testing can learn a lot from scientific theory; the parallels are very interesting.
Exploratory Testing: Exploring Unintended Test Results
Many great scientific discoveries have come about by accident during the ordinary scientific process of conjecture and refutation. A controlled experiment is an attempt to refute a hypothesis. When an experiment has unintended consequences, some scientists do a great job of handling what didn’t go according to plan, and this often leads to great discoveries.
One parallel to software testing is Ernest Rutherford and his work in nuclear physics. (Someone who spent time “smashing atoms” sounds like someone who might be good at software testing.) Rutherford observed unintended consequences, built on other work and collaborated with peers. He saw unintended consequences in experiments done by his peers, and developed patterns of thought around what was being observed.
The work that eventually led to the discovery of the nucleus of the atom is an interesting topic for software testers to study. If those running the gold foil experiment had executed it like routine-school testers, scripting a test case of firing alpha particles at the foil with the expected result that every particle passes through, would they have noticed the ones that bounced back? If so, what would they have done with particles that bounced back, when those weren’t in the plan and weren’t in the test script’s expected results? What if they were so focused on the particles that were supposed to go through the foil that they didn’t notice the ones that did not?
Not knowing exactly how much Rutherford and his colleagues had formalized the experiment, I can’t make any claims about exactly what they did. However, we see the results of the way Rutherford thought about scientific experimentation. What he and his colleagues observed changed the initial hypothesis, and subsequent experiments led to discovering the nucleus of the atom. They had an idea, tested it out, got unintended results, and Rutherford explored around those unintended results. He found something new that contradicted conventional knowledge and transformed the face of modern physics. We might infer that, like a good exploratory tester, Rutherford was more concerned with thinking about what he was doing than with following a formula by rote to prove his hypothesis.
Software testers don’t make discoveries that transform scientific knowledge, but the discoveries that are made can transform project knowledge. At the very least, these discoveries potentially save companies a lot of money. Bug discoveries are hard to measure, but every high-impact bug that is discovered and fixed prior to shipping saves the vendor money. Discoveries of high-impact bugs may be minimized by the team at first, but many times those discoveries are the difference between project success and failure.
James Bach and Cem Kaner say that exploratory testing is a way of thinking about testing. Exploratory testing, like scientific experimentation, allows for improvisation and for the exploration of unintended results. Those unintended results are often where the real discoveries lie in science, and where the bugs often lie in software testing. Detailed test plans and pre-scripted test cases based on limited knowledge may discourage discovery. Tim Van Tongeren and others have researched directed observation and the weaknesses associated with it.
One way of thinking about exploratory testing is to see it as a cycle: observing unintended consequences, exploring the possibilities, forming a hypothesis or theory about the results, and experimenting again to see if the new theory holds under certain circumstances. The cycle continues, and testing becomes as much about making new discoveries as it is about confirming intended behaviours. Pre-scripting steps and intended consequences can discourage observing unintended consequences in the first place. A hypothesis, testing mission, or test case is fine to detail prior to testing, but slavishly sticking to pre-scripted results can stifle discovery.
I have had some testers call exploratory testing “unscientific”. To them, a good scientific experiment consists of carefully scripted test cases that outline every step and its expected results. However, much of the time science doesn’t really work that way. A good deal of care is put into the variables in an experiment, but a lot of exploration also goes on. What is important is not necessarily the formula, but how to deal with unintended consequences. Scientific work is often more about thinking, dealing with empirical data, and making inferences based on experiments.
Scientific theories go far beyond empirical data, and new experiments confirm and disconfirm theories all the time. Yesterday’s scientific truth becomes today’s scientific joke. “Can you believe that people once thought the world was flat?” As a software tester I’ve known “zero defect” project managers who thought the software was bug-free when it shipped. It wasn’t funny when they were proved wrong, but the software testers were treated like “round ballers” when they provided disconfirming information prior to release.
Good scientists deal with a lot of uncertainty, and good software testers need to be comfortable with uncertainty as well. Software systems are becoming so complicated that it is impossible to predict all the consequences of system interaction. Directed observation requires predictability, and it carries the danger of not noticing results that aren’t predictable.
Exploratory testing is a way of thinking about testing that can be modelled after the scientific method. It doesn’t need to be ad-hoc, fly-by-the-seat-of-your-pants testing that lacks discipline. Borrow a little thinking from the scientific community, and you can have very disciplined, adaptable, discovery-based testing that can reliably cope with unintended consequences.
Association for Software Testing
Belonging to a professional organization is a good way to be part of a community and learn some tricks of the trade. As a software tester, I’ve found it hard to find an organization that seriously dealt with the area of work I engage in. I felt I should belong to some sort of professional organization, so I finally picked one. I didn’t get a lot from it, so I let my membership lapse when I heard about the formation of the Association for Software Testing.
I’m impressed by the goals of this association. This is a community I can respect, identify with and learn from. It isn’t an organization that is peddling certifications or shilling flavor of the month methodologies or tools. This is a serious effort to bring together practitioners, experts, students and academics in the software testing field. If you are a software tester who is looking for a professional organization to join to really get something from, I’d recommend taking a look.
Tester as Editor Metaphor
Sometimes when I’m asked to explain how I see the role of testers on development teams, I use an (admittedly simplified) editor role as an analogy. For example, I might be asked to write an article on a particular topic, say test automation, and the publisher asks me to touch on three areas: a test strategy, a list of test tools and an example. These three requirements serve as the acceptance tests. I will write several drafts on my own, and with each draft I will remove spelling errors and grammatical mistakes. These are my unit tests. I move iteratively through article drafts until I feel I have expressed the ideas that are needed for the article. Once my unit tests have passed, and I’m confident I’ve touched on the three areas I’ve been asked to write about (my acceptance tests seem to pass), I send the article to an editor for feedback.
The editor checks the content. If I haven’t met the obligation of the acceptance tests, they will let me know quickly that I missed a requirement. Good editors point out areas where I haven’t expressed an idea as clearly as I could. Good editors really know the audience, and point out areas that the audience may have problems with. They may also point out spelling mistakes and grammatical errors I have missed. They help draw out ideas and help me make the article all it could be given the time and resources available.
The editor doesn’t merely point out flaws; they also provide suggestions to help overcome them. A cluster of writing errors may indicate to an author that an idea is malformed. Editors can spot problems the author might miss because they are a skilled “second set of eyes”. They can provide constructive criticism and help encourage the author prior to publishing. It isn’t necessary for all articles to have a formal editor, but when I work with a good editor I realize how much better the article is than if I had done it on my own.
In many ways a software program is also an expression of an idea. An idea may be technically correct and may meet the requirements of the customer, but may not be expressed clearly for the intended audience. My role as a tester is not an adversarial one; like the editor, my goal is to encourage and to help make the program all it can be prior to its release. Like a good editor, a tester knows the audience of the program and the context in which it will operate.
In this post, Brian Marick talks about writer’s workshops which look like a good idea for doing software testing:
I’m drawn to the analogy of writers’ workshops as they are used in the patterns community. Considerable care is taken to prepare the author to be receptive to critical comments. At its worst, that leads to nicey-nice self-censorship of criticism. But at its best, it allows the author to really hear and reflect on critical ideas, freed of the need to be defensive, in a format where teaching what should change doesn’t overwhelm teaching what works and should stay the same.
Software development teams could learn a lot about constructive criticism from good writers’ workshops.
In another post, Brian sums up the idea I’m trying to get across here:
… I offer this picture of an Agile team. They – programmers, testers, business experts – are in the business of protecting and nurturing the growing work until it’s ready to face the world.
Good testers are not the adversaries of developers; they are part of a team that works collectively towards creating the best software it can. Bad testing, like bad editing, is not in the business of nurturing a growing work.
Ted Talks About the Customer
Ted O’Grady has an interesting post on the customer and their role on XP teams. I agree with what he has said.
It’s also important to think of the context that the software is being used in. I learned a lesson about getting users to test our software in our office vs. getting them to use the software in their own office. The Hawthorne Effect seemed to really kick in when they were on-site with us. When we observed them using the software in their own business context to solve real-world problems, they used it differently.
If the customer is out of their regular context, that may affect their performance on an agile team, especially if they feel intimidated by a team of techies who outnumber them. When they are in their own office and their team outnumbers the techies, they may be much more candid with constructive criticism. Just having a customer on-site doing the work with the team doesn’t guarantee they will approve of the final product; often there is a danger they will approve of what the team thinks the final product should be. When we had rotating customer representatives on a team, groupthink was greatly reduced.
Testers Provide Feedback
A question that comes up very often is what activities conventional testers can engage in on agile projects. To me, the key to testing on any project is providing feedback. On agile projects, the code should always be available to test, and iterations are often quite short, so a tester on an agile project should perform activities that provide rapid, relevant feedback to the developers and the business stakeholders.
A tester needs to provide feedback to developers to help them gain more confidence in their code, and (as James Bach has pointed out to me), feedback to the customer to help them gain more confidence in the product that is being delivered. So an agile tester should identify areas to enhance and complement the testing that is already being done on the project to that end.
There are potentially some activities that are more helpful than others, but providing feedback is central to what testers do. Some feedback may be in the form of bug reports, or it may be a confirmation that the code satisfies a story, or the application works in a business context.
Virtual Bugs
Mark McSweeny and I were talking about some of the challenges facing conventional testers on agile projects. One such challenge is what to do with bugs found during development, such as during Test-Driven Development when a tester and developer are pairing. It doesn’t seem fair to the developer to formally log bugs on a story before they have completed development on it. Many of them will be moot once the story is done, but some of them might squeak through. How do we keep track of them in a constructive way?
When they are pairing and developing tests, new test cases are added as they are generated, and the code is added to make them pass. However, sometimes when bugs are found during story development, the tester can overwhelm the developer and impede development progress. At this point, the developer and tester pair can split up, and the developer can pair with another developer and work towards completing the story. As a tester, what do I do with the bugs we discovered but couldn’t get the unit tests finished for?
On small teams, I keep a running tab of these bugs in my notes. When the story is complete, I check my notes and test those scenarios first, then log the ones that weren’t fixed during the story as “bug stories”. This works if there are a small number of developers and I’m the only tester, but it doesn’t scale well. On slightly larger teams, I have used a wiki to record these bugs, which the other testers also reviewed and used. When they tested a completed story, they would check the wiki first for any of these bugs; any that weren’t addressed in development were then logged as bug stories or in a fault-tracking system. This effectively creates two classes of bugs, which brings its own problems: it was hard to maintain both systems, the wiki and the bug tracker.
As I was describing some of the problems I’ve come across with bugs found during story development, Mark cut through my usual verbosity with clarity, and said I was describing: “virtual bugs”. This is a lot more concise than my five minute hand waving explanation of this different class of bugs.
I have started calling the bugs found during story development “virtual bugs”. My question to other conventional testers on agile projects is: “How do you deal with virtual bugs?” Please share your experiences.
Watir Release
The Web Testing with Ruby group released version 1.0 of the Watir tool today. Check it out here: Get Watir
Download the latest release which is a zip file at the top of the watir package list. (At the time of this posting, the latest version is 1.0.2.) Open the User Guide for installation instructions, and check out the examples to see how you can use the tool.
You will need Ruby if you don’t have it installed. Version 1.8.2-14 Final is a good candidate to use, or 1.8.1-12 if you prefer to be more on the trailing edge.
For those who may not be familiar with the tool, it allows for testing web applications at the web browser level. The project utilizes testable interfaces in web browsers to facilitate test automation. Currently it only supports Internet Explorer, but plans are underway to support other browsers. It is also only available on Windows.
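To give a feel for what browser-level testing with Watir looks like, here is a minimal sketch. It assumes a Windows machine with Internet Explorer and the Watir package installed, and the page address and form field names (`username`, `submit`) are purely illustrative; method spellings have varied between early releases, so check the User Guide and examples bundled with the version you download for the exact API.

```ruby
# Drive Internet Explorer from a Ruby script (Windows + IE only).
require 'watir'

ie = Watir::IE.new                             # start a new browser instance
ie.goto('http://localhost/login')              # navigate to the page under test
ie.text_field(:name, 'username').set('demo')   # fill a form field, located by its name attribute
ie.button(:name, 'submit').click               # press the submit button

# Because this is plain Ruby, the result can feed any assertion you like,
# e.g. checking that the welcome page came back:
puts 'PASS' if ie.contains_text('Welcome')
```

Since the script is ordinary Ruby, these steps drop naturally into Test::Unit test cases, which is where the tool gets its leverage for test automation.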
Green Bar in a Ruby IDE!
The Ruby Development Tool project team, who develop a plugin for Eclipse, have added some great features in their latest release. Most important (to me), is test::unit support, as well as some other features such as code-completion based on configurable templates, and a regular expression plugin.
I’ve been using RDT in Eclipse for a few months, and with this latest release, I’m very pleased to finally have a green bar in my Ruby development. Or more often in my case, a red bar.
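For anyone who hasn’t seen what feeds that green (or red) bar, here is a minimal Test::Unit example. The `Counter` class is a made-up stand-in for real application code, not anything from RDT itself; RDT’s runner simply executes a file like this and shows green when every assertion passes, red when one fails.

```ruby
require 'test/unit'

# A tiny class to put under test -- a hypothetical example.
class Counter
  attr_reader :count

  def initialize
    @count = 0
  end

  def increment
    @count += 1
  end
end

# Test::Unit collects any TestCase subclass and runs its test_* methods
# automatically when the file is executed.
class CounterTest < Test::Unit::TestCase
  def test_starts_at_zero
    assert_equal(0, Counter.new.count)
  end

  def test_increment_adds_one
    counter = Counter.new
    counter.increment
    assert_equal(1, counter.count)
  end
end
```

Running the file from the command line prints the familiar dots-and-summary output; inside Eclipse, RDT renders the same results as the bar.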
Thanks to Glenn Parker for letting me know about this release.
Traits of Good Testers
People frequently ask me what to look for in candidates when hiring testers. Michael Hunter, “The Braidy Tester”, has a great post on The Hallmarks of a Great Tester, with some good thoughts in it. I recommend checking it out.
I would add honesty, integrity, and courage to the list. Testers often end up in situations where ethical concerns arise. A tester needs to know what their ethics are, and to stand by them. As Lesson 227 of Lessons Learned in Software Testing says:
You don’t have to lie. You don’t have to cover up problems. In fact, as a tester, the essence of your job is to expose problems and not cover them up. You don’t have to, and you never should compromise your integrity. Your power comes from having the freedom to say what’s so, to the people who need to hear it. If you compromise your reputation for integrity, you’ll weaken one of the core supports of your power.*
Discovering these traits in a potential candidate can be difficult, but there are effective ways of interviewing to tell if someone is a great tester or not. Check out Johanna Rothman’s Hiring Technical People blog for more information on hiring techies.
*Excerpted from p.200 of Lessons Learned in Software Testing.