Category Archives: exploratory testing

Exploratory Testing Article

Exploratory Testing: Finding the Music of Software Investigation is an article that explains exploratory testing using music as an analogy.

I first wrote this article over a year ago as an introductory piece on exploratory testing. I wrote it with a technical audience in mind: readers who may never have heard of exploratory testing. When one of my programmer friends exclaimed: “I get what exploratory testing is now!” I knew it was time to stop tweaking it. It has languished in my article drafts folder for months, so I decided to publish it myself.

I hope you enjoy it.

UPDATE: In the spirit of the article, and the work that I do, there are reports of the PDF not loading in an older version of Firefox with an older version of Adobe Reader. I tested it with Safari, Opera, Firefox and IE 7 with Adobe Reader 8, and I get application errors on file load when using Firefox 1.5 and Adobe Reader 6. If you have problems, use “File/Open” within the Firefox PDF reader plugin, which seems to work. That’s the workaround; I’m working on a fix. 🙂

Getting Started With Exploratory Testing – Part 4

Practice, Practice, Practice

To practice exploratory testing, you will find that you need to be able to generate test ideas, sometimes at a moment’s notice. The sky (or more accurately your mind) is the limit to the kinds of tests you can implement and practice with. The more you think up new tests, and learn to analyze software from many different angles, the more thorough your testing and the resulting discoveries will be. Like anything else, the more you practice, the better you will get.

Test Idea Strategies

Here are some test strategies you can use to generate test ideas to practice exploratory testing. Practice testing, but also practice thinking about testing, applying strategies and doing analysis work.

Concrete Strategies

Sometimes as a tester you need to grab specific test ideas off the shelf to get started quickly. Do you have a source of ideas for specific strategies, test sequences, or test inputs? Some test teams keep quick-reference lists of specific test ideas for their applications under test for this purpose. If you are practicing exploratory testing and find you need some new ideas quickly, here are some places to look.
Michael Hunter has a fabulous list of test ideas in his You Are Not Done Yet blog series.

Elisabeth Hendrickson’s Test Heuristics Cheat Sheet describes “data type attacks & web tests”.

Karen Johnson has some concrete test ideas on her blog as well. Check out Testing by the Numbers, Chars, Strings and Injections, and ; SQL injections- -‘ for some ideas.

Concrete test ideas can be incredibly useful. Recently I decided to grab some of my favorite testing numbers off the shelf and input them through a test interface I had just developed (written in Java, using a library that allowed me to write commands in JUnit tests). I found a bug within seconds, thanks to a testing interface that let me enter inputs with more speed and accuracy, and to my off-the-shelf testing numbers.
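
Here is a minimal sketch of what keeping such “favorite numbers” on the shelf can look like in JUnit. The echo() method is a hypothetical stand-in for a real interface under test, and the input list is just one example; substitute your own quick-reference values.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FavouriteNumbersTest {

    // Hypothetical stand-in for the real interface under test.
    private String echo(String input) {
        return String.valueOf(input);
    }

    // Off-the-shelf inputs: boundaries, widths, and oddballs.
    private static final String[] FAVOURITE_NUMBERS = {
        "0", "-1", "1", "255", "256", "65535", "65536",
        String.valueOf(Integer.MAX_VALUE),
        String.valueOf(Integer.MIN_VALUE),
        "2147483648",      // one past Integer.MAX_VALUE
        "007", "1e9", "0x1F", " 42 ", ""
    };

    @Test
    public void favouriteNumbersSurviveARoundTrip() {
        for (String number : FAVOURITE_NUMBERS) {
            assertEquals("input was altered: " + number,
                         number, echo(number));
        }
    }
}
```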

Abstract Strategies

Abstract strategies are all about testing ideas. These require more thinking than a quick off the shelf test, but the benefits are enormous. Applying these kinds of strategies to testing can result in very thorough, creative, and disciplined test strategies and execution. You can use these strategies to help design tests in the moment, or while doing test analysis or test planning. Some examples include test models and heuristics.

James Bach’s Heuristic Risk-Based Test Strategy Model is a great place to start. In fact, James’ methodology page is full of great ideas for testers. I touched on the subject of a benefit of using models in this blog post Speeding Up Observation with Models.

Thanks to the work of James, there are several testing mnemonics you can use, or better yet, use as inspiration to create your own.

I touched on the subject of learning testing heuristics here.

Other models include coverage models, failure mode analysis, and McLuhan thinking, among many others.

Language Strategies

This is another abstract strategy, but language is so important that I decided to give it its own category. Examples of these strategies are to look at the meaning of words, how they are put together, and how they might be interpreted in different ways. I use dictionaries a lot to help my thinking, and they can be a great tool for exposing vague terms. I also use grammar rules when I investigate a product and look for potential problem areas. I engage in almost a type of “software hermeneutics.” The field of linguistics (the study of language) also has a wealth of tools we can use to help generate test ideas.

The requirements-based testing community has a good handle on testing written documentation. When I look at anything written down that has to do with the product (in the software, in supporting documentation, comments, tests, etc.), I carefully examine the wording and look for potential ambiguities. For example, Bender RBT Inc. has a paper, “The Ambiguity Review Process”, which contains a section called “List of Words that Point to Potential Ambiguities”.

Once you find an ambiguity, can you interpret the usage of the software in different ways and test according to different definitions? This is often a surprising source of new test ideas. I use language analysis a lot as a tester, and it is a powerful method for me to generate test ideas.

Sensing and Feeling Strategies

Using Emotions

This can include situations where our senses are telling us something that we aren’t immediately aware of. We feel a sense of discomfort, sometimes called “cognitive dissonance”. Things appear to be functioning, but we feel conflicted, sometimes for reasons that aren’t readily apparent. I’ve learned to use that “feeling that something is off” as a clue to start investigating an area much more heavily. It almost always pays off in the discovery of a good bug.

Michael Bolton describes this emotional side of testing very well:

…When I feel one of those emotional reactions, I pause and ask “Why do I feel this way?” Often, it’s because the application is violating some consistency heuristic–it’s not working as it used to; it’s not supporting our company’s image; there’s a claim, made in some document or meeting or conversation, that’s being violated; the product is behaving in a way that’s inconsistent with a comparable product; …

Update: Be sure to check out Michael’s excellent presentation on this topic here. It might change the way you think about testing.

Using Your Senses

Have you ever pushed a machine to its limits when testing and noticed something physically change in your test environment? This is also a trigger for me to push harder in that direction, because a discovery is on the way.
Here are some examples:

  • a sluggish response when interacting with the UI or an API
  • interactions slowing down
  • the sense that the machine is labouring
  • the sound of the fan kicking in hard when running a test
  • the feel of the device getting hot

An old rule of thumb when traveling is to use a map, but follow the terrain. Being aware of any strange feelings you have when testing (it feels off, let’s investigate this a bit more), and being aware of changes to the test environment are great ways to explore the system.

George Dinwiddie reminded me of a good rule of thumb:

As Bob Pease (an analog EE) used to say in Electronic Design and EDN magazines, ‘If you notice anything funny, record the amount of funny.’

As a tester, when I see something funny, my first instinct is to get the application to repeat the funny behavior. Then I observe whether follow-on testing creates even more funny behavior. In many cases it does, because bugs often tend to cluster.

Some of my most significant discoveries have come about because I paid attention to a weird feeling and investigated further: I noticed something flicker out of the corner of my eye, noticed that a log file I was tailing on a server was writing at a strange rate, or noticed that a certain action caused my machine to start labouring. It’s often the subtle things that are the biggest clues to overlooked problems in the system, so it’s a good idea to practice your observation skills.

Oblique Strategies

These are ideas you can use to spur creative thought processes if you find yourself running out of ideas. Michael Bolton introduced me to these last year while we were working on a testing exercise. I was running out of ideas, and Michael sent me to this web site that generates oblique strategies. After a couple of tries, I ended up on a whole new idea set that jump-started my testing.

Wikipedia says this about Oblique Strategies: “…is a set of published cards created by Brian Eno and Peter Schmidt. Now in its fifth edition, it was first published in 1975. Each card contains a phrase or cryptic remark which can be used to break a deadlock or dilemma situation.”

When I was a kid, my family had a game where you would blindly open a dictionary, close your eyes, and put your finger down on a page. You would then open your eyes, look at the word closest to your finger, and make up a short story involving that word. A variation involved doing this for two words, and then combining them to try to come up with something creative or funny. This is a similar kind of idea-generating strategy to Oblique Strategies, but not as sophisticated. There are other, similar ways of getting oneself out of a rut.

With these sorts of tools you often need to be creative to apply your result to testing, but with a little bit of trial and error and creative thinking, you will be surprised at what you come up with.

Uncategorized, or Inexplicable Strategies

Sometimes, believe it or not, some of my friends dislike my need to systematize, categorize and neologize. They prefer to think that their testing ideas are the work of intuition, or are dropped to them by the testing muse, or the idea fairy. That’s fine, and I can respect that they think about things differently; they frequently have ideas and insights that I would miss. I’ve added this section to include the mysterious, the unexplained, and any other test strategy I might not be aware of, or would be hard-pressed to categorize.

What works for you? Do you use a strategy for testing that I haven’t touched on in this post? This list is in no way exhaustive, and it, like any other model, is imperfect. I’m sure my own model and strategies will change as I learn more, and get more feedback from colleagues, and you, the reader.

I hope this list of test strategies helps you as you practice exploratory testing. Test idea generation can be as creative as we permit ourselves to explore, and one of the wonderful things about software testing is that sometimes the weird idea combinations are the best ones. Furthermore, practice helps build the strength and agility of the most powerful testing tool we have at our disposal, the human mind.

Getting Started With Exploratory Testing – Part 3

Wean Yourself off Scripted Tests

Scripted tests (test cases that are written in advance, contain procedural steps to execute, and usually end with some sort of “expected results”) are very common. If you are a tester who is used to working only from these kinds of test cases, transitioning to exploratory testing has special challenges.
One way to get started with exploratory testing is to change the way you implement scripted tests, with two goals in mind:

  1. to get a feeling for exploratory testing
  2. to be able to engage in testing without needing pre-recorded test scripts

Some testers can find it overwhelming, and can almost feel a sense of panic, when they consciously try out exploratory testing. Exploratory testing is actually a natural activity for us to do, but sometimes we are so conditioned to follow a script, and to test only what we’ve been told to test, that it can be a difficult transition to make at first. There are some useful things to remember:

  • If you feel overwhelmed by possibilities, stop, gather your thoughts, then pick one simple top priority item to test.
  • There is no right or wrong way of practicing exploratory testing; your own unique creative and idea generation methods are a source of strength.
  • Minimal planning can go a long way. To start, set out a time period (an hour might be a good place to start), and an area of focus in an application.
  • For your self-allotted time box of exploratory testing, try not to multitask. Stick to your area of focus, and resist the urge to check email, answer colleagues’ questions, or test other applications. If you find areas to explore outside of your area of focus, quickly take note of them, and return to them after your time-boxed testing session.

One way to do this is to encourage yourself to add more variability when exercising a test script. Elisabeth Hendrickson talks a lot about adding variability to testing in her work. One way to consciously add variability is to change the program inputs that are recommended in the test script you are following. You can change them slightly or dramatically; experiment with different input types; use some of your “old favorite” inputs; write a script to generate test data; or use something like PerlClip.
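
As a rough illustration of the “write a script to generate test data” idea, here is a small sketch in Java. Everything in it (the class name, the mutation choices, the list of old favorites) is invented for the example; the point is simply to produce inputs the script author never listed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Generates varied test inputs instead of reusing a script's canned values. */
public class InputVarier {
    private static final Random RANDOM = new Random();
    private static final String[] OLD_FAVOURITES = {
        "", " ", "null", "<script>", "日本語", "O'Brien"
    };

    /** Mutates a scripted input slightly: doubled, padded, or case-changed. */
    static String vary(String scripted) {
        switch (RANDOM.nextInt(3)) {
            case 0:  return scripted + scripted;
            case 1:  return " " + scripted + " ";
            default: return scripted.toUpperCase();
        }
    }

    /** A batch of candidates: mutations of the scripted value plus favourites. */
    static List<String> candidates(String scripted) {
        List<String> inputs = new ArrayList<String>();
        for (int i = 0; i < 5; i++) {
            inputs.add(vary(scripted));
        }
        for (String favourite : OLD_FAVOURITES) {
            inputs.add(favourite);
        }
        return inputs;
    }

    public static void main(String[] args) {
        for (String input : candidates("Mary Had a Little Lamb")) {
            System.out.println("try: [" + input + "]");
        }
    }
}
```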

Another way to consciously add variability into your test script is to vary the steps. Here is an analogy. A friend of mine is an accomplished musician with strong technical skills, but struggles with improvisation. Here is an exercise we did recently:

  1. I recommended that she take a simple music score (like Mary Had a Little Lamb), and try to play it note for note. I wanted her to play a rendition that she felt was as close to the way the author intended the sheet music to be interpreted.
  2. Next, I asked her to add ornamentation to her playing. (Ornamentation is the subtle variation, or slightly personal style, that comes out when playing an instrument; it tends to differ from musician to musician.)
  3. Once she had done that, I asked her to play the song again, but to ignore some of the bars in the sheet music. I asked her to anticipate ending up on an important, signature part of the song so it was still recognizable, but encouraged her to do whatever came to mind to get there. It could be silence with counting and waiting; it could be one extended note; it could involve running up and down an appropriate scale, playing what she thought should be there from memory, or playing whatever was going on in her head.
  4. With that out of the way, I told her to throw caution to the wind and play with complete freedom. I asked her to play for roughly the same amount of time as the song might take, but to play whatever came to mind.

I asked her to record and send me her versions of the song at each step. The results were surprising and satisfying. In step 1, the song sounded as you would expect, but was a bit wooden, or forced, as if a computer had played it. In step 2, the performance came alive. In step 3, there were flashes of brilliance and the odd spot where execution went off the rails. Step 4 was a huge surprise. My friend had been listening to progressive heavy metal earlier in the day, and the piece she came up with while improvising didn’t sound like Mary Had a Little Lamb at all. It sounded like part of a Dream Theater song. The notes were there, the scales were there; in fact, most of the framework that Mary Had a Little Lamb is composed of was there, but it wasn’t recognizable.

However, it was incredibly effective as performed music. It was interesting, entertaining, creative, and it flowed with an effortlessness that resonates and connects with a listener. “Flow” is an important concept both for musical improvisation, and for exploratory testing.

With my musician friend, my goals were to get her to feel “flow” when playing her instrument, and to meld her thought processes with her physical manipulation of the instrument. Most importantly, I wanted her to let go of her inhibition when playing, to not worry about breaking the rules, and to just be free and have fun while achieving a goal. “Flow” is not only important in creative endeavors, it applies to any kind of human/computer interaction. This article is a good overview of the concept from a gaming perspective.

Do you have enough information to apply this kind of thinking to your testing work? If you’re still stuck, here are some things to consider:

Contrary to some perpetuated testing folklore, you do not need “Expected Results” to be written down prior to testing. James Bach says:

The expected result thing is interesting because we have a million expectations, mostly unconscious and emergent. To suggest that there are only one or two expected results is absurd. What I say is not that you need an expected result for a test. What you need is a reasonable oracle. This is a broader idea than expected result. By the way, your oracle is not necessarily needed before the test is executed. It can come at any time, even a week later.

Note: An “oracle” is defined as: “…a principle or mechanism by which we recognize a problem.” Also notice that in my example above, the “expected results” are not written down in the sheet music; they are implied expectations of the performer and the listeners. Not having them written down is no more an impediment to testing than it is to performing music.

You do not need formal requirements, or a complete specification to test. As a software user and tester, you have what James Bach calls “reasonable expectations”. Michael Bolton has a nice article that addresses this issue: “Testing Without a Map“. You can use some of your scripted test cases as a place to start, but one goal for your exploratory testing should be doing something like what Michael describes here.

To start weaning yourself off of scripted tests, try to repeat what my musician friend did above, but with test cases you are familiar with. If you get stuck trying to transition away from a scripted test, here are some things to try.

  • Execute a test script, but stop part way through, and interact with the application wherever you are; then come back to the test script. Does following the script feel different? Take note of that feeling, and try to replicate the feeling you get with exploratory testing more often in your test execution, and when thinking about testing.
  • Stop part way through the script, and engage in another activity. Stare out the window, take a short walk, or read a testing article, and then come back to your test script. Instead of following the rest of the test script steps, focus on the first thing you see in the application, and interact with that component.
  • Memorize the test script, re-run it, and then get creative in your execution of it. Can you change it so much that it still meets the intent or goal of the script, but the original writer may not recognize it?
  • Try to be as observant as possible. Can you see anything different that you might not have noticed before? Stop and take notice of what the application is doing. Does anything feel different or suspicious to you? Is there a blink, or an oddity in the way a screen loads? If something feels funny, that is a good sign to investigate it further. Embrace those feelings when they come; they are often a sign that you are almost subconsciously observing something different.
  • Research other ways of interacting and implementing tests, and pre-plan a completely new strategy for testing that application. Can you use a different testing tool? Can you execute tests through a different interface than the user interface (see the sketch after this list)? Once you have a new plan of attack, follow the intention of the script, but not its steps, using a different testing strategy or technique.
  • Focus on observing slight details when testing, especially ones you didn’t see before, and capitalize on exploring them. Practice developing your observation skills away from the computer, and apply your learnings to testing.
  • For something completely different, try testing James Lyndsay’s black box test machines. There’s nothing quite like testing something that has no written documentation at all to go on.
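
To make the “different interface” idea concrete, here is a rough sketch that drives a feature through a plain HTTP interface instead of the user interface, using only the standard library. The endpoint URL is hypothetical; point it at whatever your application actually exposes.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

/** Exercises the same feature the UI exposes, but through its HTTP interface. */
public class HttpInterfaceProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; substitute one from your application.
        URL url = new URL("http://localhost:8080/app/search?q=lamb");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // The status and body are raw material for exploration: compare
        // them with what the UI shows for the same action.
        System.out.println("status: " + connection.getResponseCode());
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()));
        for (String line; (line = reader.readLine()) != null; ) {
            System.out.println(line);
        }
        reader.close();
    }
}
```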

With time, you should find that you no longer need pre-scripted tests at all to begin testing. All you will need is software to test, and your testing ideas and thinking skills. If you can learn to alternate between scripted and exploratory testing, your testing and your thinking about testing will change. I hope this will help testing become more interesting for you, particularly if you feel locked into scripted testing mode. I also hope that if you practice and think about testing, your confidence in your abilities as a tester will grow.

Getting Started with Exploratory Testing – Part 2

Apply Improvisation to Testing

How do I get started with exploratory testing? Here is an idea to help you get started:

Study and practice improvisation, and apply your learnings to your own testing work.

Wikipedia describes improvisation as:

…the practice of acting and reacting, of making and creating, in the moment and in response to the stimulus of one’s immediate environment. This can result in the invention of new thought patterns, new practices, new structures or symbols and/or new ways to act. This invention cycle occurs most effectively when the practitioner has a thorough intuitive or technical understanding of the necessary skills and concerns within the improvised domain.

I first consciously studied and practiced improvisation (improv) in middle school. In drama class, we were encouraged to study and practice improv to develop our acting skills. I enjoyed being in the moment, and working with whatever props, cues or events I had at the time to work with. I realized quite quickly that it is a lot harder than it looks; reading from a script and being directed in everything you do in a performance is very different. It’s easy to get overconfident and then freeze in a performance if you are unprepared.

There are many skills that actors can develop to do improv well. Some of my friends practiced different techniques with their faces, posture, body language and voices so they could be even more versatile. Some could change the entire mood they wanted to convey with a few imperceptible changes in their demeanor. That requires practice and skill, but good actors are able to employ these kinds of tools almost effortlessly once they are mastered.

I then studied improvisation in music. After high school, I joined a band that needed a saxophonist. They liked to do extended jams, and it wasn’t easy to keep up at first. I was well practiced at reading sheet music and had my parts all polished up, but I quickly fell off the rails when the band leader would lead improvised sessions. I’d miss a key change and do something dumb like playing something minor-dominant when we had changed to a major key, or get my rhythm mixed up because the drummer had changed the timing while I was in the middle of a solo. After I practiced like mad and learned a couple of patterns, I could cope a lot better with these free sessions, but I never had the chops to play in free jazz bands like some of my friends have.

Some people have said: “Oh, jazz? Is that the music where they just make it up as they go along?” This is actually incredibly difficult to do. My friends who have played in jazz bands are technical monsters on their instruments. They have very strong knowledge of music theory, and can play their instruments with great precision and speed. I, on the other hand, have a much narrower grasp as a musician, but I can still do some improvisation. I could spend the rest of my life just trying to master guitar.

Like most things in life, improvisation can be done poorly, or it can appear effortless and mesmerizing when done by a skilled practitioner. It can be done effectively by people with various degrees of skill. I have musician friends who are musical and very strong technically on their instruments, but who freeze up without sheet music. I also have musician friends who know very little music theory but do very well at improvisation. Likewise, I have testing friends who have fabulous improvisation skills, and others who improvise naturally and seemingly unconsciously, but effectively.

Improvisation is about being in the moment, and dealing with what you observe, think, feel and react to. As testers, we can be in the moment as we interactively test software, and we can be as free and unconstrained as other improvisers. To do it very well requires skill, but we can be effective and grow our testing skills if we learn and practice. We just need to be aware of the software we are testing, and learn to keenly observe what it is doing, and be able to react and create new tests as we interact with it. Like acting, music, or other improvisational activities, you will find you end up in interesting and rewarding situations.

*See Dictionary.com for definitions of improvise.

Getting Started with Exploratory Testing – Part 1

Apply Investigation to Testing

How do I get started with exploratory testing? Here is an idea to help you get started:

Research investigative disciplines, and apply your learnings to your own testing work.

Software testing is investigative. There are investigative disciplines all around us that we can learn from. The obvious ones are those with “investigator” in their title: detectives, fire investigators, journalists, and others. However, other disciplines are also investigative. I have a friend who is a pilot, and he constantly investigates the complexities around operating an aircraft while he is flying. I have a friend who is a nurse, who constantly investigates symptoms and reacts as a first-line responder when dealing with patients at a hospital. I have a friend who is a doctor, and her work is investigative, whether she is diagnosing a patient with a cold, looking over test results or performing surgery. I have a friend who is a firefighter, and he uses investigative skills to secure a scene so the firefighters can safely do their work. He also does vehicle extrications at car crashes, and goes out on medical calls.

I ask them about their work and the tools they use, and listen to their stories. All of the people listed above use some similar thinking tools. For example, they all use mnemonics. Why? Each of the careers I mention above deals with mission-critical situations, and the people in them want a repeatable, thorough thinking process. Mnemonics are memory aids, and help investigators be consistent and thorough as they observe and think about the problems they are facing. Mnemonics are also a type of model that helps speed up observation. Exploratory testers use mnemonics as well. There are other models and tools these professionals use that are fascinating to learn more about.

I also talk to others who use investigative tools in their work when I can. I’ve talked to historians, auto mechanics, forensic accountants, and my dental hygienist. Another source of information for exploratory testers is books on investigative fields. Testing consultant Rob Sabourin is a big Sherlock Holmes fan, and Mike Kelly recommends the book “Conned Again, Watson”, which deals with probability, statistics, logic, etc.

Michael Bolton recently recommended the book: “How Doctors Think”, which is another reference for studying an investigative discipline. Ben Simo has a great write up of his impressions of this book on his blog. When I talk to my health care worker friends, I notice a lot of similarities to my work, and I learn something to apply to my testing thinking.

Once you have some new ideas, practice some of them during your daily testing activities. You will probably notice that you start thinking about testing differently, the more you apply investigative techniques to your own work.

*See Dictionary.com for definitions of investigate.

Test-Driven Development and Exploratory Testing

I’ve spent time working on teams practicing Test-Driven Development. Since I am a tester, and the practice has the word “test” in it, I’d only been on Agile teams for a short time before I was enlisted by programmers to help them out with test idea generation during Test-Driven Development.

At first, it felt a bit foreign, particularly when the programmers were using tools and programming languages I wasn’t that familiar with. In fact, it didn’t really feel like testing at all; it seemed like a programming style that was heavily influenced by object-oriented programming techniques. Once in a while though, it really felt like testing, the kind of testing I was familiar with. The programmers would turn to me to get me to drive when pairing, and were acutely interested in test idea generation. This was where using exploratory testing techniques, particularly rapid testing techniques, really shone. Since we were developing new features quickly, the faster I could generate useful and diverse testing ideas, the quicker we could generate tests, strengthen the code, and improve the design. When we went back to developing a new feature, though, I again felt out of place, particularly at first, when I distracted and frustrated the programmers.

I decided that the only way to understand Test-Driven Development was to immerse myself in the practice, so I paired with an expert programmer who had experience teaching TDD. I describe some of my experience in this article: Test-Driven Development from a Conventional Software Testing Perspective, Part 1. I found that I was still struggling with parts of TDD, and I felt like I would contribute very little at some points and a great deal at others. William Wake cleared this up for me. He explained the two phases of Test-Driven Development: the generative phase and the elaborative phase.

Generative tests are the first tests that are written, primarily about design. They’re “shallow sweep” tests that yield the initial design. Elaborative tests are deeper, more detailed tests that play out the design and variations.
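
A hypothetical example may make the distinction clearer. In the JUnit sketch below (the FareCalculator class is invented for illustration), the first test is generative, driving out the initial design; the later tests are elaborative, playing out the design and its variations.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FareCalculatorTest {

    // Invented class under test, kept inline so the sketch compiles.
    static class FareCalculator {
        int fare(int zones) {
            if (zones < 1) throw new IllegalArgumentException("zones < 1");
            return 250 + (zones - 1) * 100; // fare in cents
        }
    }

    // Generative: a shallow-sweep test written first to drive out the design.
    @Test
    public void oneZoneCostsBaseFare() {
        assertEquals(250, new FareCalculator().fare(1));
    }

    // Elaborative: deeper tests that play out the design and its variations.
    @Test
    public void multiZoneFaresGrowLinearly() {
        FareCalculator calculator = new FareCalculator();
        assertEquals(350, calculator.fare(2));
        assertEquals(1150, calculator.fare(10));
    }

    @Test(expected = IllegalArgumentException.class)
    public void zeroZonesIsRejected() {
        new FareCalculator().fare(0);
    }
}
```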

Transitions between these phases can occur at any time, which is why I hadn’t noticed the shift between my test idea generation providing more value and providing less. At some points I would actually distract the programmer with too many test ideas, and at others, they would sponge up everything I could generate and want more. I had to learn to watch for patterns in the code, for patterns in our development, and to work on my timing for test idea generation.

When we were clearly in the elaborative phase, our testing was much more investigative. When we were in the generative phase, it was much more confirmation-based. In the elaborative phase, we were less concerned with following TDD practice and generating code than we were with generating test ideas and seeing if the tools could support those tests or not.

To understand more, I decided to do TDD myself on a test toolsmith project. I describe those experiences in more detail here: Test-Driven Development from a Conventional Software Testing Perspective, Part 2. This experience gave me more insight into the testing side of TDD.

One day while developing in the generative phase, I was suddenly reminded of pre-scripting test cases, a practice I did religiously in the beginning of my career before fully embracing investigative exploratory testing. Instead of recording tests in a document or in a test case management system, I was pre-scripting them in code. Instead of waiting for a build from the development team to run the scripts, I was running them almost immediately after conceiving them and recording them. I was also recording them one at a time and growing a regression test suite at the same time as creating scripts. I realized that an assertion in my tests was analogous to the “Expected Results” section of a test script, an idea popularized by Glenford Myers in his book “The Art of Software Testing”, published in 1979.

When I moved to the elaborative phase and used different kinds of testing, glaring weaknesses in my test-driven code became apparent. In the elaborative phase, I didn’t care about expected results nearly as much as the unexpected results. The testing here was investigative, and if I pulled off my programmer hat long enough, I could go quite deep with exploratory testing techniques. When a weakness was exposed, it meant I had missed tests in the generative phase, and I would add new tests to my unit testing code to cover those missed cases.

The main difference between what I was doing in code and what I would normally do with a program was the interface I was using for testing (code-level interfaces, not the GUI), and the fact that I was the one fixing the code whenever I found a problem. Since the tests were tightly coupled to the program code, and expressed in the same language, test case execution and maintenance weren’t as disconnected.

I was also less concerned with up-front test case design in the elaborative phase. It was much more important to use the programming tools to let me create different kinds of test cases quickly, or on the fly. Once I found something interesting, and decided to keep a test as a regression test, I would convert it into a proper unit test with a setup method, a test method with an assertion, and a teardown method.
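
For illustration, here is roughly what that conversion might look like in JUnit, with an invented undo-stack example standing in for my real code: a sequence discovered while exploring, pinned down with a setup method, a test method whose assertion plays the role of the “expected result”, and a teardown method.

```java
import static org.junit.Assert.assertEquals;
import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class UndoStackTest {

    private Deque<String> undoStack; // hypothetical stand-in for the real fixture

    @Before
    public void setUp() {
        // Recreate the state the exploratory session started from.
        undoStack = new ArrayDeque<String>();
        undoStack.push("open document");
    }

    @Test
    public void undoAfterTwoEditsRestoresTheFirstEdit() {
        // The interesting sequence found while exploring, now recorded.
        undoStack.push("edit one");
        undoStack.push("edit two");
        undoStack.pop(); // undo
        assertEquals("edit one", undoStack.peek());
    }

    @After
    public void tearDown() {
        undoStack.clear();
    }
}
```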

Now that I’ve had some experience myself with TDD, and have worked with skilled practitioners, I have a better idea of what it is, and how it can relate to exploratory testing and scripted testing. There are a lot of elements of pre-scripted test cases, or scripted testing, in the generative phase, including some of the drawbacks of that practice. Over several releases, pre-scripted test cases tend to have an increasing maintenance cost. (Note: Story-Test-Driven Development is another form of pre-scripted, confirmatory testing, which can exacerbate the maintenance problems we had with pre-scripted tests in pre-Agile days. Those tests can be extremely maintenance-intensive, particularly over long projects or multiple releases, robbing resources from new feature development and investigative testing activities.)

The elaborative phase was a natural fit for exploratory testing, and a great place for non-technical testers to jump in and help generate testing ideas. The challenge with this is figuring out what is good enough, or when to stop. Investigative testing is only limited by the imagination of the people involved in the activity. When there are two of you, a tester and a developer, it is easy to quickly generate a lot of great ideas.

It is also common for TDD practitioners to skip the elaborative phase for the most part. In this case, TDD is almost exclusively about design, with little testing going on. The “tests” are mostly a type of requirement, or examples. I’ve also seen some programmers abandon elaborative tests because they failed, while the generative tests passed. Others just didn’t do much elaboration at all, partly because they were so focused on velocity that they wanted to get a story written as quickly as possible.

In other cases, the phases were more blurred, particularly with programmers who felt that testability improved their design, and who were open to a lot of different kinds of testing techniques. In fact, because the feedback loop with pre-scripted generative tests is so tight, a programmer who is experiencing flow may be using exploratory thinking while developing new features.

I talk more about some of my opinions on TDD from a tester’s perspective here: Test-Driven Development from a Conventional Software Testing Perspective, Part 3. This reflective article spawned some interesting ideas, and while recording the skepticism and cautionary points, I had a lot of ideas for new areas of work and research.

Several TDD practitioners have approached me about exploring the elaborative side of TDD with skilled exploratory testing to address some of the problems they were having with their code. One area I consistently hear about is tests that yield a brittle design. Once the code is put into a production-like environment, testing reveals a lot of errors the programmers felt would have been better addressed during development. They felt that more exploratory testing, with more elaboration, would combat this. Using more elaborative tests, more often, might be one way to maintain exploratory thinking in an emerging new design. Instead of the narrow focus of “write a test, run it, when it fails, write code so it passes”, they felt that exploratory testing would expand their exploratory thinking while programming, with a better overall design as the desired result. Rapidly transitioning from generative design thinking to elaborative, exploratory testing is one area to explore.

This intersection of investigative testing and programming holds a lot of promise for practice and research. I encourage interested testers and programmers to work on this together. I’ve certainly learned a tremendous amount by working in this space, and the testing community has benefited from the subsequent improvement in practices and tools. In fact, recently as I was rapidly writing web services tests in Java/JUnit, I recalled how much work it was eight years ago for a testing department to create a custom test harness in Java before we could even write tests.

Five Dimensions of Exploratory Testing

(Edit: was Four Dimensions of Exploratory Testing. Reader feedback has shown that I missed a dimension.)

I’ve been working on analyzing what I do when Exploratory Testing. I’ve found that describing what I actually do when testing can be difficult, but I’m doing my best to describe what I do when solving problems. One area that I’ve been particularly interested in lately is intermittent failures. These often get relegated to bug purgatory by being marked as “non-reproducible” or “unrepeatable” bugs. I’m amazed at how quickly we throw up our hands and “monitor” these bugs, allowing them to languish, sometimes for years. In the meantime, a customer somewhere is howling, or quietly moving on to a competitor’s product because they are tired of dealing with the problem. If only “one or two” customers complain, I know that means there are many others who are not speaking up.

So what do I do when I use Exploratory Testing to track down an intermittent defect? I started out trying to describe what I do in an article for Better Software called Repeating the Unrepeatable Bug, and I’ve been mulling over concepts and ideas since. Fortunately, James Bach has just blogged about How to Investigate Intermittent Problems, an excellent, thorough post describing ideas on how to turn an intermittent bug into one that can be repeated regularly.

When I do Exploratory Testing, it is frequently to track down problems after the fact. When I test software, I look at risk, I draw from my own experience testing certain kinds of technology solutions, and I use the software. As I use the software, I inevitably discover bugs or potential problem areas, and often follow hunches. I constantly go through a conjecture-and-refutation loop, where I almost subconsciously posit an idea about an aspect of the software and design a test to decide whether my conjecture is falsifiable or not. (See the work of Karl Popper for more on conjectures, refutations and falsifiability.) It seems that I do this so much, I rarely think about it. Other times, I very consciously follow the scientific method, and design an experiment with controlled variables and a manipulated variable, and observe the responding variables.

When I spot an intermittent bug, I begin to build a theory about it. My initial theory is usually wrong, but I keep gathering data, altering it, and following the conjecture/refutation model. I draw information from others as I gather information and build the theory. I run the theory by experts in particular areas of the software or system to get more insight.

When I do Exploratory Testing to track down intermittent failures, these are five dimensions that I consider:

  • Product
  • Environment
  • Patterns
  • People
  • Tools & Techniques

Product

This means having the right build, installed and configured properly. This is usually a controlled variable. This must be right, as a failure may occur at a different rate depending on the build. I record what builds I have been using, and the frequency of the failure on a particular build.

Environment

Taking the environment into account is a big deal. Installing the same build on slightly different environments can have an impact on how the software responds. This is another controlled variable, and it can be a challenge to maintain, especially if the test environment is used by a lot of people. Failures can manifest themselves differently depending on the context where they are found. For example, if one test machine has less memory than another, it might exacerbate the underlying problem. Knowing this information can help in tracking a failure down, so I don’t hesitate to change environments if an intermittent problem occurs more frequently in one than another, using the environment as a manipulated variable.

Patterns

When we first learn to track down bugs in a product, we learn to repeat exactly what we were doing prior to the bug occurring. We repeat the steps, repeat the failure, and then weed out extraneous information to produce a concise bug report. With intermittent bugs, those details may not be important. In many cases I’ve seen defect reports for the same bug logged as several separate bugs in a defect database; some of them go back two or three years. We seldom look for patterns; instead, we focus on actions. With intermittent bugs, it is important to weed out the details and apply an underlying pattern to the emerging theory.

For example, if a web app is crashing at a certain point, and we see SQL or database connection information in a failure log, a conjecture might be: “Could it be a database synchronization issue?” Through collaboration with others, and using tools, I could find information on where else in the application the same kind of call to a database is made, and test each scenario that makes the same kind of call to try to refute that conjecture. Note that this conjecture is based on the information we have available at the time, and is drawn from inference. It isn’t blind guesswork. The conjecture can be based on inference to the best explanation of what we are observing, or “abductive inference”.

A pattern will emerge over time as this is repeated, and as more information is drawn in from outside sources. A conjecture might be false, so I adjust, retest, and record the resulting information. Once a pattern is found, the details can be filled in once the bug is repeatable. This is difficult to do, and requires patience and introspection, as well as collaboration with others. This introspection is something I call “after-the-fact pattern analysis”: how do I figure out what was going on in the application when the bug occurred, and how do I find a pattern to explain what happened? The answer emerges over time, and may change direction as more information is gathered from various sources. In some cases, my original hunch was right, but getting a repeatable case involved investigating the other possibilities and ruling them out. Aspects of each of these experiments shed new light on an emerging pattern. In other cases, a pattern was discovered by a process of elimination, where I moved from one wrong theory to the next in a similar fashion.

The different patterns that I apply are the manipulated variables in the experiment, and the resulting behavior is the responding variable. Once I can repeat the responding variable on command, it is time to focus on the details and work with a developer on getting a fix.

Update:
Patterns are probably the most important dimension, and reader feedback shows I didn’t go into enough detail in this section. I’ll work on the patterns dimension and explain it more in another post.

People

When we focus on technical details, we frequently forget about people. I’ve posted before about creating a user profile and creating a model of the user’s environment. James Bach pointed me to the work of John Musa, who has done work in software reliability engineering. The combination of the user’s profile and their environment that I was describing is called an “operational profile”.

I also rely heavily on collaboration when working on intermittent bugs. Many of these problems would have been impossible for me to figure out without the help and opinions of other testers, developers, operations people, technical writers, customers, etc. I recently described to some executives this process of drawing in information from different specialists at the right time. They commented that it reminded them of medical work done on a patient. One person doesn’t do it all, and certain health problems can only be diagnosed with the right combination of information from specialists, applied at just the right time. I like the analogy.

Tools & Techniques

When Exploratory Testing, I am not only testing manually, but also using whatever tools and techniques help me build a model to describe the problem I’m trying to solve. Information from automated tests, log analyzers, the source code, or the system details: anything might be relevant to help me build a model of what might be causing the defect. As James Bach and Cem Kaner say, ET isn’t a technique; it’s a way of thinking about testing. Exploratory Testers use diverse techniques to help gather information and test out theories.

I refer to the use of automated or diagnostic testing tools with a term I got from Cem Kaner: “Computer Assisted Testing.” Automated test results might provide me with information, while other automated tests might help me repeat an intermittent defect more frequently than manual testing alone. I sometimes automate certain actions in an application and run them while I do manual work as well, which I’ve found to be a powerful combination for repeating certain kinds of intermittent problems. I prefer the term Computer Assisted Testing over “automated tests” because it doesn’t imply that the computer takes the place of a human. Automated tests still require a human brain behind them, and a human to analyze their results. They are a tool, not a replacement for human thinking and testing.
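
As a sketch of this kind of computer-assisted testing, the small program below repeats one action in a loop and logs timing outliers while the tester explores manually alongside it, recording Bob Pease’s “amount of funny” for later analysis. It is entirely hypothetical: performAction() stands in for whatever action you can script against your application.

```java
/**
 * Repeats one scripted action in a loop and flags slow runs, freeing the
 * tester to explore the application manually at the same time.
 */
public class BackgroundHammer {

    // Hypothetical stand-in for the automated action (an API call, a save, a query).
    static void performAction() throws InterruptedException {
        Thread.sleep(50 + (long) (Math.random() * 100));
    }

    public static void main(String[] args) throws InterruptedException {
        for (int run = 1; run <= 1000; run++) {
            long start = System.currentTimeMillis();
            performAction();
            long elapsed = System.currentTimeMillis() - start;
            // Record the "amount of funny": log outliers for later analysis.
            if (elapsed > 120) {
                System.out.println("run " + run + " took " + elapsed + " ms");
            }
        }
    }
}
```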

Next time you see a bug get assigned to an “unrepeatable” state, review James’ post. Be patient, and don’t be afraid to stand up to adversity to get to the cause. Together we can wipe out the term “unrepeatable bug”.

Exploratory Testing using Personas

I get asked a lot about Exploratory Testing on agile projects. In my position paper for the Canadian Agile Network Workshop last month, I described Exploratory Testing using personas. I’m reposting it here to share some of my own experience Exploratory Testing on agile projects.

Usability Testing: Exploratory Testing using Personas

Usability tests are almost impossible to automate. Part of the reason might be that usability is a very subjective thing. Another reason is that automated tests do not run within a business context. Program usability can be difficult to test at all, but working regularly with many end users can help. We usually don’t have the luxury of many users testing full-time on projects; usually there is one customer representative. Once we get information from our test users, how do we continue testing when they aren’t around? One possible method is Exploratory Testing with personas.

I’ve noticed a pattern on some agile teams. At first the customer and those testing have some usability concerns. After a while, the usability issues seem to go away. Is this because usability has improved, or has the team become too close to the project to test objectively? On one project, the sponsor rotated the customer representative due to scheduling issues. We were concerned at first, but a benefit emerged. Whenever a new customer was brought into the team, they found usability issues when they started testing. Often, the usability concerns were the same as what had been brought up by the testers and the customer earlier on, but had been contentious issues that the team wasn’t able to resolve.

On another project using XP and Scrum, a usability consultant was brought in. They did some prototyping and brought in a group of end users to try out their ideas. Any areas the users struggled with were addressed in the prototypes. The users were also asked a variety of questions about how they used the software, and their level of computer skills, which we used to create user profiles or personas. As the developers added more functionality in each iteration, testers simulated the absent end users by Exploratory Testing with personas to more effectively test the application for usability. The team wanted to automate these tests, but could not.

Exploratory Testing was much more effective at providing rapid feedback because it relies on skilled, brain-engaged testing within a context. The personas helped provide knowledge of the business context, and the way end-users interacted with the program in their absence. The customer representative working on the team also took part in these tests.

Tension on usability issues seemed to be reduced as well. These issues were no longer mere opinions; the team now had something concrete to back up usability concerns. Instead of trading opinions with developers, testers could say: “when testing with the persona ‘Mary’, we found this issue.” This proved to be effective at reducing usability debates. The team compromised, with most issues being addressed and others not. Three contentious issues were still outstanding when the project had completed the UI changes. We scheduled time to revisit end users, and had some surprising results.

Each end user struggled with the three contentious usability issues the testers had discovered, which validated the approach, but there were three more problem areas we had completely missed. We realized that the users were using the software in a way we hadn’t intended. There had been a flaw in our data gathering: our first sample of users had tested in our office, not their own. This time we had them work with the software at their own desks, within their business context. Lesson learned: gather customer data while the customers are using the software in their own work environment.

On this project, Exploratory Testing with personas proved to be an effective way to compensate for limited full-time end user testing on the project. It also helped to provide rapid feedback in an area that automated tests couldn’t address. It didn’t replace the customer input, but worked well as a complementary testing technique with automation and customer acceptance testing. It helped to retain their voice in the usability of the product throughout development instead of sporadically, and helped to combat group think.

Tim’s Comments on Software Testing and Scientific Research

Tim Van Tongeren commented on one of my recent posts, building on my thoughts on software testing and the philosophy of science. I like the parallel he drew between scripted and exploratory testing on the one hand, and quantitative and qualitative scientific research on the other. When testing, what do we value more on a project? Tim says that this depends on project priorities.

He recently expanded more on this topic, and talks about similarities in qualitative research and exploratory testing.

Tim researches and writes about a discipline that can teach us a lot about software testing: the scientific process.