All posts by jonathankohl

Software Development Process Fusion – Part 1

I first brought this idea up publicly last year at the Agile Vancouver conference with a promise that I would share more of my thoughts. What follows is an attempt to fulfill that promise. This has turned out to be rather long, so it will appear as a blog series.

I grew up in an environment with a lot of music. My grandfather had a rare mastery over a wide variety of musical instruments, and family gatherings were full of singing and impromptu jams. At home, my father had a very eclectic taste in music, and I had a steady diet of gospel, classical, big band, bluegrass, traditional German and Celtic music. One of my babysitters had spent most of her life in India, and introduced me to all kinds of wonderful forms of Indian classical music when I was very small. I was exposed to popular music on the radio, and I took part in various music groups in school bands, choirs and at church. By the time I was in high school, I had a wide exposure to many different kinds of music, and enjoyed any sort of music that moved me, no matter what the style. I could enjoy a common thread in music that was composed and performed in a way that appealed to me, even if the styles were very different. In some cases, my classical friends couldn’t stand some of the popular music I enjoyed, and some of my gospel music friends would refuse to listen to secular music. Enjoying a wide variety of music styles could be controversial, depending on who I was talking to.

In the late 80s, I was in a choir that competed in Toronto. I was billeted with a family who introduced me to a Canadian band called “Manteca.” My new friends introduced me to a style of music called “fusion,” and Manteca were well-known for their mastery of that style. There were elements of improvisational jazz, popular music and world music in their work. Because of Manteca, I decided to learn more about the history of this style. It didn’t take long before I discovered Miles Davis recordings from the ’60s and ’70s that pioneered a combination of musical styles. I then checked out work by Larry Coryell, and jazzy popular bands like Chicago and Blood Sweat & Tears. From Miles Davis, I followed the work of some of his former band members, such as John McLaughlin, Herbie Hancock, Chick Corea and Joe Zawinul. I also checked out groups like the Crusaders, Weather Report and anything with bassist Jaco Pastorius. Some of the music was highly experimental, and sometimes hard to listen to. One of my favorite bands in that style was Mahavishnu Orchestra, founded by wizard guitarist John McLaughlin. McLaughlin also founded a band called Shakti that explored a different style of fusion: a highly improvisational group featuring master musicians from India, with McLaughlin on guitar.

From this musical journey, I discovered progressive music from the ’70s, with bands like Yes, King Crimson, Emerson Lake and Palmer, and Genesis. These were all groups with highly talented musicians who brought other musical styles into popular music forms. As with fusion, this style of music was highly experimental – some of it made the popular charts, while other work remained obscure. What these styles of music share is a demanding level of skill for the performers and an experimental, envelope-pushing attitude, often with collective improvisation (particularly in live concerts.) They were also controversial when they first came out, but many “wild in their time” elements have become enmeshed in mainstream music today. However, in the pioneering days of fusion, it was not uncommon for critics to pan albums and for purists to cry foul.

One musician who has had an enormous influence on popular music is former Genesis front-man Peter Gabriel. By the late 1990s, the style of music that Peter Gabriel made accessible to a huge audience in the 1980s emanated from airwaves and stereos everywhere you went. I heard Jesse Cook (a guitarist who fuses flamenco guitar with many different styles) talk about the impact Peter Gabriel has had on modern music, particularly with his ability to fuse popular music with traditional music from other parts of the world (“world music”.) Jesse Cook mused at the time that anything we heard on the radio could probably be traced back to Gabriel. When we were discussing the different styles that had an impact on everyday popular music, we wondered what it was like for the musical pioneers when their ideas were new, and how little most of us know about the history of music we take for granted.

I still listen to music that combines different styles. One of my recent discoveries is Harry Manx, a blues guitarist who plays slide guitar on a Mohan Vina, an Indian slide guitar. He deftly fuses traditional Indian music with traditional blues. I find that I am moved by too many styles of music to choose just one, and sometimes the weird combinations do something for me that one style on its own can’t do. I still love classical music, and find that a particular period or style of music suits my mood. Music can touch us in ways other things can’t. Music also evolves, and musicians draw on many influences – yesterday’s “pure” style becomes influenced by something else, and we co-opt other ideas and change. I suppose we expect that in the arts. For example, the Canadian artist Emily Carr’s work is called “post-impressionist” because she came after the famous impressionist painters, and developed a unique style that doesn’t quite fit in that category. Like the fusion musicians, Carr’s artistry has many influences and changed a good deal over her life. Carr had a special ability to fuse disparate themes together in a painting. She might combine everyday objects we might see in our homes with a nature scene, or combine two different scenes.

In software development, we don’t have a long and rich history to draw from like our artistic counterparts. That doesn’t stop me from approaching software development from the same angle as I do anything else though. Brian Marick in his “Process and Personality” article says: “…my methodological prescriptions and approach match my personality.” This is also true with me. I like to fuse different ideas together and see if I can create something new from the combination. There may be stark lines drawn between the fields where the ideas come from, but that doesn’t bother me too much. It gets me into trouble sometimes, but the ideas are what are important to me. When it comes to software development, I don’t really care if an idea is “Agile”, “waterfall” or has no label at all. If it’s a good idea to me, it’s a good idea. Sometimes on software development projects, I weave together combinations of these ideas in a way that may seem strange to some. I’ve started calling this style that I and others are exploring: “software development process fusion.”

Exploratory Testing: More than Superficial Bug Hunting

Sometimes people define exploratory testing (ET) quite narrowly, such as only going on a short-term bug hunt in a finished application. I don’t define ET that narrowly; in fact, I do ET during development whether I have a user interface to use or not. I often ask for and get some sort of testing interface that I can use to design and execute tests before a UI appears. I’ll also execute traditional black-box testing through a UI as a product is being developed, and at the end, when development feels it is “code complete”. I’m not alone. Cem Kaner mentioned this on the context-driven testing mailing list, which prompted this blog post.
Cem wrote:

To people like James and Jon Bach and me and Scott Barber and Mike Kelly and Jon Kohl, I think the idea is that if you want useful exploratory testing that goes beyond the superficial bugs and the ones that show up in the routine quicktests (James Whittaker’s attacks are examples of quicktests), then you want the tester to spend time finding out more about the product than its surface and thinking about how to fruitfully set up complex tests. The most effective exploratory testing that I have ever seen was done by David Farmer at WordStar. He spent up to a week thinking about, researching, and creating a single test, which then found a single showstopper bug. On this project, David developed exploratory scenario tests for a database application for several months, finding critical problems that no one else on the project had a clue how to find.

In many cases, when I am working on a software development project, a good deal of analysis and planning go into my exploratory testing efforts. The strategies I outline for exploratory testing reflect this. Not only can they be used as thinking strategies in the moment, at the keyboard, testing software, but they can guide my preparation work prior to exploratory testing sessions. Sometimes, I put in a considerable amount of thought and effort modeling the system, identifying potential risk areas and designing tests that yield useful results.

In one case, a strange production bug occurred in an enterprise data aggregation system every few weeks. It would last for several days, and then disappear. I spent several days researching the problem, and learned that the testing team had only load tested the application through the GUI, while the real levels of load came from the various aggregation points communicating with the system. I had a hunch that there were several factors at work, and it took time to analyze them. It took several more days working with a customer support representative who had worked on the system for years before I had enough information to work with the rest of the team on test design. We needed to simulate not only the load on the system, but the amount of data that might be processed and stored over a period of weeks. I spent time with the lead developer and the lead system administrator to create a home-grown load generation and simulation tool we could run indefinitely to simulate production events and the related network infrastructure.

While the lead developer was programming the custom tool and the system administrator was finding old equipment to set up a testing environment, I created test scenarios against the well-defined, public Web Services API, and used a web browser library that I could run in a loop to help generate more light load.
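For illustration, the “light load” loop was roughly the following shape. The URL, the pause, and the use of HttpURLConnection here are placeholders; the real library and endpoints aren’t shown, so treat this as a sketch of the idea rather than what we actually ran:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// A "light load" loop: repeatedly hit a read-only endpoint of the public
// Web Services API while the heavier simulation runs elsewhere.
// The URL and the pause are placeholders, not the real system.
public class LightLoadLoop {
    public static void main(String[] args) throws Exception {
        URL endpoint = new URL("http://test-server.example.com/api/status"); // hypothetical endpoint
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
            int status = conn.getResponseCode();
            InputStream body = status < 400 ? conn.getInputStream() : conn.getErrorStream();
            if (body != null) {
                try (InputStream in = body) {
                    while (in.read() != -1) {
                        // drain the response so the connection can be reused
                    }
                }
            }
            System.out.println("GET " + endpoint + " -> " + status);
            Thread.sleep(500); // throttle: this is background noise, not a stress test
        }
    }
}
```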

Once we had completed all of these tasks, started the new simulation system, and waited for the data and traffic statistics to reach the levels I wanted, I began testing. After executing our first exploratory test, the system fell over, and it took several days for the programmers to create a fix. During this time, I did more analysis and we tweaked our simulation environment. I repeated this with the help of my team for several weeks, and we found close to a dozen show-stopping bugs. When we were finished, we had an enhanced, reusable simulation environment we could use for all sorts of exploratory testing. We also figured out how to generate the required load in hours rather than days with our home-grown tools.

I also did this kind of thing with an embedded device that was under development. I asked the lead programmer to add a testable interface into the new device he was creating firmware for, so he added a telnet library for me. I used a Java library to connect to the device using telnet, copied all the machine commands out of the API spec, and wrapped them in JUnit tests in a loop. I then created code to allow for testing interactively, against the API. The first time I ran a test with a string of commands in succession in the IDE, the device failed because it was writing to the input, and reading from the output. This caused the programmer to scratch his head, chuckle, and say: “so that’s how to repeat that behavior…”
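A stripped-down version of that kind of harness looks something like the sketch below. It assumes Apache Commons Net for the telnet connection and JUnit for the wrapper; the device address and commands are placeholders rather than the real API:

```java
import org.apache.commons.net.telnet.TelnetClient;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;

import static org.junit.Assert.assertTrue;

// Sketch of wrapping device API commands in a JUnit test over telnet.
// The host, port, commands, and pass criteria are placeholders; in practice
// the commands came straight out of the API spec.
public class DeviceCommandTest {
    private TelnetClient telnet;
    private PrintWriter out;
    private BufferedReader in;

    @Before
    public void connect() throws Exception {
        telnet = new TelnetClient();
        telnet.connect("192.168.0.10", 23);          // hypothetical device address
        out = new PrintWriter(telnet.getOutputStream(), true);
        in = new BufferedReader(new InputStreamReader(telnet.getInputStream()));
    }

    @Test
    public void commandsInSuccession() throws Exception {
        String[] commands = { "STATUS", "RESET", "STATUS" }; // placeholders
        for (String command : commands) {
            out.println(command);                     // write to the device's input...
            String response = in.readLine();          // ...and read from its output
            assertTrue("No response to " + command, response != null && !response.isEmpty());
        }
    }

    @After
    public void disconnect() throws Exception {
        telnet.disconnect();
    }
}
```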

It took several design sessions with the programmer, and a couple days of my time to be able to set up an environment to do exploratory testing against a non-GUI interface using Eclipse, a custom Java class, and JUnit. Once that was completed, the other testers used it interactively within Eclipse as well. We also used a simulator that a test toolsmith had created for us to great effect, and were able to do tests we just couldn’t do manually.

We also spent about a week creating test data that we piped in from real-life scenarios (which also took a lot of effort to create, but was well worth it.) Creating that test data taught us a good deal about the domain the device would work in.

Recently, I had a similar experience – I was working with a programmer who was porting a system to a Java Enterprise Edition stack and adding a messaging service (JMS.) I had been advocating testability (visibility and control – thanks James) in the design meetings I had with the programmer. As a result, he decided to use a topic reader on JMS instead of a queue so that we could see what was going on more easily, and added support for the testers to see what maps and SQL the Object-Relational Mapping tool (JPA) generates automatically at run-time. (By default, all you see is the connection information and annotations in Java code, which doesn’t help much when there is a problem.)
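As a rough sketch of why the topic choice matters for testability: with a topic, a test-side subscriber can listen in on the message traffic alongside the application’s own consumer, whereas with a queue the two would compete for messages. The snippet below uses the standard JMS API with placeholder JNDI names rather than anything from the real project. (The SQL visibility side is usually just a provider logging setting, for example Hibernate’s hibernate.show_sql, so I haven’t shown it.)

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

// A test-side "tap" on the JMS topic: an extra subscriber that watches traffic
// without taking messages away from the application's own consumer.
// JNDI names are placeholders and depend on the application server.
public class TopicTap {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory"); // hypothetical
        Topic topic = (Topic) jndi.lookup("jms/OrdersTopic");                                  // hypothetical

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer tap = session.createConsumer(topic);
        connection.start();

        while (true) {
            Message message = tap.receive();
            if (message instanceof TextMessage) {
                System.out.println("Observed: " + ((TextMessage) message).getText());
            } else {
                System.out.println("Observed: " + message);
            }
        }
    }
}
```

Run alongside the application while tests execute, a tap like this turns the topic into a live trace of what the system is actually sending.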

He also created a special testing interface for us, and provided me with a simple URL that passes arguments to begin exercising it. For my first test, I used JMeter to send messages to it asynchronously, and the system crashed. This API was so far below the UI that it would be difficult to do much more than scratch the surface of the system by testing through the GUI alone. With this testable interface, I could use several testing tools as simulators to help drive my ET sessions and those of the other testers. Without the preparation through design sessions, we’d be trying to test this through the UI, and we wouldn’t have nearly the power or flexibility in our testing.
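JMeter did the real work here, but the idea is easy to sketch in plain Java: fire arguments at the testing URL from a pool of threads so the requests arrive asynchronously. The URL and parameter below are placeholders, not the real interface:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Fires requests at the testing URL from several threads at once,
// roughly what the JMeter test plan did. URL and parameter are placeholders.
public class AsyncInterfaceDriver {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 200; i++) {
            final int messageId = i;
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        URL url = new URL("http://test-server.example.com/testapi?messageId=" + messageId);
                        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                        System.out.println(messageId + " -> " + conn.getResponseCode());
                        conn.disconnect();
                    } catch (Exception e) {
                        System.out.println(messageId + " -> " + e); // failures are interesting data here
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}
```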

Some people complain that exploratory testing only seems to focus on the user interface. That isn’t the case. In some of my roles, early in my career, I was designated the “back end” tester because I had basic programming and design skills. The less technical testers who had more knowledge of the business tested through the UI. I had to get creative to ask for and use testable interfaces for ET. I found a place in the middle that facilitated integration tests, while simulating a production environment, which was much faster than trying to do all the testing through the UI.

I often end up working with programmers to get some combination of tools to simulate the kinds of conditions I’m thinking about for exploratory testing sessions, with the added benefit of being able to hit some sort of component in isolation. These testing APIs allow me to do integration tests in a production-like environment, which complements the unit testing the programmers are doing and the GUI-level testing the black box testers are doing. In most cases, the programmers also adopt the tools and use them to stress their components in isolation, or, as I often do, to quickly generate test data through a non-user interface while still exercising the path the data will follow in production. This is a great way to smoke test minor database changes, database driver upgrades, or other related tool upgrades. Testing something like this through the UI alone can take forever, and many of the problems that are obvious at the API level seem intermittent through the UI.

Exploratory testing is not limited to quick, superficial bug hunts. Learning, analyzing, test idea generation and test execution are parallel activities, but sometimes we need to focus harder on the learning and analyzing. I frequently spend time with programmers helping them design testable interfaces to support exploratory testing at a layer behind the GUI. This takes preparation work, including analysis, design, and testing of the interface itself, all of which feeds into my learning about the system and into the test ideas I generate. I don’t do all of my test idea generation on the fly, in front of the keyboard.

In other cases, I have tested software that was developed for very specialized use. In one case, the software was developed by scientists to be used by scientists. It took months to learn how to do the most basic things the software supported. I found some bugs in that period of learning, but I was able to find much more important bugs after I had a basic grasp of the fundamentals of the domain the software operated in. Jared Quinert has also had this kind of experience: “I’ve had systems where it took 6 months of learning before I could do ‘real’ testing.”

Update: New Article Published: Man and Machine

My first cover article for Better Software magazine is out this month: Man and Machine: Combining the Power of the Human Mind with Automation. This is an article describing how I bridge the gap between automated software testing and manual software testing. I rarely see software testing as purely scripted vs. purely exploratory, or purely manual vs. purely automated. I see these as a matter of degrees. I might think: “To what degree is this test automated?” or “To what degree will automation help us create a simulation to enhance our exploratory testing work?”

This isn’t a new concept, and I was influenced by Cem’s Architectures of Test Automation and James’ Tool-Supported Testing ideas. Aaron West just sent me this link that describes a related concept at Toyota: “autonomation,” a machine that is used to help employees in their work, as opposed to a fully automated robot.

I hope this article helps bridge a divide that doesn’t need to be there. Automation and manual testing are both important and can co-exist for the benefit of our testing efforts.

Update – Feedback:
Software developer Rob Shaw says:

Good stuff. I hadn’t thought of automation in this way. As a programmer with previous unit testing experience, I was definitely more in the ‘unsupervised automation camp’ but your transportation analogy shows the power of IAT (interactive automated testing).

A Software Religion Strikes Again?

Jim Coplien has an interesting blog post: Religion’s Newfound Restraint on Progress. If you are interested in “Agile Testing” and Test-Driven Development, this post is worth a read.
Coplien raises some important ideas:

The answer for industry, I think, is to focus on Quality and to find the techniques that work best and are the most cost-effective for your project.

It’s about thinking, not about checklists.

While we might diverge on ideas for solutions, I agree with Coplien’s message here. We need to have broad objectives and goals, and use the effective tools and processes at hand to reach those goals, rather than get lost in silly either/or debates about adopting tools and processes as if they were an unalloyed good. We have enough folklore in software development without creating new myths and “best practices.” We need to share success stories and failures, instead of reinforcing ideals and discouraging thoughtful criticism born of experience. Being honest and realistic helps push the craft forward. Being overly idealistic pushes us towards religiosity.

We don’t all have to agree, but we need more of the kind of writing that James is doing here. It’s not so much that he is correct or incorrect, but that he has the guts to speak publicly about his misgivings and ideas. I don’t have to agree with what he says, but if he makes me think about what I’m doing, and challenges my beliefs, that’s a good thing. Sometimes, in the face of a challenge and evidence, I change my mind; other times, I have to clarify my own thinking about a belief, which strengthens it. Support your local skeptic, and discourage those who scoff at and bully them.

Update

My blog has been quiet lately, and several of you have asked me what I’ve been up to. Since I last posted I’ve:

  • Created an all new Exploratory Testing course from scratch, and presented it for a corporate client, with more on the way.
  • Created a condensed version of the course for STARWEST that I’ll be presenting as a tutorial.
  • Written an article on Interactive Automated Testing for Better Software magazine (watch for it in the December issue.)
  • Presented a short Exploratory Testing Explained talk for two Agile groups: a user group in Edmonton, and a conference in Vancouver.
  • Taken on a technical editing role with Better Software magazine. (Have great article ideas on testing, software development or management? Let me know – we’re always looking for great content.)
  • Had rewarding client work helping teams develop ET skills and lightweight test automation solutions.

I’ll be presenting at STARWEST in Anaheim later this month, and in Sweden at Oredev in November. If you see me at either conference, stop and say hi.

Exploratory Testing Article

Exploratory Testing: Finding the Music of Software Investigation is an article that explains exploratory testing using music as an analogy.

I first wrote this article over a year ago as an introductory article on exploratory testing. I wrote it with a technical audience in mind, one that may never have heard of exploratory testing. When one of my programmer friends exclaimed: “I get what exploratory testing is now!” I knew it was time to stop tweaking it. It has languished in my article drafts folder for months, so I decided to publish it myself.

I hope you enjoy it.

UPDATE: In the spirit of the article, and the work that I do, there are reports of the PDF not loading when using an older version of Firefox with an older version of Adobe Reader. I tested it with Safari, Opera, Firefox and IE 7 with Adobe Reader 8. I get application errors on file load when using Firefox 1.5 and Adobe Reader 6. If you have problems, use “File/Open” within the PDF reader Firefox plugin, which seems to work. That’s the workaround; I’m working on a fix. 🙂

Getting Started With Exploratory Testing – Part 4

Practice, Practice, Practice

To practice exploratory testing, you will find that you need to be able to generate test ideas, sometimes at a moment’s notice. The sky (or more accurately your mind) is the limit to the kinds of tests you can implement and practice with. The more you think up new tests, and learn to analyze software from many different angles, the more thorough your testing and the resulting discoveries will be. Like anything else, the more you practice, the better you will get.

Test Idea Strategies

Here are some test strategies you can use to generate test ideas to practice exploratory testing. Practice testing, but also practice thinking about testing, applying strategies and doing analysis work.

Concrete Strategies

Sometimes as a tester you need to grab specific test ideas off the shelf and use them to get started quickly. Do you have a source of ideas for specific strategies, test sequences, or test inputs? Some test teams keep quick reference lists of specific test ideas for their applications under test for this purpose. If you are practicing exploratory testing and find you need some new ideas quickly, here are some places to look.
Michael Hunter has a fabulous list of test ideas in his You Are Not Done Yet blog series.

Elisabeth Hendrickson’s Test Heuristics Cheat Sheet describes “data type attacks & web tests”.

Karen Johnson has some concrete test ideas on her blog as well. Check out Testing by the Numbers, Chars, Strings and Injections, and ; SQL injections- -‘ for some ideas.

Concrete test ideas can be incredibly useful. Recently I decided to grab some of my favorite testing numbers off the shelf and input them through a test interface I had just developed (written in Java, using a library that allowed me to write commands in JUnit tests.) I found a bug within seconds, thanks to a testing interface that let me apply my test inputs with more speed and accuracy, and to my off-the-shelf testing numbers.
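The shape of that test was roughly the following. The actual interface isn’t shown here; sendToInterface() is a stand-in so the sketch compiles on its own, and the numbers are the kind of boundary values I keep on my off-the-shelf list:

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// "Favorite testing numbers" pushed through a testing interface in a loop.
// sendToInterface() is a placeholder: in the original it sent commands through
// a Java test harness; here it just echoes so the sketch compiles on its own.
public class FavoriteNumbersTest {

    private static final String[] FAVORITE_NUMBERS = {
        "0", "-1", "1", "255", "256", "65535", "65536",
        "2147483647", "2147483648", "-2147483648",
        "0.000001", "1e308", "NaN"
    };

    @Test
    public void favoriteNumbersAreAccepted() {
        for (String number : FAVORITE_NUMBERS) {
            String response = sendToInterface(number);
            assertTrue("No response for input " + number, response != null && response.length() > 0);
        }
    }

    private String sendToInterface(String input) {
        // Placeholder: swap in a call to your own testing interface here.
        System.out.println("sending: " + input);
        return "ok";
    }
}
```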

Abstract Strategies

Abstract strategies are all about ideas for testing. They require more thinking than a quick off-the-shelf test, but the benefits are enormous. Applying these kinds of strategies can result in very thorough, creative, and disciplined test design and execution. You can use them to help design tests in the moment, or while doing test analysis or test planning. Some examples include test models and heuristics.

James Bach’s Heuristic Risk-Based Test Strategy Model is a great place to start. In fact, James’ methodology page is full of great ideas for testers. I touched on the subject of a benefit of using models in this blog post Speeding Up Observation with Models.

Thanks to the work of James, there are several testing mnemonics you can use, or better yet, use as inspiration to create your own.

I touched on the subject of learning testing heuristics here.

Other models include coverage models, failure mode analysis, McLuhan thinking, and many others.

Language Strategies

This is another abstract strategy, but language is so important that I decided to give it its own category. Examples include looking at the meaning of words, how they are put together, and how they might be interpreted in different ways. I use dictionaries a lot to help my thinking, and they can be a great tool for exposing vague terms. I also use grammar rules when I investigate a product and look for potential problem areas. I engage in almost a type of “software hermeneutics.” The field of linguistics (the study of language) has a wealth of tools we can also use to help generate test ideas.

The requirements-based testing community has a good handle on testing written documentation. When I look at anything written down that has to do with the product, I carefully look at the wording. That includes the software itself, any supporting documentation, comments, tests, etc., and I look for potential ambiguities. For example, Bender RBT Inc. has a paper, “The Ambiguity Review Process,” which contains a section called “List of Words that Point to Potential Ambiguities.”

Once you find an ambiguity, can you interpret the usage of the software in different ways and test according to different definitions? This is often a surprising source of new test ideas. I use language analysis a lot as a tester, and it is a powerful method for me to generate test ideas.

Sensing and Feeling Strategies

Using Emotions

This can include situations where our senses are telling us something that we aren’t immediately aware of. We feel a sense of discomfort, sometimes called “cognitive dissonance”. Things appear to be functioning, but we feel conflicted, sometimes for reasons that aren’t readily apparent. I’ve learned to use the “feeling that something is off” as a clue to start investigating that area much more heavily. It almost always pays off in the discovery of a good bug.

Michael Bolton describes this emotional side of testing very well:

…When I feel one of those emotional reactions, I pause and ask “Why do I feel this way?” Often, it’s because the application is violating some consistency heuristic–it’s not working as it used to; it’s not supporting our company’s image; there’s a claim, made in some document or meeting or conversation, that’s being violated; the product is behaving in a way that’s inconsistent with a comparable product; …

Update: Be sure to check out Michael’s excellent presentation on this topic here. It might change the way you think about testing.

Using Your Senses

Have you ever pushed a machine to its limits when testing and noticed something physically change in your test environment? This is also a trigger for me to push harder in that direction, because a discovery is on the way.
Here are some examples:

  • a sluggish response when interacting with the UI or an API
  • interactions slowing down
  • the machine labouring
  • the fan kicking in hard when running a test
  • the device getting hot

An old rule of thumb when traveling is to use a map, but follow the terrain. Being aware of any strange feelings you have when testing (it feels off, let’s investigate this a bit more), and being aware of changes to the test environment are great ways to explore the system.

George Dinwiddie reminded me of a good rule of thumb:

As Bob Pease (an analog EE) used to say in Electronic Design and EDN magazines, ‘If you notice anything funny, record the amount of funny.’

As a tester, when I see something funny, my first instinct is to get the application to repeat the funny behavior. Then I observe whether follow-on testing creates even more funny behavior. In many cases it does, because bugs often tend to cluster.

Some of my most significant discoveries have come about because I paid attention to a weird feeling, investigated further, noticed something flicker out of the corner of my eye, or noticed that the log file on a server that I was tailing was writing at a strange rate, or that a certain action caused my machine to start laboring, etc. It’s often the subtle things that are the biggest clues to overlooked problems in the system, so it’s a good idea to practice your observation skills.

Oblique Strategies

These are ideas you can use to spur creative thought processes if you find yourself running out of ideas. Michael Bolton introduced me to these last year while we were working on a testing exercise. I was running out of ideas, and Michael sent me to this web site that generates oblique strategies. After a couple of tries, I ended up on a whole new idea set that jump-started my testing.

Wikipedia says this about Oblique Strategies: “…is a set of published cards created by Brian Eno and Peter Schmidt. Now in its fifth edition, it was first published in 1975. Each card contains a phrase or cryptic remark which can be used to break a deadlock or dilemma situation.”

When I was a kid, my family had a game where you would blindly open a dictionary, close your eyes, and put your finger down on a page. You would then open your eyes, look at the word closest to your finger, and then make up a short story involving that word. A variation involved doing this for two words, and then combining them to try to come up with something creative or funny. This is a similar kind of idea generating strategy to oblique strategies, but not as sophisticated. There are other, similar ways of getting oneself out of a rut.

With these sorts of tools you often need to be creative to apply your result to testing, but with a little bit of trial and error and creative thinking, you will be surprised at what you come up with.

Uncategorized, or Inexplicable Strategies

Sometimes, believe it or not, some of my friends dislike my need to systematize, categorize and neologize. They prefer to think that their testing ideas are the work of intuition, or are dropped to them by the testing muse, or the idea fairy. That’s fine, and I can respect that they think about things differently. Furthermore, I frequently miss out on ideas and insights that they have. I’ve added this section to include the mysterious, the unexplained, or any other test strategy I might not be aware of, or be hard-pressed to categorize.

What works for you? Do you use a strategy for testing that I haven’t touched on in this post? This list is in no way exhaustive, and it, like any other model, is imperfect. I’m sure my own model and strategies will change as I learn more, and get more feedback from colleagues, and you, the reader.

I hope this list of test strategies helps you as you practice exploratory testing. Test idea generation can be as creative as we permit ourselves to explore, and one of the wonderful things about software testing is that sometimes the weird idea combinations are the best ones. Furthermore, practice helps build the strength and agility of the most powerful testing tool we have at our disposal, the human mind.

Getting Started With Exploratory Testing – Part 3

Wean Yourself off Scripted Tests

Scripted tests, or test cases written in advance, contain procedural steps to execute and usually end with some sort of “expected results.” They are very common. If you are a tester who is used to working only from these kinds of test cases, transitioning to exploratory testing has special challenges.
One way to get started with exploratory testing is to change the way you implement scripted tests, with two goals in mind:

  1. to get a feeling for exploratory testing
  2. to be able to engage in testing without needing pre-recorded test scripts

Some testers can find it overwhelming, and can almost feel a sense of panic, when they consciously try out exploratory testing. Exploratory testing is actually a natural activity for us to do, but sometimes we are so conditioned to follow a script and test only what we’ve been told to test that it can be a difficult transition to make at first. There are some useful things to remember:

  • If you feel overwhelmed by possibilities, stop, gather your thoughts, then pick one simple top priority item to test.
  • There is no right or wrong way of practicing exploratory testing; your own unique creative and idea generation methods are a source of strength.
  • Minimal planning can go a long way. Set out a time period (an hour might be a good starting point) and an area of focus in an application.
  • For your self-allotted time box of exploratory testing, try not to multitask. Stick to your area of focus, and resist the urge to check email, answer colleagues’ questions, or test other applications. If you find areas to explore outside of your area of focus, quickly take note of them, and return to them after your time-boxed testing session.

One way to do this is to encourage yourself to add more variability when exercising a test script. Elisabeth Hendrickson talks a lot about adding variability to testing in her work. One way to consciously add variability is to change the program inputs that are recommended in the test script you are following. You can change them slightly or dramatically; feel free to experiment with different types, use some of your “old favorite” inputs, write a script to generate test data, or use something like PerlClip.
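PerlClip’s best-known trick is the “counterstring”: a string that marks its own length at every asterisk, so if a field truncates your input, the tail of what survived tells you exactly how many characters got through. Here is a minimal Java sketch of the same idea, if you want to roll your own:

```java
// Minimal counterstring generator in the spirit of PerlClip: each '*' sits at
// the 1-based position named by the digits just before it, so a truncated
// paste still tells you how many characters survived.
public class CounterString {

    public static String of(int length) {
        StringBuilder result = new StringBuilder(length);
        int remaining = length;
        while (remaining > 0) {
            String token = remaining + "*";
            if (token.length() > remaining) {
                // not enough room for the full marker; keep only its tail
                token = token.substring(token.length() - remaining);
            }
            result.insert(0, token);
            remaining -= token.length();
        }
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(of(35));  // e.g. for a field with a 35-character limit
        System.out.println(of(256));
    }
}
```

For example, of(35) produces 2*4*6*8*11*14*17*20*23*26*29*32*35*, which is exactly 35 characters long.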

Another way to consciously add variability into your test script is to vary the steps. Here is an analogy. A friend of mine is an accomplished musician with strong technical skills, but struggles with improvisation. Here is an exercise we did recently:

  1. I recommended that she take a simple music score (like Mary Had a Little Lamb) and try to play it note for note. I wanted her to play a rendition that she felt was as close as possible to the way the author intended the sheet music to be interpreted.
  2. Next, I asked her to add ornamentation to her playing. (This is subtle variation, or a slightly unique style that comes out when playing an instrument, which tends to be unique from musician to musician.)
  3. Once she had done that, I asked her to play the song again, but to ignore some of the bars in the sheet music. I asked her to anticipate ending up on an important, signature part of the song so it was still recognizable, but encouraged her to do whatever came to mind to get there. It could be silence with counting and waiting; it could be one extended note; it could involve running up and down an appropriate scale, trying to play what she thought should be there from memory, or playing whatever was going on in her head.
  4. With that out of the way, I told her to throw caution to the wind and play with complete freedom. I asked her to play for roughly the same amount of time as the song might take, but to play whatever came to mind.

I asked her to record and send me her versions of the song at each step. The results were surprising and satisfying. In step 1, the song sounded as you would expect, but was a bit wooden, or forced, like a computer had played it. In step 2, the performance came alive. In step 3, there were flashes of brilliance and the odd spot where execution went off the rails. Step 4 was a huge surprise. My friend had been listening to progressive heavy metal earlier in the day, and the piece she came up with while improvising didn’t sound like Mary Had a Little Lamb at all. It sounded like part of a Dream Theater song. The notes were there, the scales were there; in fact, most of the framework of what Mary Had a Little Lamb is composed of was there, but it wasn’t recognizable.

However, it was incredibly effective as performed music. It was interesting, entertaining, creative, and it flowed with an effortlessness that resonates and connects with a listener. “Flow” is an important concept both for musical improvisation, and for exploratory testing.

With my musician friend, my goals were to get her to feel “flow” when playing her instrument, and to meld her thought processes with her physical manipulation of the instrument. Most importantly, I wanted her to let go of her inhibition when playing, to not worry about breaking the rules, and to just be free and have fun while achieving a goal. “Flow” is not only important in creative endeavors, it applies to any kind of human/computer interaction. This article is a good overview of the concept from a gaming perspective.

Do you have enough information to apply this kind of thinking to your testing work? If you’re still stuck, here are some things to consider:

Contrary to some perpetuated testing folklore, you do not need “Expected Results” to be written down prior to testing. James Bach says:

The expected result thing is interesting because we have a million expectations, mostly unconscious and emergent. To suggest that there are only one or two expected results is absurd. What I say is not that you need an expected result for a test. What you need is a reasonable oracle. This is a broader idea than expected result. By the way, your oracle is not necessarily needed before the test is executed. It can come at any time, even a week later.

Note: An “oracle” is defined as: “…a principle or mechanism by which we recognize a problem.” Also notice that in my example above, the “expected results” are not written down in the sheet music; they are implied expectations of the performer and the listeners. Not having them written down is no more an impediment to testing than it is to performing music.

You do not need formal requirements, or a complete specification to test. As a software user and tester, you have what James Bach calls “reasonable expectations”. Michael Bolton has a nice article that addresses this issue: “Testing Without a Map”. You can use some of your scripted test cases as a place to start, but one goal for your exploratory testing should be doing something like what Michael describes here.

To start weaning yourself off of scripted tests, try to repeat what my musician friend did above, but with test cases you are familiar with. If you get stuck trying to transition away from a scripted test, here are some things to try.

  • Execute a test script, but stop partway through, and interact with the application at whatever point you are at, then come back to the test script. Does following the script feel different? Take note of that feeling, and try to replicate the feeling you get with exploratory testing more often in your test execution, and when thinking about testing.
  • Stop part way through the script, and engage in another activity. Stare out the window, take a short walk, or read a testing article, and then come back to your test script. Instead of following the rest of the test script steps, focus on the first thing you see in the application, and interact with that component.
  • Memorize the test script, re-run it, and then get creative in your execution of it. Can you change it so much that it still meets the intent or goal of the script, but the original writer may not recognize it?
  • Try to be as observant as possible. Can you see anything different that you might not have noticed before? Stop and take notice of what the application is doing. Does anything feel different or suspicious to you? Is there a blink, or an oddity in the way a screen loads? If something feels funny, that is a good sign to investigate the thing that seems a bit off. Embrace those feelings when they come; they are often a sign that you are almost subconsciously observing something different.
  • Research other ways of interacting and implementing tests, and pre-plan a completely new strategy for testing that application. Can you use a different testing tool? Can you execute those tests through an interface other than the user interface? Once you have a new plan of attack, follow the intention of the script, but not its steps, using a different testing strategy or technique.
  • Focus on observing slight details when testing, especially ones you didn’t see before, and capitalize on exploring them. Practice developing your observation skills away from the computer, and apply your learnings to testing.
  • For something completely different, try testing James Lyndsay’s black box test machines. There’s nothing quite like testing something that has no written documentation at all to go on.

With time, you should find that you no longer need pre-scripted tests at all to begin testing. All you will need is software to test, and your testing ideas and thinking skills. If you can learn to alternate between both scripted and exploratory testing, your testing and your thinking about testing will change. I hope this will help testing become more interesting for you, particularly if you feel locked in scripted testing mode. I also hope that if you practice and think about testing, your confidence in your abilities as a tester will grow.

Getting Started with Exploratory Testing – Part 2

Apply Improvisation to Testing

How do I get started with exploratory testing? Here is an idea to help you get started:

Study and practice improvisation, and apply your learnings to your own testing work.

Wikipedia describes improvisation as:

…the practice of acting and reacting, of making and creating, in the moment and in response to the stimulus of ones immediate environment. This can result in the invention of new thought patterns, new practices, new structures or symbols and/or new ways to act. This invention cycle occurs most effectively when the practitioner has a thorough intuitive or technical understanding of the necessary skills and concerns within the improvised domain.

I first consciously studied and practiced improvisation (improv) in middle school. In drama class, we were encouraged to study and practice improv to develop our acting skills. I enjoyed being in the moment, and working with whatever props, cues or events I had at the time. I realized quite quickly that it is a lot harder than it looks; reading from a script and being directed on everything you do in a performance is very different. It’s easy to get overconfident and then freeze in a performance if you are unprepared.

There are many skills that actors can develop to do improv well. Some of my friends practiced different techniques with their faces, posture, body language and voices so they could be even more versatile. Some could change the entire mood they wanted to convey with a few imperceptible changes in their demeanor. That requires practice and skill, but good actors are able to employ these kinds of tools almost effortlessly once they are mastered.

I then studied improvisation in music. After high school, I joined a band that needed a saxophonist. They liked to do extended jams, and it wasn’t easy to keep up at first. I was well practiced at reading sheet music and had my parts all polished up, but I quickly fell off the rails when the band leader would lead improvised sessions. I’d miss a key change and do something dumb like playing something minor-dominant when we had changed to a major key, or get my rhythm mixed up because the drummer had changed timing while I was in the middle of a solo. After I practiced like mad and learned a couple of patterns, I could cope a lot better with these free sessions, but I never had the chops to play in free jazz bands like some of my friends have.

Some people have said: “Oh, Jazz? Is that the music where they just make it up as they go along?” This is actually incredibly difficult to do. My friends who have played in jazz bands are technical monsters on their instruments. They have very strong knowledge of music theory, and can play their instruments with great precision and speed. I, on the other hand, have a much narrower grasp as a musician, but I can still do some improvisation. I could spend the rest of my life just trying to master guitar.

Like most things in life, improvisation can be done poorly, or it can appear effortless and mesmerizing when done by a skilled practitioner. It can be done effectively by people with various degrees of skill. I have musician friends who are musical and very strong technically on their instruments, but who freeze up without sheet music. I also have musician friends who know very little music theory who can do very well at improvisation. And I have testing friends who have fabulous improvisation skills, and others who do it naturally and seemingly unconsciously, but effectively.

Improvisation is about being in the moment, and dealing with what you observe, think, feel and react to. As testers, we can be in the moment as we interactively test software, and we can be as free and unconstrained as other improvisers. To do it very well requires skill, but we can be effective and grow our testing skills if we learn and practice. We just need to be aware of the software we are testing, and learn to keenly observe what it is doing, and be able to react and create new tests as we interact with it. Like acting, music, or other improvisational activities, you will find you end up in interesting and rewarding situations.

*See Dictionary.com for definitions of improvise.