Post-Agilism Frequently Asked Questions

Edit: Update – Jason Gorman weighs in with his thoughts in: Post-Agilism Explained.

What is Post-Agilism?

This requires a two-part answer. Post-Agilism is:

  1. a growing movement of former Agilists who have moved beyond Agile methods, using a wide variety of software development tools and methodologies in their work.
  2. an emerging era. Now that the Agile movement has moved to the mainstream, what’s next?

Why Another Term?

I didn’t know of any other way to describe people who went through the Agile movement and, after a while, decided they didn’t identify with being “Agile” anymore. They weren’t reverting to big-up-front design and heavyweight processes; instead, they were building on what they found effective in Agile methods.

“Post-Agilism” is a term I use to help me think about this phenomenon. Some of the behavior is a reaction to dogmatic zealotry, much like philosophical skepticism. For some reason, the term “religion” comes up an awful lot when “Agilism” is discussed, as described by Ravi Mohan and Aaron West.

I’ve also seen behavior towards processes that is like scientific skepticism, or what might be better described as fallibilism, which seeks to question process claims through investigation and scrutiny. That’s the “process skepticism” side. “Does this process ‘X’, as one of many tools we can use, help us reach our goals of satisfying and impressing the customer? If not, why? What can we try that might work better?” I originally blogged the neologism “post-Agilism” in the hopes that it would spur people to try out new ideas, and encourage those who were tired of Agile methods and wanted to build on them and move forward.

Jason Gorman and I both independently thought of the term “post-Agilism” because we were afraid that the industry would stop innovating. We both felt stuck in the “Agile” rut, and knew others who had worked through the Agile movement, and then said: “That was fun. Some things worked well, others not so well. What’s next?” This is sometimes the curse of the early adopter. What do you do after “Agile”, particularly when you feel that the innovation that the Agile movement injected into the software development world has slowed down, or maybe even stopped?

Gorman and I both came to use the term by looking at modernism vs. post-modernism in areas like architecture and art. Agile to us felt like modernism in architecture and art: progressive, but with rules around the values and frameworks of implementation. There was this other thing going on though, where the rules of Agile were broken, and people created hybrids, or mashups of processes. This sounded more like the freewheeling, anything-goes values of post-modernism. Hence the term: “post-Agilism.”

Isn’t the term “post-Agile” an oxymoron, or a fallacy?

It is if you use the dictionary definition of “agile”, but Brian Marick explains the difference in meaning of “Agile” (capital-A) quite well here:

It really gripes me when people argue that their particular approach is “agile” because it matches the dictionary definition of the word, that being “characterized by quickness, lightness, and ease of movement; nimble.” While I like the word “agile” as a token naming what we do, I was there when it was coined. It was not meant to be an essential definition. It was explicitly conceived of as a marketing term: to be evocative, to be less dismissable than “lightweight” (the previous common term).

…That’s why I habitually capitalize the “agile” in Agile testing, etc. It doesn’t mean “nimble” any more than Bill Smith means “a metalworker with a hooked blade and a long handle.”

Note: I consider Brian to be one of the good guys – he is not an Agile marketer, but someone who is highly skilled at what he does, and is dedicated to driving the craft of software development forward. He has had an enormous influence on my career – Brian encouraged my exploratory testing efforts on Agile teams in particular. If you haven’t read his work, you should check it out on www.testing.com and www.exampler.com. From what I understand, Brian is as against the hype and dogma as we are, if not more so.

Jason Gorman explains further:

Adapting to circumstances is an agile strategy that can significantly improve our chances of success. But post-Agilism – [Agile] with a capital ‘A’ – is not about strategies. It’s about hype and it’s about dogma, and how they can – and, let’s face it, actually have – put a choke-hold on genuine innovation within the Agile community over the last few years. This is not what the Agile pioneers envisaged, I suspect. …I remain staunchly post-Agile, and see no fallacy in remaining decidedly agile to boot! Just because you don’t like Big Macs, it doesn’t mean you hate beef burgers…

Is Post-Agilism Anti-Agile?

No. In fact, it can help preserve good practices that were popularized by the Agile movement in the face of a backlash. It’s better to think of it in terms of “after Agile” rather than “against Agile”. As Jared Quinert says:

When I started referring to you and others as ‘post Agile’, I used it to mean that you were the people whose thinking had moved on, that you were thinking about software development after Agile, and were reacting to stagnation. Some were uncomfortable with the acceptance of Agile as an unquestionable best practice, rather than as a solution to specific problems, or a set of principles which may or may not help your unique project.

Some people have expressed fear of returning to waterfall or phased approaches. This is a fear I share, and post-Agilism is a potential way out of that seemingly binary choice between Agile methodologies and big-design-up-front, heavyweight processes. There are more than just those two choices, and companies that have been burned by bad Agile implementations are tempted to throw the baby out with the bathwater. Post-Agilism is one of many ideas to help counter that. Take the good, move forward and improve; don’t go back to the processes that weren’t working before just because you’ve had problems.

This thought of “after-Agile” isn’t so much a threat to Agile practices as an assimilation of them, combined with other ideas. Jason Gorman:

Those of us who consider ourselves “Post-Agilists” have taken what worked and cross-bred it with the best bits of dozens of other approaches and disciplines, creating new variants that have the potential to be even more exciting, daring and shocking.

I’ve heard post-Agilism referred to as a “process mashup” by others.

Post-Agilists aren’t necessarily “anti-Agile”, in fact they tend to incorporate a lot of Agile practices on projects, as well as other practices they find useful. Jason Gorman:

The Agile movement has successfully challenged the existing order and shaken the software industry out of a potential rut, bogged down by outmoded 19th century industrial thinking and “big process” dogma. It has opened the door to a very wide range of possibilities, and is now the catalyst for a Cambrian explosion of new ideas on how to deliver software and systems with bizarre, exotic-sounding names like Pliant Programming and Nonlinear Management.

Is Post-Agilism Superior to Agilism?

No – this isn’t about value judgments or superiority. Post-Agilism is a descriptor of something that we have seen and experienced. I’m personally not saying one thing is better than the other. I’m just saying people have moved on from Agilism for whatever reason. Who knows what the future will bring, and what will be remembered as successful, “good”, or “bad”. I’m ambivalent about it, but it is not a movement I started; people were displaying this kind of behavior long before I identified with it. I don’t think everything post-modernism brought us was necessarily good, but I like the fact that it added more ideas to the mix. Jared Quinert expresses ambivalence about the term: maybe we are in “late agilism”. … “It’s bound to be renamed by someone else one day anyway.” We can be as wrong about any topic as anyone else.

Isn’t This Really Just “Pure Agile”? Aren’t You Just Reacting to Agile Corruption?

No, we are reacting and adapting to experience on our own projects, and to change. Furthermore, post-agile thinkers I’ve spoken to tend to be contextualists who, like the Context Driven Testing community, believe the value of a practice depends on its context.

Ravi Mohan:

…each agile practice (or the whole lot together with an appropriate label) makes sense only in certain contexts (certain types of enterprise software, with certain types of teams) even in the “uncorrupted”, “pure” state. A “pure agile” process is not superior to a “non agile” process de facto. Agile is not the “best way we know how to create software”.  It is one way we know how to create software. It has no intrinsic superiority (except against obvious straw men like “pure waterfall” for projects with rapidly changing requirements). “Post Agile” is just an adjective that describes people who have used agile extensively, adopted what made sense, rejected the parts that didn’t work (and the hype) and choose to think for themselves. It is not a reaction against the perceived corruption of an originally perfect process. (From comments on Vlad Levin’s blog.)

A popular misconception is that if you are using an iterative lifecycle with incremental delivery, focusing on communication and customer involvement, valuing testing, and delivering great software, then you are, by definition, “Agile.” The Agile movement did not create these practices (nor do prominent Agilist founders claim to have invented them), and it does not have sole ownership of them. Many of us were doing these things on projects in the pre-Agile era. In my own experience, I was on teams that used iterative lifecycles with two-week iterations in the late ’90s. We looked at Rapid Application Development, adopted some of those practices, and retained what worked. We did the same thing with RUP, and a big influence at that time was the Open Source movement. If you go back to the ’60s, thirty to forty years before the Agile Manifesto was created, Jerry Weinberg describes something very similar to Extreme Programming on the Mercury project. That doesn’t mean the Agile movement is wrong; it just shows that there are schools of thought other than Agile when it comes to iterative, incremental development.

The Agile movement did not invent these practices – they’ve been around for a long time. Some of us were very excited about what the Agile movement brought to the industry, because we had also been working in that direction. What the Agile movement gave us was a shared language, a set of tools and practices, and advances in these techniques that can be very useful. The Agile movement has given us a lot of innovative ideas, but we can look to both the pre-Agile and Agile eras for great ideas and inspiration.

Another reason that some react negatively to the Agile movement as a whole is the “higher purpose” vibe that seems to emanate from Agilist gatherings. Michael Feathers put words to this when he described it as a “utopian undercurrent”, which is a brilliant and appropriate use of language in this case. On the Agile Forums, Feathers said this in a thread:

… I think that there has been an undercurrent of utopianism in the agile community: “If only we get software development right, our businesses will succeed and the death march will be relegated to the scrapheap of history.” As much as I’d like to believe that emotionally, I recognize that there is enough flux in an economy to make those sorts of states intermittent and somewhat unpredictable. There will be times when a team will be perfectly aligned with the surrounding organization, when the code will fly out of the fingertips and everyone will be happy. But, in general, organizations often act badly under stress. Projects can fail for reasons totally unrelated to a team’s performance.

This is an important insight. Tim Beck responds to a thread on the Agile Forums where Brian Marick says:

Whereas the number of people new to Agile who describe their project as “the best project I’ve ever worked on” seems to be declining, and we believe work should be joyful…

Tim says:

I could be my typical snide and sarcastic self and say something in mock shock and awe, but I’m going to resist this time and actually applaud this important recognition. Agile isn’t all it is cracked up to be. It doesn’t always produce successful projects. It doesn’t always produce happy programmers. It doesn’t always produce delighted customers. The sooner we realize this, the sooner we can move on, or in the case of the Agile Alliance, the sooner they can attempt to patch up the beast.

Tim and I both agree with Brian that work should be joyful, that you should be able to enjoy what you are doing. Tim’s point is that just because your team is Agile, it doesn’t mean that you are guaranteed a joyful work environment. In fact, some Agile projects are as political, soul-crushing and demoralizing as any other project. It can be annoying to deal with that utopian undercurrent, particularly when people refuse to deal with issues because “we’re Agile!” – preferring an ideal over reality. Brian’s point is an important one to look into. Why does it look like this has changed over time? What is the real problem?

Won’t This Cause Harm to Agile Methods?

The value of Agile methods can stand on their own. Post-Agilism is just a term to clarify why some people choose to use Agile methods, and move on. There are still a lot of people who are perfectly happy with Agile methods. We are just reminding people that there are alternatives.

I’ve found the term has helped clear up confusion between mainstream Agile views and this other group of activities that were similar, but didn’t really fit. If others find the term confusing and use it as an excuse to not even try Agile methods, that is their choice. They would probably find a different excuse if this term weren’t around. If they are serious about trying Agile methods, this term should merely tell them that they are already behind the curve, and should hurry up and learn something new in the hopes of improving.

One thing I have noticed is that Agilists are now defending some of their positions, and looking for evidence to back up claims. I think that’s great. A little healthy skepticism and introspection can make good things stronger, and weed out practices that aren’t so good. If the term “post-Agilism” helps people improve their work by making it defensible, that’s a good thing. Some of the criticism and skepticism is helping Agile practitioners improve their own work.

I also have trouble making a value judgment on what happens as time passes: people embrace Agile methods, adapt, and some move away from that school of thought. It’s a descriptor, not a call to arms. The market will reward and punish our ideas over time. New ideas and experimentation aren’t bad things to me – stifling innovation is. One of the gifts the Agile movement gave us was an escape from the rut software development had fallen into. One of the dangers is apathy and complacency if Agile methods are touted as “the best way to develop software.” If we are already the best, why improve?

Where is the Post-Agile Manifesto?

There is no manifesto. This is not an organized group – it is a phenomenon of people around the world who have some sort of Agile experience, and have independently moved on. We are now discovering that there are more people doing this. Most post-Agilists I’ve talked to still value the points in the Agile Manifesto – they see it as a good start, but not the final answer. Furthermore, the Agile Manifesto is a bit like world peace. It is difficult to disagree with, but many disagree on implementations. Post-Agilists seem to want to expand the idea space on what those kinds of implementations might be.

What’s The Formula? Can I Buy the Process?

There is no formula. Software development is a complex business, with many factors. We’ve spent a lot of good effort on methodologies and tools, but there are many more variables to be aware of. Tools and processes are important, but it’s up to you to find out which ones work for you, in your context. I echo the Pragmatic Programmers’ advice to learn a new programming language every year, and Alistair Cockburn’s advice to try out and learn a new process frequently. Grow your toolbox. Focus on what goals the business is trying to achieve, and see how your technology, process and tool decisions help reach those goals – not the other way around.

Tim Beck has one way of putting this:

I’ve said it before and I’ll say it again… I don’t know how you should build your software. No one but you can figure that out.

Isn’t Post-Agilism Going to be Corrupted Too?

I hope post-Agilism is a transitional movement that gets us back to worrying about good software development, instead of worrying about what pure “Agile”, “waterfall”, “RUP”, “CMM” et al. “recommend” we do. Sometimes I’m not sure if post-Agilism is a movement as much as a phase. I started seeing post-Agilist behavior in about 2003, and it has grown organically without founders, a manifesto, or a group of values to shepherd it. I’d rather characterize post-Agilism as a descriptor of something we see, rather than a goal we want to attain. If we see people who are fluent in Agile methods, but also draw from many other sources as they strive to develop the best software they can, we might say: “Look, there, that is post-Agilism in action.”

However, now that the cat is out of the bag, people will do whatever they want with the terminology. I’ve heard of Agilist marketers complaining that “Agile” is no longer a differentiator for them in the market. (Sometimes I wonder if there are any software companies left that don’t call themselves “Agile”.) I imagine if the “Agile” branding devolution continues, someone is going to use “post-Agilism”, or some other term, to fill the void in a similar way, along with other “new” ideas. The abuse will begin, the reactions will begin, and hopefully we just move beyond it, forget about terminology, and focus on building great software. It’s just the way of the world – people need to make money somehow, and time marches on.

What Does Post-Agilism Look Like?

Kathy Sierra was one of the first people to raise this issue. It’s a good one. My first answer was: “Let’s all find out together.” I still believe that, and while I have more experience and ideas now, I’m more interested in what you think. What are you, the reader, doing or observing that looks like post-Agilism? What do you think it looks like and means? What creative combinations and cool amalgamations are you using as you experiment to improve your software development efforts?

James Bach has said this about post-Agilism:

It’s not a declaration of a new world order, it’s a transitional strategy on the part of and for the benefit of people who feel that Agile has lost its way.

Jason Gorman says: Post-Agilism is simply doing what works for you.

There are a lot of ways that people have moved on from Agilism and are doing something new. A common theme is to look at processes as a constantly evolving set of tools that need to be adapted to reach the goals of the software team and the business. This is often a fluid combination that extends beyond popular Agile definitions. For example, Alistair Cockburn was recently interviewed on “What’s Agile and What’s Not“. The answers are interesting. I particularly like his top-ten list for figuring out whether your team is Agile or not. From a software development perspective it is fairly narrow, which makes it easier to decide whether you are Agile or not. Where it gets interesting, particularly when it comes to testing, is point number 6. The “Agile Testing” view of testing is obsessed with automated testing, and it can be difficult to introduce other kinds of testing ideas in that community.

Back in the old days (for me, circa 2001–2003), if you were on an XP team you could expect that kind of thinking about testing, but on other Agile projects you had a lot more freedom to experiment to see what worked and what didn’t. That freedom to experiment as a tester on Agile teams is disappearing. Now what I see is an obsession with automating basic functional tests that have to be “test-first”, which takes me back to the dark ages of testing, when we pre-scripted all tests. That didn’t work so well, because we were only using confirmatory tests, and it forced testing to be predictive rather than adaptive. On Agile teams in the old days, I wasn’t so constrained, and it was nice to have the freedom to be an investigative tester rather than just a rubber stamp.

Nowadays, testing ideas on most Agile teams I encounter are obsessed with automated unit testing, often forcing FIT well beyond its capabilities to meet some functional or acceptance test-first ideal, with the maintenance issues that come along with that. (It’s not uncommon to see the top programmers spending all their time trying to get the FIT tests to work while the other programmers, business analysts and testers wait, sometimes for days on end, for the FIT infrastructure to work so they can write the code to satisfy the story.) One colleague of mine waited over a week for one report to be added to a system: 7.5 days were spent getting FIT to work with their system once again, and the remaining half day was spent writing the story and the actual program code. They remarked that this must be “test-dragging development”; at first it was quite seamless and efficient, but as time went on, the maintenance costs of the automated tests became prohibitive.

There is some hope, however, as exploratory testing seems to be gaining some currency in the Agile community. Most of the time, though, it feels like the Agile Testing ideals of “test-first” and “100% automation” win out, even if you have to force testing into a narrow definition to do it. That’s fine if that’s what you want to do in testing, but for some of us, that isn’t the be-all and end-all of testing. We see those ideals as part of a possible testing strategy. There are examples other than testing where this stagnation, or narrowing of definitions of activities, is taking hold in Agilism.

“Post-Agilism” is a term that gives permission to those who feel they are in the stranglehold of Agilism to move on and follow something that doesn’t seem to have currency in that community. It reminds them that there is more to software development than just the Agile movement.

Here are some other thoughts from around the web:
From the Software Underbelly blog: Post-Agilism:

Well, Hallelujah, I found some people talking about something other than following this ridiculous religion of Agilism. They saved me from one of my biggest rants. The original purpose of this column was to lambaste all the dogma-drenched blather of Agilism. The loudest nails-on-the-chalkboard for me was this attitude that process was EITHER predictive OR adaptive, i.e. my way or the highway.

…with Post-Agilism we take the good ideas but get back to looking at what works in a particular environment. I have always preached that good process is not from a book but an evolution. You design a process at the beginning as best you can and then continually adapt it as you go through releases. The process defined at the beginning is not nearly as important as how the evolution of that process proceeds from that moment forward.

Some people think that Lean is Post-Agile:

The Agile movement has given us some big advances, and also some big distractions. The good news is that we are perfectly free to toss out the distractions and substitute something better.

I’m personally not a big fan of Lean in software development, but he makes some interesting points.

This post “being agile in methodology” underlines what Post-Agilism can mean:

So, when are you using Scrum as methodology? In my project, we use lots of scrummish terms and tools, but for us being agile is also being agile in process. Why use the waterfall methodology when it comes to process? Why should we stick to tools we don’t find comfortable? We don’t. Our guideline is make it work for our team and our project. But when have we wandered so far off the beaten path that we can no longer call the methodology Scrum? Let’s wait ten years and see what it is called. It is good to know that we are not alone in this process, it actually has a name (though scorned by my team members), it is called post-agile…

In another example, Tim Beck decided to found the Pliant Alliance. Tim says:

Post-agilism to me is the more general description for what I did in founding the pliantalliance.org. I moved past Agile and started to think about what was good about it and what wasn’t. I started encouraging others to do the same. It turns out that I wasn’t the only one doing this and Jonathan came along and coined the term ‘post-agilism’ to describe what we all were doing.

What is Pre-Agilism? Isn’t that just waterfall?

Pre-Agilism is the period of time before the Agile movement was founded. This gets a little tricky, though, because the Agile Manifesto signatories were also commenting on something they were witnessing and experiencing. Practices that started prior to the coining of the “Agile” term were included under that umbrella. However, there are a lot of areas across the history of software development we can draw from.

Prior to the Agile movement, there were projects that used iterative lifecycles and were concerned with testing, customer involvement, and things of that nature. When I first read Martin Fowler’s The New Methodology and saw the first section, “From Nothing, to Monumental, to Agile”, I interpreted it as describing the mainstream. There have long been projects and development ideas that rejected heavyweight, waterfall approaches, even when a phased “waterfall” approach was the dominant theory. I was on some of them in the late ’90s. We looked to the Open Source and Free Software communities, as well as thinkers like Barry Boehm, Jerry Weinberg, Tom DeMarco, Tim Lister, Alan Cooper and others (including influential testing thinker James Bach), for ideas and inspiration.

That was an exciting time, and the Agile movement emerged as a dominating force in the iterative vs. waterfall debate. There were a lot of ideas that were experimented with and talked about in the fringes at that time. Some of the most influential for me were the usability and context-driven testing ideas. Going back further, there are classics on computer science that still have relevant lessons for us today.

There were also many who were using textbook waterfall, phased approaches that were heavyweight. That was certainly a dominant mindset in the late ’90s when I entered the game. The Agile movement was a breath of fresh air for people on those kinds of projects, and provided some great tools for dealing with heavyweight process, particularly when it just wasn’t working. Here was an alternative that had a lot of credibility.

Why Aren’t You More Clear About This? Where Are the Examples?

I’ve held back a bit deliberately on my own ideas and examples because I wanted to see more ideas come to the fore. This is occurring slowly. I’ll add more on how I view software development and a post-Agile example or two in time.

What Do You Hope to Achieve?

With Post-Agilism, I just shared an idea I used to help reconcile something I was seeing happen around me, and experiencing myself. Others have found the term resonated with them, and they identified with what I was saying. Many have said they found it encouraging. That continues to be a goal: to encourage those who want to improve software development in general, whether they are Agilists, post-Agilists, or (insert fancy term here)-ists.

Others have reacted negatively to the term, and have started debating the value of it, as well as the value of the methods they believe in. I think that debate and communication of ideas is a wonderful thing. I hope we hear more about software development process and tool failures, not so much to say “I told you so!” or poke fun, but so that we all can learn.

This is what I mean by learning from mistakes. I have friends who are pilots. Any air accident is shared across the industry, and they openly share problems. In fact, they have to. One of my pilot friends told me that they are able to innovate because they focus on problems, and on how to eliminate them and improve; he said this is part of the lifeblood of innovation in the aircraft industry. In software, we often downplay the problems, and vilify people who bring them to the forefront. We can learn a lot from our mistakes, and collectively move forward and innovate by looking at areas where we are weak. If this term helps spark some useful debate, collaboration and knowledge sharing to help us overcome areas we are weak in, I think that would be great.

At the end of the day, Post-Agilism is just an idea, or a model that some of us find helpful right now. Don’t find it helpful? Don’t worry about it. We aren’t trying to change minds, we’re just trying to get people to think about what they are doing. If Agilism works for you, great! If something else does, that’s great too. There are a lot of good ideas to choose from, particularly if you expand your view.

Maybe one day, we’ll just get back to calling all of this “software development” and we’ll pick and choose the right tools for the job to help us reach our goals. We don’t have to pick sides – there are great ideas to be found from all kinds of sources that we can learn from and try.

Test-Driven Development and Exploratory Testing

I’ve spent time working on teams practicing Test-Driven Development. Since I am a tester, and the practice has the word “test” in it, I’d only been on Agile teams for a short time before I was enlisted by programmers to help them out with test idea generation during Test-Driven Development.

At first, it felt a bit foreign, particularly when the programmers were using tools and programming languages I wasn’t that familiar with. In fact, it didn’t really feel like testing at all; it seemed like a programming style heavily influenced by object-oriented programming techniques. Once in a while, though, it really felt like testing, the kind of testing I was familiar with. The programmers would turn to me to get me to drive when pairing, and were acutely interested in test idea generation. This was where exploratory testing techniques, particularly rapid testing techniques, really shone. Since we were developing new features quickly, the faster I could generate useful and diverse testing ideas, the quicker we could generate tests, strengthen the code, and improve the design. When we went back to developing a new feature, though, I again felt out of place, particularly when I distracted and frustrated development at first.

I decided that the only way to understand Test-Driven Development was to immerse myself in the practice, so I paired with an expert programmer who had experience teaching TDD. I describe some of my experience in this article: Test-Driven Development from a Conventional Software Testing Perspective, Part 1. I found that I was still struggling with parts of TDD, and I felt like I would contribute very little at some points, and a great deal at others. William Wake cleared this up for me. He explained the two phases of Test-Driven Development: the generative phase and the elaborative phase.

Generative tests are the first tests that are written, primarily about design. They’re “shallow sweep” tests that yield the initial design. Elaborative tests are deeper, more detailed tests that play out the design and variations.

Transitions between these phases can occur at any time, which is why I hadn’t noticed what caused my test idea generation to provide more or less value at different times. At some points I would actually distract the programmer with too many test ideas; at others, they would sponge up everything I could generate and want more. I had to learn to watch for patterns in the code and in our development, and to work on my timing for test idea generation.

When we were clearly in the elaborative phase, our testing was much more investigative. When we were in the generative phase, it was much more confirmation-based. In the elaborative phase, we were less concerned with following TDD practice and generating code than we were with generating test ideas and seeing if the tools could support those tests or not.
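To make Wake’s distinction concrete, here is a minimal JUnit sketch of the two kinds of tests. This is my own illustration, not code from any of the projects described here; the stack scenario and test names are invented:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.NoSuchElementException;

import org.junit.Test;

public class StackTest {

    // Generative test: a "shallow sweep" written first, mainly to drive out
    // the initial design by confirming the basic behavior we expect.
    @Test
    public void pushThenPopReturnsLastItemPushed() {
        Deque<String> stack = new ArrayDeque<String>();
        stack.push("a");
        stack.push("b");
        assertEquals("b", stack.pop());
    }

    // Elaborative test: written later to play out the design with a
    // variation, probing the empty boundary rather than confirming a plan.
    @Test(expected = NoSuchElementException.class)
    public void poppingAnEmptyStackFails() {
        new ArrayDeque<String>().pop();
    }
}
```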

To understand more, I decided to do TDD myself on a test toolsmith project. I describe those experiences in more detail here: Test-Driven Development from a Conventional Software Testing Perspective, Part 2. This experience gave me more insight into the testing side of TDD.

One day while developing in the generative phase, I was suddenly reminded of pre-scripting test cases, a practice I followed religiously at the beginning of my career, before fully embracing investigative exploratory testing. Instead of recording tests in a document or in a test case management system, I was pre-scripting them in code. Instead of waiting for a build from the development team to run the scripts, I was running them almost immediately after conceiving and recording them. I was also recording them one at a time, growing a regression test suite as I created the scripts. I realized that an assertion in my tests was analogous to the “Expected Results” section of a test script, an idea popularized by Glenford Myers in his book “The Art of Software Testing”, published in 1979.
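The analogy is easy to see in code. In this small sketch (again my illustration, not from those projects), the assertion plays exactly the role of the “Expected Results” column in an old test script:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ScriptedStepTest {

    // Old-style test script step:
    //   Action:          uppercase the string "agile"
    //   Expected Result: "AGILE"
    @Test
    public void uppercasingMatchesExpectedResult() {
        String actual = "agile".toUpperCase(); // the scripted action
        assertEquals("AGILE", actual);         // the "Expected Results" column
    }
}
```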

When I moved to the elaborative phase and used different kinds of testing, glaring weaknesses in my test-driven code became apparent. In the elaborative phase, I didn’t care about expected results nearly as much as the unexpected results. The testing here was investigative, and if I pulled off my programmer hat long enough, I could go quite deep with exploratory testing techniques. When a weakness was exposed, that meant I had missed tests in the generative phase, and I would add new tests to my unit testing code to cover those missed cases.

The main differences between what I was doing in code and what I would normally do with a program were the interface I was using for testing (I was using code-level interfaces, not the GUI), and the fact that I was the one fixing the code whenever I found a problem. Since the tests were tightly coupled to the program code, and expressed in the same language, test case execution and maintenance weren’t as disconnected.

I was also less concerned with up-front test case design in the elaborative phase. It was much more important to use the programming tools to allow me to create different kinds of test cases quickly, or on the fly. Once I found something interesting, and decided to record a test as a regression test, I would convert it into a proper unit test with a setup method, a test method with an assertion, and a teardown method.
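As a hedged sketch of that conversion, here is what such a recorded regression test might look like in JUnit, with a setup method, a test method with an assertion, and a teardown method. The scenario (a file containing only a newline) is hypothetical, standing in for whatever the exploration turned up:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Scanner;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class NewlineOnlyFileRegressionTest {

    private File scratch;

    // Setup: recreate the conditions discovered while exploring.
    @Before
    public void setUp() throws IOException {
        scratch = File.createTempFile("explore", ".txt");
        FileWriter writer = new FileWriter(scratch);
        writer.write("\n"); // the surprising input found on the fly
        writer.close();
    }

    // Test method with an assertion: the interesting behavior, pinned down.
    @Test
    public void fileWithOnlyANewlineYieldsOneEmptyLine() throws IOException {
        Scanner scanner = new Scanner(scratch);
        assertTrue(scanner.hasNextLine());
        assertEquals("", scanner.nextLine());
        scanner.close();
    }

    // Teardown: clean up the fixture.
    @After
    public void tearDown() {
        scratch.delete();
    }
}
```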

Now that I’ve had some experience with TDD myself, and have worked with skilled practitioners, I have a better idea of what it is, and how it can relate to exploratory testing and scripted testing. The generative phase has a lot of elements of pre-scripting test cases, or scripted testing, including some of the drawbacks of that practice. Over several releases, pre-scripted test cases tend to have an increasing maintenance cost. (Note: Story-Test-Driven Development is another form of pre-scripted, confirmatory testing, which can exacerbate the maintenance problems we had with pre-scripted tests in pre-Agile days. These tests can be extremely maintenance-intensive, particularly over long projects or multiple releases, robbing resources from new feature development and investigative testing activities.)

The elaborative phase was a natural fit for exploratory testing, and a great place for non-technical testers to jump in and help generate testing ideas. The challenge with this is figuring out what is good enough, or when to stop. Investigative testing is only limited by the imagination of the people involved in the activity. When there are two of you, a tester and a developer, it is easy to quickly generate a lot of great ideas.

It is also common for TDD practitioners to skip the elaborative phase for the most part. In this case, TDD is almost exclusively about design, with little testing going on. The “tests” are mostly a type of requirement, or examples. I’ve also seen some programmers abandon elaborative tests because they failed, while the generative tests passed. Others just didn’t do much elaboration at all, partly because they were so focused on velocity that they wanted to get a story written as quickly as possible.
In other cases, the phases were more blurred, particularly with programmers who felt that testability improved their design, and were open to a lot of different kinds of testing techniques. In fact, because the feedback loop with pre-scripted generative tests is so tight, a programmer who is experiencing flow may be utilizing exploratory thinking while developing new features.

I talk more about some of my opinions on TDD from a tester’s perspective here: Test-Driven Development from a Conventional Software Testing Perspective, Part 3. This reflective article spawned some interesting ideas, and while recording the skepticism and cautionary points, I had a lot of ideas for new areas of work and research.

Several TDD practitioners have approached me about exploring the elaborative side of TDD with skilled exploratory testing, to address some of the problems they were having with their code. One area I consistently hear about is tests that yield a brittle design. Once the code was put into a production-like environment, testing revealed a lot of errors they felt would have been better addressed during development. The programmers felt that more exploratory testing, with more elaboration, would combat this. Using more elaborative tests, more often, might be one way to maintain exploratory thinking in an emerging new design. Instead of the narrow focus of “write a test, run it, when it fails, write code so it passes”, they felt that exploratory testing would expand their exploratory thinking while programming, with a better overall design as the desired result. Rapidly transitioning from generative design thinking to elaborative, exploratory testing is one area to explore.

This intersection of investigative testing and programming holds a lot of promise for practice and research. I encourage interested testers and programmers to work on this together. I’ve certainly learned a tremendous amount by working in this space, and the testing community has benefited from the subsequent improvement in practices and tools. In fact, recently as I was rapidly writing web services tests in Java/JUnit, I recalled how much work it was eight years ago for a testing department to create a custom test harness in Java before we could even write tests.

Modeling Test Heuristics

Mike Kelly has created an effective software testing mnemonic based around James Bach’s software touring heuristic. It is now burned indelibly on my brain: FCC CUTS VIDS. I think Mike hit a home run with this one – I use it a lot myself. The mnemonic makes more sense when you review Mike’s excellent explanation, and when he spells out the mnemonic into a list:

Feature tour
Complexity tour
Claims tour

Configuration tour
User tour
Testability tour
Scenario tour

Variability tour
Interoperability tour
Data tour
Structure tour

Like all good mnemonics, it is easy to memorize because it evokes familiarity, imagery, and it has a nice cadence. I should qualify that. It is easy for /me/ to memorize, and of course, as the creator, easy for Mike to memorize. I relate to words and lists, and a nice cadence helps me repeat it silently to myself, in English, the language I tend to think in. My friend Steve, one of the smartest people I know, does not have as much self-talk going on in his brain. He doesn’t relate much to lists and words, he relates to images and colors much more than I do. He thinks in pictures more than words. If I were teaching Steve the touring heuristic, and hoped he’d memorize it, I’d take a different tack.

To explore Mike’s test heuristic in an alternate form, I created the following image, in the form of a mind map, as an example:

This image is a model my friends who think more in terms of images and colors might be more comfortable with than a list.
As I look at it, I see some cool side effects. As a mind map, it looks unfinished. This is partly by design; I didn’t expand any of the idea points as you normally would, and there is a blank spot at the bottom right-hand corner, an area that our eyes are naturally drawn to as we scan an image.

Contrast the image with the list above. In the list, I get a sense of completion – each letter of the mnemonic is spelled out. When I get to the final “S”, standing for “Structure Tour”, I get a sense of satisfaction. The spelling out of the mnemonic is complete. When I look at the image, I am struck by how unfinished it looks. I want to draw it on a whiteboard and expand each idea bubble. I want to fill in that blank spot at the bottom right-hand corner. I want to explore the heuristic further, and I want my image to be symmetrical so I can easily remember the details by the shape and colors in the model.

My next urge is to re-create the mind map using only images instead of text. That would be an interesting experiment. Next, I want to convert the list into a tree diagram, with all the relevant testing ideas I can fit under each item. A tree diagram is also an effective way to quickly spot areas that need to be expanded. This appeals to my pragmatic testing approach – more analysis can help me be that much more thorough the next time I use the mnemonic when testing.

Different models help us look at problems differently, and they will reveal or obscure different information and ideas. As my father taught me years ago, involving all your senses when learning is effective. I constantly analyze my own models, and try to improve on my technique. If a model is getting stale, all I need to do is change the medium, and something extraordinary usually clicks in my brain as a result.

What do you see in the image and list above? Do they evoke different reactions in your mind? How would you model Mike’s mnemonic? What tools would you use, and what would you learn about testing and thinking by employing them? How many of you would write a song to accompany it? There’s only one way to find out. Model away.

*Thanks to Sherry Heinze for reawakening my interest in mind maps.

Learning Testing Heuristics

When I was in junior high school, one of my tasks was to learn metric conversions. My science teacher at the time took this task seriously. To help us remember the kilo, hecto, deca, deci, centi, milli prefixes to meter, liter and gram, he had us memorize a mnemonic. The mnemonic looked like this: KHDMDCM, and there was a sort of song that went along with it: “King Henry Danced Merrily…something something something”. Trouble was, I barely had the mnemonic memorized in time for a pop quiz, and I couldn’t remember what each letter stood for. I ended up only being able to write the King Henry song out on my test paper and hand it in. I couldn’t do the required conversions, such as decimeters to meters, or kilometers to centimeters. My teacher gave me a cold look when he handed the exams back, and admonished me to stop fooling around and actually start trying. Trouble was, I was trying. I just couldn’t get that stupid mnemonic to make sense in my brain.

At home that night, I sang the King Henry song over and over, repeating what each letter stood for. It was hopeless. I had to constantly look at my cue cards.

My older sister happened along and asked: “What on EARTH are you doing?”

I explained.

“Don’t do it that way,” she said, and quickly sketched out a diagram.

She wrote out all the measurement prefixes from largest to smallest, with “meter, liter, gram” in the center, and drew an arrow going up to the left on one side, and down to the right on the other. I stared at the diagram for a few seconds, and she had me reproduce it cold. Within minutes, I had the image memorized. She then taught me how to use it for conversion. It was a snap. On my metric conversion exam later that week, I ploughed through the questions with ease. “Forget King Henry,” was my thought, “this works way better.” And, forget him I did.

However, over twenty years later, I can still scratch out that conversion diagram and convert deciliters to decaliters, or kilograms to grams on the back of a napkin with relative ease.
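A rough reconstruction of that diagram (my sketch from memory, not her exact drawing) looks something like this:

```
kilo   hecto   deca   [ meter / liter / gram ]   deci   centi   milli
 <- each step left: divide by 10      each step right: multiply by 10 ->
```

To convert, count the steps between the two prefixes and move the decimal point that many places. For example, 35 deciliters to decaliters is two steps to the left, so divide by 100: 0.35 decaliters.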

“Different people learn differently,” my father once told me. His first career was as a school teacher, and over twenty-five years he taught hundreds of different students. He was surprised by little, and was quite pragmatic. If colors helped you learn and remember, use colors. If mnemonics and little songs worked, try them. If images worked, go with the images. Try to involve most of your senses if you want to remember something, he said. Furthermore, if the system you are trying to use doesn’t work for you, make sure you clearly understand the concept, and then create your own. (These didactic sessions usually came from the depths of a newspaper, my Dad gazing at me over the rim of his glasses with CBC Radio blasting in the background.)

When I started working with James Bach, he talked about using mnemonics for testing. I was familiar with, and had used, most of his published mnemonics, but I had never memorized them. I used them more for analysis prior to testing. They seemed suited to analysis and test planning, and I was able to use them to generate extremely effective test strategies and plans.

Then one day I saw James at work, testing software. He was thinking out loud, using test heuristics, rattling off testing mnemonics, and simultaneously designing and executing tests under each letter of a mnemonic in real time. He would pause periodically to point out his findings or jot down notes. He would generate and execute many test ideas in different categories in a short period of time. This resulted in surprisingly thorough testing, with test ideas no one else had thought to try. James’s testing provided a veritable treasure trove of bugs within minutes. Whoa.

James was using SFDPOT (San Francisco Depot), and he would say the first letter of the mnemonic out loud like this: “S – SYSTEM!” Then he would mutter and brainstorm and figure out ways to test the system part of the application. He might alter the files in an installed app, or remove dependencies. He would use system tools to monitor memory use and CPU usage, for example, while testing. If he saw that memory use jumped up when doing a particular action, he would repeat it rapidly to see if memory use kept increasing, and whether it fell back down to where it was before. He would uncover installation issues, improper dependencies, memory leaks, and all sorts of things in a very short period of time.
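That “repeat the action and watch memory” tactic is simple to sketch in code. Here is a minimal Java illustration of the idea; it is mine, not James’s actual tooling, and the suspect action is a hypothetical stand-in:

```java
// A crude probe for the "repeat it and watch the memory" tactic described
// above: measure used memory, hammer the suspect action, then measure again.
public class MemoryProbe {

    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        long before = usedMemory(rt);
        for (int i = 0; i < 10000; i++) {
            performSuspectAction();
        }
        System.gc();      // ask the JVM for a collection; growth that survives is suspicious
        Thread.sleep(500);
        long after = usedMemory(rt);
        System.out.printf("Used memory grew by %d KB%n", (after - before) / 1024);
    }

    private static long usedMemory(Runtime rt) {
        return rt.totalMemory() - rt.freeMemory();
    }

    private static void performSuspectAction() {
        // Stand-in for the user action under test, e.g. opening and
        // closing a dialog in the application.
        new StringBuilder("scratch").reverse();
    }
}
```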

Once he was satisfied with those tests, he would sound out “F – FUNCTION!” Then he would generate a few ideas for functional tests, and he would examine areas that seemed out of the ordinary and push on them. If he found an awkward workflow, he would note it down. If he also noticed memory leaks or CPU usage spikes, he would note what he was doing that caused them. He would continue on through Data, Platforms, Operations and Time; then he would stop testing, consult his notes and start logging bugs.

I’d never seen anything like it, and I knew right then I had to master his approach. But memorizing mnemonics was a bit of a blind spot for me. I needed to figure out how to memorize those mnemonics, so I too could quickly slice through a testing problem (any testing problem) with thoroughness and ease. However, the spectre of King Henry loomed above me, reminding me of my early mnemonic memorization failure.

James peered down at me over the rim of his glasses: “You need to memorize testing mnemonics.”

His tone was casual, but firm. To him it was like telling me I needed to learn to cross the street. Obvious.

I complained about how I often found it hard to memorize mnemonics. Then, much like my sister did back when I was in junior high, James told me not to do it that way anymore.

“That’s because you are memorizing someone else’s. You need to create your own. If they are yours, you’ll remember them.”

He then spent time with me helping me practice memorizing a mnemonic, using the mnemonic to test in real time, and then encouraged me to personalize it and make it my own.

It turned out I could learn new tricks. After all, I had used mnemonics as a student that were quite useful and effective. Who can forget SOHCAHTOA, an effective way to remember how to calculate the sine, cosine and tangent of an angle? I remembered memorizing that mnemonic and using it to great effect. “Soak-a-toe-ah” I repeated in a unique cadence, and imagined a big toe in a wash basin. The imagery and the rhythmic cadence of “soak-a-toe-ah” helped me remember the mnemonic, and practice helped me remember the application. I could memorize testing mnemonics just as easily, and banish King Henry from my memorization nightmares. It turns out that motivation and seeking results can go a long way toward memorizing a mnemonic.

I was faithful to James’s advice, and mnemonics have served me well. While I use other people’s mnemonics in my own work, I find I return to my own more often. I’ve added mnemonic memorization and creation to my testing thinking repertoire. I use ones from others that are particularly effective, and I create my own as I need to as well.

When I teach testing techniques, I find, as my father did many years ago, that different thinking models appeal to different people. Mnemonics and other thinking aids don’t need to be lists; they can be imagery, visual patterns, sounds, or anything else that helps you remember to be more thorough. Any thinking model that gets the job done is valid. Furthermore, using different thinking models along with memorized test heuristics helps me analyze them, enhance them, and learn more about my own work. It also means my implementation looks different from someone else’s, based on my own experience, insight and skill. Now, I don’t just use test mnemonics to help plan; I use them during test analysis and planning, and during test execution.

EDIT Update: I SLICED UP FUN is a mnemonic I created. I developed it first for myself, then shared it with my team, and then with everyone else. It has become quite popular on mobile software development projects the world over.

Are you curious about using mnemonics in your work? If so, imagine me peering over my glasses at you, expecting you to make them your own.

Speeding Up Observation with Models

You just generated four possible ways to solve the problem in about thirty seconds!

One of my brilliant programmer friends had just posed one of those useless interviewing puzzles that Microsoft made famous. We were discussing how the only thing those puzzles test is a candidate’s ability to solve that sort of problem; they really tell you nothing about a candidate’s ability to do software development tasks. He was frustrated by a recent interviewer who had merely posed a puzzle, wrinkled up their nose at his attempt to solve it in the interview, and then lectured him on the “right” answer. To illustrate, he had posed the question to me.

Fortunately, it was a type of puzzle to which I could easily apply a model. Also in my favor, I had spent a recent evening teaching a software testing student that very model. I smiled inwardly, and with nonchalance outlined four possible ways of attacking the problem. He stared at me, almost dumbfounded, for a moment, and then I had to let him off the hook. I laughed as he looked at me in wonder. “There’s a trick to this,” I told him. I revealed the model, which is the software testing mnemonic HICCUPPS. I suddenly had a brilliant programmer with a much stronger intellect than mine very interested in software testing. “Teach it to me!” he chortled, with relief and delight.

We both agreed that the “right” answer the interviewer posed was a complete joke, and he was justified in concluding that this was not a company he wanted to work for. That company lost out on hiring one of the best problem-solvers I know.

“Teach me more about these testing mnemonics!” he admonished as we parted ways that day.

* * *

How did you generate so much thorough analysis so quickly? I’ve been here for three weeks, and you’ve been here for three days!

I was doing analysis work with a friend of mine who, like my friend above, is also a brilliant programmer. When I was still in short pants, he was creating artificial intelligence programs in Lisp.

“Part of the reason is that I have your three weeks of work to draw on,” I replied. “Of course, but you have touched on areas I hadn’t thought about at all. I would have thought of them eventually, but how did you think of them so quickly?” he said with a questioning grin. “I use models,” I replied, and brought out a testing problem to demonstrate my thinking. He grasped the object I use in that problem for a second, then looked at me, handed it back and said: “Oh, I see how you do it. Raw observation is very slow. Your models help frame your thinking and speed up observation.” He didn’t even try to solve the problem, and in his characteristic way, rapidly understood a lesson that had taken me much longer to learn.

“Good work” he said with a chuckle as he got up to leave for home. My speech on the wonders of James Bach’s San Francisco Depot (SFDPOT) mnemonic would have to wait for another day.

* * *
Models are useful for generating ideas when solving problems. They aren’t foolproof, though. I am careful not to ignore my first instinctive thoughts when solving a problem, but to absorb and test them. Often, that intuition helps, but sometimes I discard it, as careful scrutiny denies the plausibility of the initial thought impulse. Intuition can be both powerful and perilous. There’s nothing wrong with intuition, as long as you can make your conclusions defensible.

It doesn’t hurt to complement intuitive thought with a model, and I find using both helps kick start my thinking processes. Sometimes, the most lucid and effective ideas sprout out of nowhere from a sea of wonder and half-baked thoughts. They frequently pop into my head as I am using a model, and they often turn out to be unrelated to my current train of thought, but incredibly useful. I don’t ignore them merely because I can’t explain them. Instead, I try to adapt my models to increase the chances of them occurring.

Models also add consistency when solving problems. Instead of a formulaic, step-by-step, “one size fits all” approach to solving a problem, they can be used to consistently remind our brains to look into diverse areas for clues and impulses. This is much more powerful than a rote process that may or, as in most cases, may not work. It greatly increases our chances of coming up with a unique solution for a unique problem. I find that after a considerable degree of rumination, I have an “aha” moment, but using a model to complement my natural thinking usually brings me to an “aha – why didn’t I think of that before?!” thought much more quickly.

Models also help build confidence. If I am posed with a difficult problem, and I have no idea where to start, I can nervously waver and pause, or I can jump in with confidence and use a model. It doesn’t matter if it’s the wrong one, the thinking drives out the fear and uncertainty and leads me in a direction towards solving the problem. Fear steals energy, while models give me a place to start in the face of uncertainty. I constantly learn from experience, and develop new models when I learn a new lesson. Variety of models makes the problem-solving space much richer, and less intimidating.

A good friend of mine is a firefighter, and he also uses mnemonics in his work. His son had recently taken an emergency services technician course, and my friend was drilling him on his knowledge one evening. They have both been taught mnemonics to help frame their thinking in emergency situations. In fact, they and others have told me that attempts at scripting this kind of problem solving end up in farcical situations, or worse, loss of life.

That sobering thought aside, it was a pleasant evening together, and I sat amazed as they rattled off one mnemonic after another. They even had mnemonics within mnemonics. While one would set the scene for a new problem, the other would rattle off a mnemonic and explain how he would use it to observe something different, to get a fuller picture of the problem. What terrifies them is observing too narrowly within the small window of time they are usually given. “But now you notice this in your patient. What do you do now?” From within a single letter of one mnemonic, the other would pull out another mnemonic to jog his observation and thinking, with speed, coherence, and confidence. They would also frequently throw the mnemonics aside altogether and use other heuristic-based thinking techniques.

They were both in the heat of the discussion, each trying to trip the other up, and the father was attempting to pass on wisdom and experience to his son in a curious Socratic style. I chuckled to myself. I felt if at that instant I were to keel over with some health-related malady, I would be in very capable hands, and someone would learn a new thinking technique in the process.

What’s New?

When I was in university ten years ago, it seemed that Total Quality Management was all the rage in business schools. Terms like employee empowerment, quality control, quality circles, lean manufacturing and statistical process control seemed to be on the lips of the wise.

Nowadays, I am seeing the same ideas that were put forward by people ten or twenty years ago called fancy things like “Agile Software Management.” Some of these recent thoughts would seem at home in Feigenbaum’s Total Quality Control from the 1950s. Some of them were good ideas in the past, and are good ideas now, even if they are rehashed under a new name. We see this kind of behavior repeated over time.

Many early large organizations in North America were railroad companies. They were among the first to discover that large companies faced unique problems. Over one hundred and fifty years ago, Daniel McCallum defined a management structure that was widely adopted and that we are familiar with today. The next time you look at an org chart, feel free to silently thank him.

Prior to the great railroad empires of the 19th century, mercantilism was the dominant economic belief. Adam Smith and his colleagues attacked this model and wrote of a free-market system driven by consumer wants and competition. To survive as a viable business in this system, you needed to be productive. This obsession with productivity is still with us today.

In the early 20th century, Frederick Taylor and the Gilbreths made this into an art form. They employed Time and Motion studies; they lectured and wrote on how workers could increase efficiency, and thereby improve productivity. Today, many decry the work of Taylor and the Gilbreths for various reasons, but we still feel the pressure of efficiency. In fact, not unlike the poor bricklayers, assembly-line workers, or coal shovelers, knowledge-based workers find themselves in a culture obsessed with speed of development and zero defects. Today, we give it names like “Lean” or “Evo”.

As organizations grew larger, thinkers like Henri Fayol came up with systems to deal with the administrative side of management. It emerged as a system of planning, organizing, directing, co-ordinating and controlling. Max Weber was an early thinker who felt that organizations should be socially responsible. It’s interesting that almost one hundred years ago Fayol and Weber expounded on beliefs that we hear echoed today.

Somewhere along the way, management thinkers realized that people do the work, and that no matter what kind of machines we have, or automation we employ, an organization is only as good as its people. Fritz Roethlisberger described the all-too-human Hawthorne Effect. His contemporaries felt that human factors contributed to productivity. Abraham Maslow introduced his hierarchy of needs theory, and now we have volumes of work in this area. Today we are well aware of human factors in organizations. Just talk to someone in Human Resources about their typical day.

Management theory began to embrace innovation as a key component. Over the past twenty years (at least) this has been a major theme. When was the last time you attended a management seminar that didn’t extol the virtues of Toyota, Federal Express and Wal-Mart? Often, the ability to harness technology is trotted out as a factor in the success of those companies. (As a side note, Toyota’s quality scores are carefully examined, but their product recalls are not usually discussed in those venues. Toyota makes a great product, but they aren’t perfect. I have jokingly threatened colleagues that if I hear one more software process wonk drag out Toyota as an example, I will scream. It is an empty threat though – I haven’t yet screamed in a meeting.)

In software development circles, we are faced with some interesting new ideas. Let’s examine the Agile software development movement. The Agile Manifesto was thrust onto the scene by respected software thinkers over five years ago. I, like many others, was excited by this statement. Here was an acknowledgment of the humanistic endeavor of software development. It not only made sense, it resonated with my beliefs. Something sounded familiar, though. Agile Manufacturing was an idea brought forward in 1995 in a book: “Agile Competitors and Virtual Organizations: Strategies for Enriching the Customer” by Steven L. Goldman, Roger N. Nagel and Kenneth Preiss. According to Wikipedia, these are the book’s four key attributes:

  1. delivering value to the customer;
  2. being ready for change;
  3. valuing human knowledge and skills;
  4. forming virtual partnerships.

Those sound like good ideas. I don’t know whether they inspired the Agile software development movement, but even if they did, it doesn’t detract from the points of the Agile Manifesto. It may just tell us that a lot of people have been struggling for a long time with the complexity of organizations trying to work together towards common goals. Maybe they simply came to similar conclusions.

Another new idea on the software development front is adapted from lean manufacturing. The Poppendiecks are at the forefront of this push in the software world. I like a lot of what Mary and Tom say. It is thoughtful, makes a lot of sense, and they have the experience to back up their claims. However, I get concerned about doing a copy/paste from the manufacturing world and applying it to software development. David Benoit sums this up eloquently:

[I]t still amazes me how many people don’t seem to understand that software development has very little in common with real-world-object development. The creation of software is most like the creation of art. To be good it must be inspired. To be of interest it must be unique. To be worth something it must be different.

As time marches on, I find I am faced with an uncomfortable truth: software development, like any other human organizational effort, is a complex endeavor. There aren’t formulas to guide us that will always work in every situation. And, as much as we have progressed into a modern age, we still struggle with the same things organizations have struggled with for centuries, or millennia. We are still re-inventing the same solutions and are still trying to come up with new ones. Sometimes what is “new” is merely the “old” with a different name. Maybe the author of Ecclesiastes was right when they said: “there is nothing new under the sun.”

Modeling Web Applications

When I am testing web applications, I like to analyze the system to find areas of testability, and to identify potential areas of instability. To help remember areas to investigate, I use this mnemonic:

FP DICTUMM

F – Framework (Struts, Rails, Plone, Spring, .NET, etc.)
P – Persistence (Hibernate, Toplink, iBatis, Active Record, DAOs, etc.)

D – Data (database, flat files, etc.)
I – Interfaces (code level, web services, user, etc.)
C – Communication (HTTP(S), web services, SOAP, XMLHttpRequest, database drivers, etc.)
T – Technology (.NET, J2EE, Ruby, PHP, Perl, etc.)
U – Users (Human end users, and other systems or apps that use this system)
M – Messaging (JMS, Enterprise Message Beans, Web Methods, etc.)
M – Markup (HTML, CSS, DHTML, XML, XHTML)

Each of these areas helps inform my testing, or provides an area of testability. Understanding the whole picture, from the data model up to the HTML displayed in a web browser, is important for my testing.
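
As a rough illustration of how the mnemonic can be put to work, here is a minimal sketch that encodes it as a checklist and generates one exploratory testing prompt per area. The prompt wording is my own invention for this example, not a fixed procedure; adapt the questions to your project.

    # A minimal sketch: the FP DICTUMM mnemonic encoded as a checklist
    # that generates test-idea prompts for a given application.
    FP_DICTUMM = {
        "Framework":     "Which framework features does the app rely on, and where are their quirks?",
        "Persistence":   "How is state saved and loaded? What happens if persistence fails mid-operation?",
        "Data":          "What data stores feed the app? What about boundary, malformed, or stale data?",
        "Interfaces":    "What code-level, service, and user interfaces exist, and how can each be exercised?",
        "Communication": "What protocols carry traffic? What if a connection is slow, dropped, or replayed?",
        "Technology":    "What platform or runtime is in play, and what constraints does it impose?",
        "Users":         "Who (or what other systems) use this app, and how do their usage patterns differ?",
        "Messaging":     "What asynchronous messages flow through the system? What if they arrive out of order?",
        "Markup":        "Is the generated markup valid, and does it render consistently across browsers?",
    }

    def test_charters(app_name):
        """Yield one exploratory testing prompt per mnemonic area."""
        for area, prompt in FP_DICTUMM.items():
            yield f"[{app_name}] {area}: {prompt}"

    if __name__ == "__main__":
        for charter in test_charters("example web app"):
            print(charter)

Even a list this simple helps keep the mnemonic in front of the whole team, rather than in one tester’s head.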

Who Do We Serve? – Intermittent Problems Revisited

I seeded a problem for readers to spot in my recent post “Who Do We Serve?”. George Dinwiddie was the only reader who not only spotted the problem, but emailed me about it. George says:

…there’s [a] red flag that occurs early in your story that I think deserves more attention. “Sometimes the tests fail sporadically for no apparent reason, but over time the failures appear less frequently.” As Bob Pease (an analog EE) used to say in Electronic Design and EDN magazines, “If you notice anything funny, record the amount of funny.” When something fails intermittently or for an unknown reason, it’s time to sit up and take notice. The system should not be doing something the designers don’t understand.

George spotted the problem that I introduced but didn’t expand on, and, as I hoped a reader would, emailed me to point out the omission. Good job, George, and thanks for emailing me when you spotted it. Your thoughts are absolutely in line with my experience.

One area of expertise I have developed is the ability to track down repeatable cases for intermittent bugs. These are difficult problems for any team to deal with. They don’t fit the regular model, often requiring off-the-wall experimentation, lots of thinking time, tons of creativity, and a good deal of patience. You need to look at the potentially enormous amount of data in front of you and figure out how to weed out what isn’t important. There is trial and error involved as well, and a good deal of exploration. Naturally, exploratory testing is a fit when tracking intermittent problems.

On Agile teams, I have found we can be especially prone to ignoring intermittent problems. The first time I discovered an intermittent problem on a team using XP and Scrum, we did exactly as Ron Jeffries has often recommended: we came to a complete stop, and the whole team jumped in to work on the problem. We worked on it for a day straight, but people got tired. We couldn’t track down a repeatable case, and we couldn’t figure out the problem from the code. After the second day of investigation, the team decided to move on. The automated tests were all passing, and running through the manual acceptance tests didn’t reveal the problem. In fact, the only person who regularly saw the problem was me, the pesky tester, who shouldn’t have been on an XP team to begin with.

I objected to letting it go, but the team had several reasons, and they had more experience on Agile projects than I did. Their chief concern was that tracking down this problem was dragging velocity down. They wanted to get working on the new story cards so we could meet our deadline, or finish early and impress the customer. Their other concern was that we weren’t following the daily XP practices while we were investigating. They said it would be better to get back into the process. Furthermore, the automated tests were passing, and the customer had no problem with the functionality. I stood down and let it go, but made sure to watch for the problem with extra vigilance, and took it upon myself to find a repeatable case.

Our ScrumMaster talked to me and said I had a valid concern, but the code base was changing so quickly that we had probably fixed the bug already. This was a big red flag to me, but I submitted to the wishes of the team and helped test newly developed stories and tried to give the programmers feedback on their work as quickly as I could. That involved a lot of exploratory testing, and within days, my exploratory testing revealed that the intermittent bug had not gone away. We were at the end of a sprint, and it was almost time for a demo for all the business stakeholders. The team asked me to test the demo, run the acceptance tests, and support them in the demo. We would look at the intermittent bug as soon as I had a repeatable case for it.

Our demo was going well. The business stakeholders were blown away by our progress. They thought it was some sort of trick – it was hard for them to understand that in so short a time we had developed working software that was running in a production-like environment. Then the demo hit a snag. Guess what happened? The intermittent bug appeared during the demo, on the huge projector screen in front of managers, executives, decision makers and end users. It was still early in the project, so no one got upset, but one executive stood up and said: “I knew you guys didn’t have it all together yet.” He chuckled and walked out. The sponsor for the project told us that the problem better be fixed as soon as possible.

I don’t think management was upset about the error, but our pride was hurt. We also knew we had a problem. They were expecting us to fix it, but we didn’t know how. Suddenly the developers went from saying “That works on my machine” to “Jonathan, how can I help you track down the cause?” In the end, I used a GUI test automation tool as a simulator. Based on behavior I had seen in the past and on patterns I had documented from the demo, I came up with a theory, and, using exploratory testing, I designed and executed an experiment. Running the GUI tool as a simulator alongside manual test scenarios helped me find a cause quite quickly.
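
I won’t name the tool we used, but the idea translates to any GUI driver. Here is a hedged sketch using Selenium WebDriver as a stand-in: drive the suspect scenario over and over, and log every outcome with a timestamp so that a failure pattern has a chance to emerge. The URL and element IDs are hypothetical placeholders.

    # Sketch of the "GUI tool as simulator" idea, with Selenium as a stand-in.
    # Repeats one scenario many times and logs each outcome so that
    # intermittent failures can be correlated with timing or sequence.
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()

    with open("simulation_log.txt", "w") as log:
        for run in range(200):  # repeat until a pattern shows up
            started = time.time()
            try:
                driver.get("http://test-server/app")          # hypothetical URL
                driver.find_element(By.ID, "submit").click()  # hypothetical element
                status = driver.find_element(By.ID, "status").text
                outcome = "ok: " + status
            except Exception as exc:  # record what failed, and when
                outcome = "FAIL: " + repr(exc)
            log.write("run=%d elapsed=%.2fs %s\n" % (run, time.time() - started, outcome))

    driver.quit()

While a loop like this runs, I can explore manually against the same server; the log gives the manual session a second set of eyes.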

My experiment failed in the sense that it showed my theory was wrong, but a happy side effect was that I found the pattern that was causing the bug. It turned out to be an intermittent bug in the web application framework we were using. The programmers wrote tests for that scenario, wrote code to bypass the bug in the framework, and logged a bug with the framework project. At the next demo, we ran manual and automated tests showing what the problem was and how we had fixed it. The client was pleased, and the team learned a valuable lesson: an XP team is not immune to general software problems. Taking George’s advice to heart, we practiced this from then on:

When something fails intermittently or for an unknown reason, it’s time to sit up and take notice.

Other Agile teams I’ve been on have struggled with intermittent errors, particularly teams with high velocity and productivity. Regular xUnit tools and acceptance tests aren’t necessarily that helpful unless they are used as part of a larger test experiment. Running them until they turn green and then checking code in, without understanding and fixing the underlying problem, will not make the problem go away. An unfixed intermittent bug sits in the shadows, ticking away like a time bomb, waiting for a customer to find it.
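
What does “part of a larger test experiment” look like? One hedged sketch: instead of re-running a flaky test until it happens to pass, wrap it in a harness that runs it many times and records the context of every failure. The function run_acceptance_test below is a hypothetical stand-in for whatever automated check is flaking.

    # Sketch: treat a flaky automated test as an experiment. Run it
    # repeatedly and record context for each failure, rather than
    # re-running until green. run_acceptance_test is a placeholder.
    import datetime
    import random
    import traceback

    def run_acceptance_test():
        # Stand-in test body; fails intermittently here purely so the
        # harness has something to record.
        assert random.random() > 0.05, "intermittent failure"

    failures = []
    for attempt in range(100):
        try:
            run_acceptance_test()
        except AssertionError:
            failures.append({
                "attempt": attempt,
                "time": datetime.datetime.now().isoformat(),
                "trace": traceback.format_exc(),
            })

    print("%d failures in 100 runs" % len(failures))
    for f in failures:
        print(f["attempt"], f["time"])  # look for timing or ordering patterns

A record like this turns “it fails sometimes” into data a team can actually investigate.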

Intermittent problems are difficult to deal with, and there isn’t a formula to follow that guarantees success. I’ve written about them before: Repeating the Unrepeatable Bug, and James Bach has the most thorough set of ideas on dealing with intermittent bugs here: How to Investigate Intermittent Problems. I haven’t seen intermittent bugs just disappear on their own, so as George says, take the time to look into the issue. The system should not be doing something the designers don’t understand.