All posts by jonathankohl

Content Pointer: Automation Politics 101

I wrote an article for Automated Software Testing magazine on test automation politics for the November issue. A PDF copy of the article is available here: Test Automation Politics 101.

Article back story: I had a nice dinner with Dot Graham and Dion Johnson this spring. As we were sharing automation stories, Dion asked if I would be interested in writing an article on politics and test automation. Since most automation writing focuses on tools, results and ideals, there isn’t a lot out there on the social aspects of automation. I thought I’d provide my take on some social challenges I learned about the hard way. I was inspired by Bach’s Test Automation Snake Oil. Bach doesn’t pull punches, and I tried not to either. I’ve often thought that it would have been nice to have been advised on potential controversies when I was starting out in test automation, so we took a handbook approach with the article.

I hope you enjoy it and, if you’re starting out in test automation, that you find the article informative. If you’re currently facing any of the resistance I outlined, I hope you find the content encouraging.

Use of Fuzzers Helps Discover XML Security Threats

I posted about fuzzing a few days ago. I think the tools are neat, and in the hands of good testers they can be powerful. They are a nice way to augment existing security testing, to test data transfers or messaging, or simply to generate test data. However, some of my readers basically said:

Big deal. It’s a neat toy for you, or some of your clients who have the time for that sort of thing, but why would I want to use one?

The Register has posted an interesting article where fuzzers were used to discover potential security holes in XML libraries. Check it out here: XML flaws threaten ‘enormous’ array of apps. If my babblings about fuzzing don’t get your attention, maybe this article will. The potential to use fuzzers (in conjunction with other security tools and techniques) to help people catch problems that bad guys will exploit is enormous.

Codenomicon, the company cited in the article, has a lot of interesting information and expertise in this area. Their website has a nice introduction to fuzzing. This paper by Rauli Kaksonen goes into a lot of technical detail on fuzzing if you’d like to learn more.
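To make the idea concrete, here is a minimal sketch of what parser fuzzing looks like. This is purely my own illustration, not Codenomicon’s approach or code: take a known-good XML message, mutate a few bytes at random, and watch how a standard JDK parser copes. Real fuzzers are far smarter about mutation and instrumentation, but even something this crude can surface crashes, hangs, and other surprises worth investigating.

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.Random;

    /** Crude illustration only: mutate a valid XML message and see how the parser reacts. */
    public class NaiveXmlFuzz {

        private static final String SEED =
            "<order id=\"42\"><item sku=\"ABC-1\" qty=\"3\"/><note>rush</note></order>";

        public static void main(String[] args) throws Exception {
            Random random = new Random(1234L); // fixed seed so failures are reproducible
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

            for (int i = 0; i < 10000; i++) {
                byte[] mutated = mutate(SEED.getBytes(StandardCharsets.UTF_8), random);
                DocumentBuilder builder = factory.newDocumentBuilder();
                try {
                    builder.parse(new ByteArrayInputStream(mutated));
                } catch (org.xml.sax.SAXException expected) {
                    // Rejecting malformed input cleanly is fine; we are hunting for
                    // anything else: runtime exceptions, hangs, or memory blowups.
                } catch (RuntimeException suspicious) {
                    System.out.println("Iteration " + i + " triggered: " + suspicious);
                    System.out.println(new String(mutated, StandardCharsets.UTF_8));
                }
            }
        }

        /** Overwrite a handful of random bytes in the message. */
        private static byte[] mutate(byte[] original, Random random) {
            byte[] copy = original.clone();
            int edits = 1 + random.nextInt(5);
            for (int i = 0; i < edits; i++) {
                copy[random.nextInt(copy.length)] = (byte) random.nextInt(256);
            }
            return copy;
        }
    }

A real tool or library buys you much better mutation strategies and reporting than this, but the sketch shows why the technique scales so well: the computer generates and runs the inputs, and the tester’s attention shifts to choosing seeds, oracles, and what to investigate.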

Session-Based Testing Using Instant Messaging

Here’s a scenario: a programmer friend was creating a new web-based product for his company. He wanted someone to augment the testing that was already underway, and hired me to add more firepower. It’s a distributed environment – everyone telecommutes. We decided to start with me doing an initial session over instant messaging. We blocked out a time for the session, determined a focus (a “mission”), and I started testing.

As I tested, I typed what I was doing into the instant messaging (IM) client, along with questions, concerns, crashes, and bug ideas. He followed along in the application and asked clarifying questions as I discovered problems.

After ten minutes, I had generated enough ideas for that mission. He logged 6 bugs in his fault-tracking system: I had discovered 5, and he had found 1 himself while following along and monitoring the server and database.

I’ve done session-based testing over IM quite a lot on distributed projects – open source projects such as Watir and Session Tester, and clients that rely on telecommuting or are completely virtual. We’ve used and adapted some of the ideas from session-based test management (SBTM) to provide more structure around the actual test sessions.

Session-based testing over IM seems to work best in pairs. Once a focus and rough timebox are determined and test setup is out of the way, both partners start a session through an IM client. One drives by testing; the other saves the conversation locally, takes notes, logs bugs, investigates underlying behavior, and so on.

This style feels natural (we communicate a lot with IM clients anyway) and doesn’t interrupt the flow of testing, as long as one person acts as the primary test idea generator/executor and the other acts as the primary scribe. If you want to add some variation to your testing approach, give it a try.

New Exploratory Testing Tutorials

I’ve developed some new exploratory testing (ET) tutorials. The newest is “Managing Exploratory Testing”, which addresses the questions I hear most often from managers. Since I’ve already done a lot of this sort of training, it made sense to start offering it publicly. I’ll be teaching it at EuroSTAR and STAR West this fall.

The other tutorial is an evolved version of my take on “Exploratory Testing Explained”. Based on the questions and interests of people who have taken the course, it has grown into a hands-on, experiential workshop: “Exploratory Testing Interactive”. I’m also teaching it at STAR West.

If you’d like to work with me to get a glimpse into my take on exploratory testing, and learn some new skills, drop in at one of the conferences. If you don’t or can’t attend these conferences, you can also consider bringing me in to work with you and your team one-on-one.

Fuzzing Through the Side Door

I’ve been looking into testing with fuzzers lately, and finally got the chance to do so on a live project. While there are a good number of black-box fuzzing tools out there, if you want to go beyond them you are often on your own. At the other end of the spectrum, MSDN has a nice article on white box fuzzing.

What I needed to do was somewhere in the middle. I needed to test a data transport layer in a large, complicated system, and fuzz the data that was sent in messages from internal systems as well as from third parties. This required some knowledge of the code internals, the protocols and messaging APIs used, the architecture, and so on. Some people call this gray box testing; I like to call it “testing through the side door”, a term I got from Jennitta Andrea. (The front door is the UI, the back door is the database, and the side doors are the other testable interfaces in between, like messaging APIs.)

I couldn’t find any tools (open source or proprietary) that did what I needed, so I started searching for a Java fuzzing library instead. (I was doing the work in a Java shop, so Java-based tools were my first choice.) When that search also came up empty, I contacted the people on the JBroFuzz project and told them what I wanted to do. I like OWASP, and I liked what I saw in JBroFuzz. Thankfully, it is pretty easy to use JBroFuzz as a library, in addition to the already-powerful HTTP(S) fuzzing the tool provides through its UI.

Yiannis Pavlosoglou bent over backwards to help me. Yiannis sent me examples to try out, explained how to create my own fuzzers, and even helped me fix a silly iterator error I had in my own code. Thanks Yiannis!

In relatively short order, I integrated JBroFuzz into the existing test framework for messaging, and the first afternoon I had it running, it helped us uncover problems.
If you’re looking for a fuzzing library for Java, here are directions for using JBroFuzz as a fuzzing library.
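Since the JBroFuzz library calls are covered in the directions linked above, I won’t reproduce that API from memory here. Instead, here is a hedged sketch of the integration pattern itself: a fuzzing library streams payloads, and the existing messaging test framework injects them through the side door. Every interface and type name below is a hypothetical stand-in for whatever your own harness and fuzzing library provide.

    import java.util.Iterator;
    import java.util.List;

    /**
     * Sketch of the side-door pattern: drive fuzzed values through an internal
     * messaging interface rather than the UI. All interfaces here are hypothetical
     * stand-ins, not JBroFuzz's actual API.
     */
    public class SideDoorFuzzRunner {

        /** Stand-in for a fuzzing library (JBroFuzz or otherwise) that streams payloads. */
        interface PayloadSource extends Iterator<String> { }

        /** Stand-in for the project's existing messaging test harness. */
        interface MessageGateway {
            String send(String fieldName, String payload) throws Exception;
        }

        private final MessageGateway gateway;

        public SideDoorFuzzRunner(MessageGateway gateway) {
            this.gateway = gateway;
        }

        /** Inject each payload into one message field and record anything unexpected. */
        public void fuzzField(String fieldName, PayloadSource payloads, List<String> findings) {
            while (payloads.hasNext()) {
                String payload = payloads.next();
                try {
                    String response = gateway.send(fieldName, payload);
                    if (response == null || response.contains("Exception")) {
                        findings.add(fieldName + " <= " + payload + " => " + response);
                    }
                } catch (Exception e) {
                    // An unhandled error at the transport layer is exactly what we're fishing for.
                    findings.add(fieldName + " <= " + payload + " => " + e);
                }
            }
        }
    }

The useful part is the separation of concerns: the fuzzer only generates data, the harness only delivers it, and the runner records anything the system under test does that it shouldn’t.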

Thanks to Yiannis Pavlosoglou for all the help, and the JBroFuzz team for a great tool. If you haven’t started looking at fuzzers and fault injection tools on your project, give them a try. Happy fuzzing.

I’m Finding Bugs, But It’s Pissing People Off. What Am I Doing Wrong?

A rookie tester asked me this question. They are experienced in software development, and their grad school work was in a highly specialized area of mathematics. They have been hired to do a very particular type of testing that they are uniquely qualified for. After getting over the mindset shift required to be an effective tester, they were pleased that they were finding bugs that they’d been hired to find. Then a wrinkle appeared. The very people who hired them to find the bugs got angry when they found the bugs they were being paid to find. Sound strange? It might, but it’s actually a common response to effective testing.
Here are some of my clarifying questions:

  • Are the bugs you are finding important, or trivial? Stakeholders can get irritated if you inundate them with nothing but trivial bugs.
  • How are your bug reports? Are they thorough? Do you provide enough information for devs to accurately reproduce the bugs? Programmers get irritated with bug reports they can’t reproduce and track down.
  • How’s your attitude? Do you laugh with glee at the misfortune of the dev team? Are you empathetic, or a condescending jerk? Is your language accusatory, blaming, or condescending in any way? No one wants to be around someone who enjoys schadenfreude.

No, the devs like my bug reports, and I’ve been as low-key and empathetic as possible. These are serious bugs that have probably been in the application since it was released several years ago. I’m tempted to stop logging the kinds of bugs that cause them to get mad at me.

Ok, you’re probably not doing anything wrong. In fact, you are probably doing something right. Don’t stop! You aren’t failing in your work, you are succeeding.

Since they seem to be doing some of the right things and aren’t knowingly or obviously antagonizing stakeholders, I assume that they really are finding important bugs, and that this reaction is a telltale sign that they are being effective as a tester. I also encouraged them not to give up, but to do as Peter Block would recommend: move towards the resistance. Resistance like this is a heuristic that tells us we are doing our jobs and highlighting the really important problems. Resisting being confronted with difficult problems that aren’t trivial to solve is just human nature.

As testers, if we are finding good problems and working with the team to help them, moving away from the resistance is a sure-fire way to relieve the pressure in the short term but lose credibility in the long term. If stakeholders realize we can’t be trusted, we’ve lost our ability to test effectively, observe, and provide information that is useful to them.

I have a personal rule: if I am being pressured not to talk about something, or I feel like I should avoid a contentious issue, that means I absolutely must deal with it and talk about it. I try to be empathetic and understanding, to use the right kind of language, and not to be a jerk, but I bring it up. Every time I do, something important is brought to the attention of the people who need to hear it. They may not like it, and they may not like to see me coming, but they know that I will always tell them the truth.
Early in my career, I was told not to bring up issues that were met with resistance. I was told that was a “career-limiting move.” My career is a testament to the opposite. Whenever I have faced resistance, stuck to my integrity and ethics, and talked about the hard problems the team had been taught to ignore, it has been a career-catapulting move.

So testers: if your work is pissing people off, the problems you are observing and reporting are important, and you aren’t being a jerk, don’t give up. It may hurt to point out those problems in the short term, but it pays off handsomely in the long term.

*Note: This doesn’t just apply to testing work, but to any type of work that requires pointing out and helping solve problems. Whatever it is you do, don’t be discouraged if you are working on real problems and people are behaving strangely. Use that resistance to tell you that you are doing something right. The worst thing to do is give up and become silent.

Descriptive and Prescriptive Testing

While many of us drone on about scripted testing vs. exploratory testing, the reality is that on real projects we tend to execute testing with a blend of both. It often feels lopsided – on many projects, scripted testing is the norm and exploratory testing isn’t acknowledged or supported. On others, the opposite is true. I’ll leave the debate on this topic to others – I don’t care what you do on your projects to create value, though I would encourage you to try some sort of blend, particularly if you are curious about exploratory testing. I’m more interested in the styles themselves and why some people are attracted to one side of the debate or the other.

Recently, David Hussman and I have been collaborating, and he pointed out the difference between “prescriptive” and “descriptive” team activities. A prescriptive style is a preference towards direction (“do this, do that”) while a descriptive style is more reflective (“this is what we did”). Both involve desired outcomes or goals, but one attempts to plan the path to the outcome in more detail in advance, and the other relies on trying to reach the goals with the tools you have at hand, reflecting on what you did and identifying gaps, improving as you go, and moving towards that end goal.

With a descriptive style of test execution, you try to reach a goal using lightweight test guidance. You have a focus, and coarser-grained support for it than what scripted testing provides. (The guidance is there; it just isn’t as explicit.) As you test, and when you report on your testing, you describe things like coverage, what you discovered, bugs, and your impressions and feelings. With a prescriptive style of testing, you are directed by test plans and test cases, and you follow a more directed process of test execution.

Scripted testing is more prescriptive (in general) and exploratory testing is more descriptive (in general). The interesting thing is that both styles work, and there are merits and drawbacks to each. However, I have a strong bias towards a descriptive style. I tend to prefer an exploratory testing approach, and I can implement it with a great deal of structure and traceability, utilizing different testing techniques and styles. I prefer the results the teams I work with get when they use a more descriptive style, but others credibly claim to prefer the opposite. I have to respect that there are different ways of solving the testing problem, and if what you’re doing works for you and your team, that’s great.

I’ve been thinking about personality styles and who might be more attracted to different test execution styles. For example, I helped a friend out with a testing project a few weeks ago. They directed me to a test plan and classic scripted test cases. Since I’ve spent a good deal of time on Agile teams over most of the past decade, I haven’t been around a lot of scripted tests for my own test execution. Usually we use coverage outlines, feature maps, checklists, and other, more lightweight sources of information to guide our testing. It took me back to the early days of my career, and it was kind of fun to try something else for a while.

Within an hour or two of following test cases, I got worried about my mental state and energy levels. I stopped thinking and engaging actively with the application and I felt bored. I just wanted to hurry up and get through the scripted tests I’d signed on to execute and move on. I wanted to use the scripted test cases as lightweight guidance or test ideas to explore the application in far greater detail than what was described in the test cases. I got impatient and I had to work hard to keep my concentration levels up to do adequate testing. I finally wrapped up later that day, found a couple of problems, and emailed my friend my report.

The next day, mission fulfilled, I changed gears and used an exploratory testing approach. I created a coverage outline and used the test cases as a source of information to refer to if I got stuck. I also asked for the user manual and release notes. I did a small risk assessment and planned out different testing techniques that might be useful. I grabbed my favorite automated web testing tool and created some test fixtures with it so I could run through hundreds of tests using random data very quickly. That afternoon, I used my lightweight coverage to help guide my testing, recorded much richer information, found more bugs, and came away with a lot of questions about vague requirements and inconsistencies in the application.
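The post doesn’t say which web testing tool I grabbed, so as one possible illustration, here is a small sketch using Selenium WebDriver (my substitution, purely for the example) of the kind of throwaway fixture I mean: pump randomized data through a form a few hundred times and flag anything that looks like a server error for manual follow-up. The URL and field names are hypothetical.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import java.util.Random;

    /** Illustration only: hammer a (hypothetical) signup form with random data. */
    public class RandomDataFixture {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            Random random = new Random();
            try {
                for (int i = 0; i < 200; i++) {
                    driver.get("http://localhost:8080/signup"); // hypothetical URL
                    driver.findElement(By.name("fullName")).sendKeys(randomText(random, 1, 80));
                    driver.findElement(By.name("email")).sendKeys(randomText(random, 3, 40) + "@example.com");
                    driver.findElement(By.name("submit")).click();

                    // Cheap oracle: flag server errors or leaked stack traces for follow-up.
                    String page = driver.getPageSource();
                    if (page.contains("500") || page.contains("Exception")) {
                        System.out.println("Run " + i + " looks suspicious; investigate manually.");
                    }
                }
            } finally {
                driver.quit();
            }
        }

        /** Build a random string of printable ASCII between min and max characters long. */
        private static String randomText(Random random, int min, int max) {
            int length = min + random.nextInt(max - min + 1);
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++) {
                sb.append((char) ('!' + random.nextInt(94)));
            }
            return sb.toString();
        }
    }

The point isn’t the specific tool; it’s that a few minutes of scripting lets you cover far more input variation than hand-typed test cases, while your attention goes to the interesting failures.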

What was different? The test guidance I used drew on more sources of information and models of coverage, and it wasn’t an impediment to my thinking about testing. It put the focus on my test execution, and I used tools to help me do more, better, faster test execution and get as much information as I could, in a style that helps me fulfill my mission as a tester. I had a regression test coverage outline to repeat what needed to be repeated, and other outlines and maps related to requirements, features, user goals, etc. that helped direct my inquisitive mind and helped me be more consistent and thorough. I used tools to support my ideas and extend my reach rather than to repeat what I had already done. I spent more time executing tests – many different kinds of tests, using different techniques – than managing test cases, and the results reflected that.

My friend was a lot happier with my work product from day 2 (using a descriptive style) than from day 1 (using a prescriptive style). Of course, some of my prescriptive friends could rightly argue that it was my interpretation and approach that were different from theirs. But I’m a humanist on software projects, and I want to know why that happens. Why do I feel trapped and bored doing a lot of scripted testing, while they feel fearful doing more exploratory testing? We tend to strike a balance somewhere in the middle on our projects, and play to the strengths and interests of the individuals anyway.

So what happened with my testing? Part of me thinks that the descriptive style is superior. However, I realize that it is better for me – it suits my personality. I had a lot of fun and used a lot of different skills to find important bugs quickly. I wasn’t doing parlor-trick exploratory testing and finding superficial bugs – I had a systematic, thorough, traceable approach. More importantly for me, I enjoyed it immensely. Even more importantly, my friend – the stakeholder on the project who needed me to discover information they could use – was much happier with what I delivered on day 2 than on day 1.

I know other testers who aren’t comfortable working the way I did. If I attack scripted testing, they feel personally attacked, and I think that’s because the process suits their personality. Rather than debate, I prefer we work using different tools and techniques and approaches and let our results do the talking. Often, I learn something from my scripting counterpart, and they learn something from me. This fusion of ideas helps us all improve.

That realization sent my thinking off in a different direction – not towards one of those “scripted testing == bad, exploratory testing == good” debates, but towards testing styles and personality, and the effect we might have when we encourage one style and ignore or vilify another. Part of that effect might be to drive off a certain personality type that looks at problems differently and has a different skill set.

In testing, there are often complaints about not being able to attract skilled people, or about losing skilled people to other roles such as programming, marketing, or technical writing. Why do we have trouble attracting and keeping skilled people in testing? There are a lot of reasons, but might one be that we drive away a certain kind of personality type and related skill set by discouraging descriptive testing styles like exploratory testing? And on some of our zealous ET or Agile teams, are we also marginalizing worthwhile people who are more suited to a prescriptive style of working?

We also see this in testing tools. Most are geared towards one style of testing, a prescriptive model. I’m trying to help get the ball rolling on the descriptive side with the Session Tester project. There are others in this space as well, and I imagine we will see this grow.

There has to be more out there, testing-style-wise, than exploratory vs. scripted testing and manual vs. automated testing. I personally witness a lot of blends, and I encourage blends of all of the above. I wonder if part of the problem with the image of testing, and with attracting talented people, lies in how we insist testing must be approached. I try to look at using all the types of testing we can on a project to discover important information and create value. Once we find the right balance, we need to monitor it and change it over time to adjust to project dynamics. I don’t understand the inflexibility we often display towards different testing ideas. How will we know if we don’t try?

What’s wrong with embracing different styles and creating a testing mashup on our teams? Why does it have to be one way or the other? Also, what styles of testing other than exploratory approaches are descriptive? What prescriptive styles are there other than scripted testing (test plan and test case driven)? I have some ideas, but email me if you’d like to see your thoughts appear in this blog.

Announcing Session Tester

UPDATE: Unfortunately, this is now a dead project. While we had amazing initial community support, the project waned after an economic downturn. We were unable to staff up a team to carry it forward. If you would like to take up the torch and start the project up again, contact me for the source code.

We’re pleased to announce a new open source, lightweight session-based testing management tool called Session Tester. Aaron West and I have been working on this project off and on for quite a while, and we finally have a public beta release. We expect the beta to be a little rough around the edges, but we hope the tool will help make session-based testing more accessible. We are following the open source model of “release early, release often”, and your feedback will help shape the tool as it emerges from beta. Bug reports and usability issues are very welcome.

It’s early days yet and the tool is quite simple. User feedback will influence what features are added to the finished product(s). That’s the beauty of the open source product development process, where the evolution of the product is transparent and guided by the user community.

Session Tester was written in Java, so it should run on any operating system that has a recent version of the Java Runtime Environment. We have tested with Windows XP, Vista, Mac OS X (Intel) and Linux.

We developed this tool as an aid for testers who are using session-based test management. (Our initial interpretation of the SBTM approach outlined in the article is more lightweight than some might be used to.) It provides an easy way to time your sessions, remind yourself of your testing mission, and record your notes. It also provides an idea primer in case you get stuck, and a testing cheatsheet to review while you are in the thick of testing and would like some ideas.

Even testers who aren’t in regulated environments and don’t take detailed notes find that the tool helps them add structure to their thinking while exploratory testing.

Getting Started

Download Session Tester 0.1 here. This is a beta release, so we don’t have proper installers yet. Be sure to have a recent JRE installed (at least JRE 6, update 6), or else Session Tester might not run properly.

If you are unsure about your JRE, go to the Sun site for a new version. We last used update 11, but anything recent should work: Java SE Downloads. There are also instructions on installation and usage in the user guide. Once you have a recent JRE, just unzip the download into a folder, double-click the executable jar file, and start testing.

This is a beta release, so we expect bug reports and usability issues, as well as feature requests. Feel free to share any of those ideas on our forum. Please contribute to the community there. We will also be looking for contributors to the project itself.

Current Features

  • a timer to keep track of your session length
  • note taking that is automatically saved and formatted into XML, with extra tags for more organization
  • optional reminders
  • mission reminder so you don’t lose focus
  • session end reminder so you can start winding your session down
  • an idea primer for those times when you get stuck trying to generate test ideas (thanks to Michael Bolton for introducing us to Oblique Strategies and for contributing to the current list)
  • an exploratory testing cheatsheet, supplied by Elisabeth Hendrickson (check under the Help menu)

Currently, we are building the tool for individual testers: to take notes, manage their sessions, prime ideas, and review their session files in XML format. Eventually, we will extend this to team management.
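For readers curious about what a session file might contain, here is a purely hypothetical example. The element names below are illustrative only, not Session Tester’s actual schema – see the user guide and the tool itself for the real format.

    <!-- Hypothetical example only; element names are illustrative, not Session Tester's actual schema -->
    <session>
      <mission>Explore the checkout flow for rounding errors in totals</mission>
      <start>2009-02-10T14:00:00</start>
      <duration>90</duration>
      <notes>
        <note time="14:12">Cart total off by one cent with 3+ discounted items</note>
        <note time="14:25" tag="bug">Logged as BUG-123; repro steps attached</note>
        <note time="14:40" tag="question">Should tax apply before or after the discount?</note>
      </notes>
    </session>

The idea is simply that notes captured during a session end up in a structured, machine-readable file, so they can be tagged, searched, and eventually rolled up for team-level reporting.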

Project Goals

Our goal with this tool is to help train testers in how to do session-based testing, and to make the practice easier for everyone. We also thought hard about the idea-generation part of testing, and we try to turn the interruptions for note taking into opportunities for generating ideas or refocusing, instead of just annoyances.

Thanks

Thanks to Patrick Lightbody for generously providing the space and for supporting this project, as well as to James and Jon Bach for creating SBTM, Antony Marcano for the inspiration and useful feedback, and to Jared Quinert and Mike Kelly for their usability feedback. Thanks to Michael Bolton for the inspiration for the idea primer, and for contributing test phrases to it, and thanks to Elisabeth Hendrickson for granting us permission to provide her cheatsheet as a thinking reference for testers who are using the tool.

Questions?

Sign up and post questions, concerns, comments, queries, etc. on our clearspace discussion forum.