I recently wrote an article for InformIT on Exploratory Testing and how it fits in the agile world. Read Exploratory Testing on Agile Teams here.
Testing Debt
When I’m working on an agile project (or any project using an iterative lifecycle), an interesting phenomenon occurs. I’ve been struggling to come up with a name for it, and conversations with Colin Kershaw have helped me settle on “testing debt”. (Note: Johanna Rothman has touched on this before; she considers it to be part of technical debt.) Here’s how it works:
- in iteration one, we test all the stories as they are developed, and are in synch with development
- in iteration two, we remain in synch testing stories, but when we integrate what has been developed in iteration one with the new code, we now have more to test than just the stories developed in that iteration
- in iteration three, we have the stories to test in that iteration, plus the integration of the features developed in the iterations that came before
As you can see, integration testing piles up. Eventually, we have so much integration testing to do on top of story testing that we have to sacrifice one or the other, because we are running out of time. To end the iteration (often two to four weeks in length), some sort of testing needs to be cut and looked at later. I prefer keeping in synch with development, so I consciously incur “integration testing debt”, and we schedule time at the end of development to test the completed system.
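To make the pile-up concrete, here is a toy model in Python. The numbers are invented purely for illustration: story testing stays flat at a handful of stories per iteration, while the pool of feature interactions that are candidates for integration testing grows roughly quadratically.

```python
# Toy model of testing debt: story testing stays flat per iteration,
# but the pool of feature interactions needing integration testing grows.
from math import comb

STORIES_PER_ITERATION = 5  # invented figure, purely for illustration

def testing_load(iterations: int) -> None:
    features = 0
    for i in range(1, iterations + 1):
        features += STORIES_PER_ITERATION
        story_tests = STORIES_PER_ITERATION      # flat, iteration after iteration
        interactions = comb(features, 2)         # feature pairs that could interact
        print(f"iteration {i}: {story_tests} stories to test, "
              f"{interactions} feature pairs as integration test candidates")

testing_load(4)
# by iteration 4 there are already 190 feature pairs to weigh against 5 story tests
```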
Colin and I talked about this, and we explored other kinds of testing we could be doing. Once we had a sufficiently large list of testing types (unit testing, “ility” testing, etc.), it became clear that “testing debt” was a more appropriate name than “integration testing debt”.
Why do we want to test that much? As I’ve noted before, we can do testing in three broad contexts: the code context (addressed through TDD), the system context, and the social context. The social context is usually the domain of conventional software testers, and tends to rely on testing through a user interface. At this level the application becomes much more complex, greater than the sum of its parts, so we need many more testing techniques to get reasonable coverage. We can get pretty good coverage at the code level, but the test possibilities multiply as we move towards the user interface.
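As a quick, hypothetical illustration of how possibilities multiply at the user interface: even a small form with a few fields produces more combinations than the handful of checks we would write at the unit level. The fields and values below are made up.

```python
# Hypothetical illustration: a small UI form multiplies into many
# input combinations, far beyond a handful of unit-level checks.
from itertools import product

field_values = {  # imagined form fields with a few representative values each
    "country":   ["CA", "US", "other"],
    "payment":   ["credit", "debit", "invoice"],
    "shipping":  ["standard", "express"],
    "gift_wrap": [True, False],
}

combinations = list(product(*field_values.values()))
print(f"{len(combinations)} combinations from just four fields")  # 3*3*2*2 = 36
```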
I’m not talking about what is frequently called “iteration slop” or “trailer-hitched QA” here. Those occur when development is done, and testing starts at the end of an iteration. The separate QA department or testing group then takes the product and deems it worthy of passing the iteration after they have done their testing in isolation. This is really still doing development and testing in silos, but within an iterative lifecycle.
I’m talking about doing the following within an iteration, alongside development:
- work as a sounding board with development on emerging designs
- help generate test ideas prior to story development (generative TDD)
- help generate test ideas during story development (elaborative TDD)
- provide initial feedback on a story under development
- test a story that has completed development
- integration test the product developed to date
Of note, when we are testing alongside development, we can actually engage in more testing activities than when working in phases (or in a “testing” phase near the end). We are able to complete more testing, but that can require more testers if we are still to meet our timelines. As we incur more testing debt throughout a project, we have some options for dealing with it. One is to leave off story testing in favour of integration testing. I don’t really like this option; I prefer keeping the feedback loop as tight as we can on what is being developed now. Another is to schedule a testing phase at the end of the development cycle to do all the integration, “ility”, and system testing. Again, I find this can cause a huge lag in the feedback loop.
I prefer a trade-off. We keep the feedback loop on story testing as tight as we can, so we stay in synch with the developers. We do as much integration, system, and “ility” testing as we can in each iteration, but when we are running out of time, we incur some testing debt in these areas. As the product is developed more (and there is now much more potential for testing), we bring in more testers to help address the testing debt, bringing on the maximum number we can near the end. We schedule a testing iteration at the end to catch up on the testing debt that we determine will help us mitigate project risk.
There are several kinds of testing debt we can incur:
- integration testing
- system testing
- security testing
- usability testing
- performance testing
- some unit testing
And the list goes on.
This idea is very much a work-in-progress. Colin and I have both noticed that on the development side, we are also incurring testing debt. Testing is an area with an enormous number of possibilities, as Cem Kaner has pointed out in “The Impossibility of Complete Testing” (Presentation) (Article).
Much like technical debt, we can incur testing debt unknowingly. Unlike technical debt, which we can pay down by refactoring, I don’t know of a way to repay testing debt other than strategically adding more testers and scheduling time to pay it back, at least when we are dealing with contexts other than program code. Even in the code context, we may still incur testing debt that refactoring doesn’t completely pay down.
How have you dealt with testing debt? Did you realize you were incurring this debt, and if so, how did you deal with it? Please drop me a line and share your ideas.
Test Automation is Software Development
This is a concept I can’t stress enough: test automation is software development. There really is no getting around it. Even if we use a record/playback testing tool, some sort of code is generated behind the scenes. This is nothing new; people like James Bach and Bret Pettichord have reminded us of it for years. Attempts to automate software development have been around for a while. Here’s a quote Daniel Gackle sent to me from “Facts and Fallacies of Software Engineering” by Robert Glass:
Through the years, a controversy has raged about whether software work is trivial and can be automated, or whether it is in fact the most complex task ever undertaken by humanity. In the trivial/automated camp are noted authors of books like “Programming without Programmers” and “CASE — The Automation of Software” and researchers who have attempted or claim to have achieved the automation of the generation of code from specification. In the “most complex” camp are noted software engineers like Fred Brooks and David Parnas.
Software testing is also a non-trivial, complex task. Dan Gackle commented on why what Glass calls the “trivial/automated camp” still has such currency in the testing world, and has less support in the development world:
It’s a lot easier to deceive yourself into buying test automation than programming automation because test automation can be seen to produce some results (bad results though they may be), whereas attempts to automate the act of programming are a patently laughable fiasco.
I agree with Dan, and take this one step further: attempting to automate the act of software testing is also a fiasco. (It would be laughable if it weren’t for all the damage it has caused the testing world.) It just doesn’t get noticed as quickly.
If we want to automate a task such as testing, we first need to ask the question: “What is software testing?” Once we know what it is, we are ready to ask: “Can we automate software testing?”
Here is a definition of software testing activities that I’m comfortable with (I got this from James Bach):
- Assessing product risks
- Engaging in testing activities
- Asking questions of the product to evaluate it. We do this by gathering information using testing techniques and tools.
- Using a mechanism by which we can recognize a problem (an oracle; see the sketch after this list)
- Being governed by a notion of test coverage
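To sketch what an oracle can look like in code: below, a hypothetical sort_records() stands in for the code under test, and a slow but trusted reference implementation serves as the oracle. This is a minimal illustration of the idea, not a recipe.

```python
# Minimal oracle sketch: compare the code under test against a slower,
# trusted reference implementation. sort_records() is hypothetical,
# standing in for whatever we are actually testing.
import random

def sort_records(records):    # the (hypothetical) code under test
    return sorted(records)

def reference_sort(records):  # deliberately simple, trusted oracle
    result = list(records)
    for i in range(len(result)):
        for j in range(len(result) - 1):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

for _ in range(100):
    data = [random.randint(0, 1000) for _ in range(random.randint(0, 50))]
    assert sort_records(data) == reference_sort(data), f"mismatch on {data}"
print("no mismatches found in 100 random trials")
```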
What we call “test automation” really falls under the tools and techniques section. It does not encapsulate software testing. “Test automation” is a valuable tool in our tester’s toolbox that can help us do more effective testing. It does not and cannot replace a human tester, particularly at the end-user level. It is a sharp tool, though, and we can easily cut ourselves with it. Most test automation efforts fail because they don’t take software architecture into account, they don’t plan for maintenance, and they tend to be understaffed, often by non-programmers.
Test automation efforts suffer from poor architecture, bugs (which can cause false positives in test results), high maintenance costs, and ultimately unhappy customers. Sound familiar? Regular software development suffers from these problems as well, but we get faster and louder feedback from paying customers when we get it wrong in a product. When we get it wrong in test automation, it is more insidious; it may take a long time to realize a problem is there. By that time, it might be too late. Customers are quietly moving on to competitors, talented testers are frustrated and leaving your company to work for others. The list goes on.
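One way to take architecture and maintenance into account is to keep knowledge of the user interface in one place instead of scattering it through every recorded script. Here is a hedged sketch of that idea; the LoginPage class and the FakeDriver are my own illustration, not any particular tool’s API.

```python
# Sketch of maintainable test architecture: all knowledge of the login
# screen lives in one page object, so a UI change means one fix rather
# than hunting through hundreds of recorded scripts. The driver is a
# fake, standing in for whatever UI driver a real project would use.

class FakeDriver:
    def __init__(self):
        self.page = "login"
    def type(self, locator: str, text: str) -> None:
        print(f"type {text!r} into {locator}")
    def click(self, locator: str) -> None:
        print(f"click {locator}")
        self.page = "dashboard"

class LoginPage:
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "id=login-button"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user: str, password: str) -> None:
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

def test_valid_login():
    driver = FakeDriver()
    LoginPage(driver).log_in("jonathan", "s3cret")
    assert driver.page == "dashboard"

test_valid_login()
```

The test now reads as intent rather than as a pile of locators, which is where much of the maintenance cost hides.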
This attitude that “test automation” is a silver-bullet solution to our problems contributes to the false reputation of testing as a trivial task, and testers are blamed for the ultimate poor results. “Our testers didn’t do their jobs. We had this expensive tool that came with such great recommendations, but our testers couldn’t get it to work properly. If we can hire an expert in “Test Company X’s Capture/Replay Tool”, we’ll be fine.” So instead of facing up to the fact that test automation is a very difficult task that requires skill, resources, good people, design, and more, we hire one guy to do it all with our magic tool. And the vicious circle continues.
The root of the problem is that we have trivialized the skill involved in software testing, when we should have hired skilled testers to begin with. When we trivialize the skill, we are open to the great claims of snake-oil salesmen who promise the world and underdeliver. Once we have sunk a lot of money into a tool that doesn’t meet our needs, will we admit it publicly? (In many cases, the test tool vendors forbid you from doing this in their license agreements anyway. One vendor forbids you from talking at all about their product when you buy it.)
In fact, I believe so strongly that “test automation” is not software testing, I agree with Cem Kaner that “test automation” is in most contexts (particularly when applied to a user interface) a complete misnomer. I prefer the more accurate term “Computer Assisted Testing”. Until computers are intelligent, we can’t automate testing; we can only automate some tasks that are related to testing. The inquiry, the analysis, the testing skill are not something a machine can do. Cem Kaner has written at length about this in Architectures of Test Automation. In software development, we benefit greatly from automating many tasks that are related to software development, without attempting to automate software development itself. The same is true of testing. Testing is a skilled activity.
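In that computer-assisted spirit, here is a small sketch of what I mean: the machine does the repetitive driving and collecting, and a human does the evaluating afterwards. The fetch_report() function is a made-up stand-in for a real system call.

```python
# Computer-assisted testing sketch: the script automates the mechanical
# legwork (driving inputs, collecting outputs); the inquiry and judgment
# stay with a human tester. fetch_report() is a hypothetical stand-in.
import csv

def fetch_report(region: str, quarter: str) -> str:
    return f"[report for {region}, {quarter}]"  # placeholder for the real call

regions = ["north", "south", "east", "west"]
quarters = ["Q1", "Q2", "Q3", "Q4"]

with open("report_survey.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["region", "quarter", "output"])
    for region in regions:
        for quarter in quarters:
            writer.writerow([region, quarter, fetch_report(region, quarter)])

# The CSV exists to be read by a person: whether each report "looks right"
# is a judgment the script cannot make.
print("wrote 16 outputs to report_survey.csv for human review")
```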
Anyone who claims they can do software test automation without programming is either very naive themselves, or they think you are naive and are trying to sell you something.
Remuneration and Punished by Rewards
Lately I’ve been reading Punished by Rewards by Alfie Kohn. He shows how incentives can get people to do things in the short run, but ultimately fail. I frequently get asked how to measure testers’ performance, and I am often at odds with people looking for hard numbers to measure and reward testers by. “Without numbers on _____, how can I do a performance evaluation? How can I use ‘pay for performance’ unless I have hard numbers? I need to measure testers to motivate them.” Alfie Kohn explains why these kinds of practices can backfire better than I can. If you are a manager, teacher, or leader of any sort with people reporting to you, I urge you to read this book. (If you aren’t in a position of leadership, feel free to read it too.) Even if you disagree with it, at least it will provide a different perspective.
Questions about fees come up for anyone who charges for their services, and I was pleased that Alfie Kohn dealt with this issue in chapter 10 of Punished by Rewards. On p. 183:
When someone contacts me about giving a lecture or writing an article, I ask how much money is involved and often negotiate for the maximum amount that seems to be fair and that the organization can afford to pay. Then, assuming we have come to an agreement, I do my best to not think about the money again. I do this because I fear the consequences of construing what I am doing in terms of what I am being paid: eventually I might find myself thinking, “A is paying me twice as much as B, so I’d better do twice as good a job for A.” If I ever reach that point, my integrity will be gone, and my intrinsic motivation will have fled along with it.
Kohn goes on to explain that if we don’t decouple the task from the compensation, we run the risk of being too focused on what we get at the end, at the expense of what we are supposed to be working on right now. That “intrinsic motivation” is why we do the work we do, and why we do it well. If we sacrifice it by not being ethical, or by allowing money to take precedence, our work will suffer.
Money, status, fear, or “gold stars” are not good motivators. Kohn answers the question about motivation on p. 181 of Punished by Rewards:
…it is possible to get people to do something. That is what rewards, punishments and other instruments of control are all about. But the desire to do something, much less to do it well, simply cannot be imposed; in this sense, it is a mistake to talk about motivating other people. All we can do is set up certain conditions that will maximize the probability of their developing an interest in what they are doing and remove the conditions that function as constraints.
On p. 182, Kohn echoes W. Edwards Deming: “Pay is not a motivator.” Kohn says this about pay:
Pay people generously and equitably. Do your best to make sure they don’t feel exploited. Then do everything in your power to help them put money out of their minds.
This might sound foreign at first, but think about how groups of people who do extraordinary things are motivated. Look at Open Source software where legions of talented people work hard for no pay. Look at religious and political organizations where people believe in a cause, and are willing to commit to it and share their talents and time. Pay isn’t a motivator for people who believe in something they feel is important. Their intrinsic motivation is powerful, and when they become disillusioned, they drift. Often, when people lose track of the purpose, it is replaced by the appearance of purpose, which can be dangerous.
Be careful what you measure, and what you reward based on those measurements. If that intrinsic motivation to do work is clouded by the rewards, we will get results that are surprising, and not necessarily in line with our goals as leaders. In fact, we may eventually frustrate our overall goals as they are superseded by measurement and rewards at the micro level. On p. 183, Kohn says:
Providing feedback that employees can use to do a better job ought never to be confused or combined with controlling them by offering (or withholding) rewards.
He goes on to explain how this kind of action can have disastrous results. Those of us who lead testers need to be sure that the way we measure and reward does not distort or remove the tester’s intrinsic motivation for testing. If the tester has no intrinsic motivation anymore, we run the risk of not getting the kind of work completed that the company hopes for.
The Role of a Tester on Agile Projects
I’ve stopped using this phrase: “The role of a tester on agile projects”. There has been endless debate over whether there should be dedicated testers on agile projects. In the end, endless debate doesn’t appeal to me. I’ve been on enough agile teams now to see that testers can add value in pretty much the same way they always have. They provide a service to a team and customers that is information-based. As James Bach says: “testing lights the way.” Programmer testing and tester testing complement each other, and agile projects can provide an ideal environment for an amazing amount of collaboration and testing. It can be hard to break in to some agile projects, though, when many agilists seem to view conventional testing with indifference or, in some cases, with disdain. (Bad experiences with the QA Police don’t help, so the testing community bears some responsibility for this poor view of testing.)
What I find interesting is hearing from people who fit in and contributed on Agile teams. I like to hear stories about how a tester worked on an XP team, or how a business analyst fit in and thrived on an Agile team. One of the most fascinating stories I’ve encountered was a technical writer who worked on an XP team. It was amazing to find out how they adapted and filled a “team member role” to help get things done. It’s even more fascinating to see how the roles of different team members change over time, and how different people learn new skills and roll up their sleeves to get things done. There is a lot of knowledge in areas such as user experience work, testing, and technical writing that Agile team members can learn from. In turn, those specialists can learn a tremendous amount from Agilists. This collaborative learning helps move teams forward and can really push a team’s knowledge envelope. When people share the successes and failures of what they have tried, we all benefit.
I like to hear of people who work on a team in spite of being told “dedicated testers aren’t in the white book” or “our software doesn’t need documentation”. I’m amazed at how adaptable smart, talented people are, blazing trails and adding value on teams with challenging and unusual constraints. A lot of the “we don’t need your role” discussion sounds like the same old arguments that testers, technical writers, user experience folks and others have been hearing for years from non-Agilists. Interestingly enough, those who work on agile projects often report that in spite of initial resistance, they manage to fit in and thrive once they adapt. Those who were their biggest opponents at the beginning of a project often become their biggest supporters. This says to me that there are smart, capable people from a lot of different backgrounds who can offer something to any team they are a part of.
The Agile Manifesto says: “we value: Individuals and interactions over processes and tools.” Why then is there so much debate over the “tester role”? Many Agile pundits dismiss having dedicated testers on Agile projects. I often hear: “There is no dedicated tester role needed on an Agile team.” Isn’t excluding someone because of their “role” putting a process over people? If someone joins an Agile team who is not a developer, and they believe in the values of the methodology and want to work with a great team, do we turn them away? Shouldn’t we embrace the people, even if the particular process we are following does not spell out their duties?
I would like to see people from non-developer roles encouraged to try working on agile teams, and to share successes and failures so we can all benefit. I still believe software development has a long way to go, and we should try to improve every process. A danger of codifying and spreading a process is that it doesn’t have all the answers for every team. When we don’t have all the answers, we need to look at the motivations and values behind a process. For example, what attracted me most to XP were the values. My view on software processes is that we should embrace the values, and use them as a base from which to strive for constant improvement. That means we experiment and push ideas forward, fight apathy and hubris, and, as Deming said, drive out fear.
Tester as Editor Metaphor
Sometimes when I’m asked to explain how I see the role of testers on development teams, I use an (admittedly simplified) editor role as an analogy. For example, I might be asked to write an article on a particular topic, say test automation, and the publisher asks me to touch on three areas: a test strategy, a list of test tools and an example. These three requirements serve as the acceptance tests. I will write several drafts on my own, and with each draft I will remove spelling errors and grammatical mistakes. These are my unit tests. I move iteratively through article drafts until I feel I have expressed the ideas that are needed for the article. Once my unit tests have passed, and I’m confident I have touched on the three areas I’ve been asked to write about (my acceptance tests seem to pass), I send the article to an editor for feedback.
The editor checks the content. If I haven’t met the obligation of the acceptance tests, they will let me know quickly that I missed a requirement. Good editors point out areas where I haven’t expressed an idea as clearly as I could. Good editors really know the audience, and point out areas that the audience may have problems with. They may also point out spelling mistakes and grammatical errors I have missed. They help draw out ideas and help me make the article all it could be given the time and resources available.
The editor doesn’t merely point out flaws; they can also provide suggestions to help overcome them. A cluster of writing errors may indicate to an author that an idea is malformed. Editors can spot problems that the author might miss because they are a skilled “second set of eyes”. They can provide constructive criticism and help encourage the author prior to publishing. It isn’t necessary for all articles to have a formal editor, but when I work with a good editor I realize how much better the article is than if I did it on my own.
In many ways a software program is also an expression of an idea. An idea may be technically correct and may meet the requirements of the customer, but may not be expressed clearly for the intended audience. My role as a tester is not an adversarial one; like the editor, my goal is to encourage and help make the program all it can be prior to its release. Like a good editor, a tester knows the audience of the program and the context in which it will operate.
In this post, Brian Marick talks about writers’ workshops, which look like a good idea for software testing:
I’m drawn to the analogy of writers’ workshops as they are used in the patterns community. Considerable care is taken to prepare the author to be receptive to critical comments. At its worst, that leads to nicey-nice self-censorship of criticism. But at its best, it allows the author to really hear and reflect on critical ideas, freed of the need to be defensive, in a format where teaching what should change doesn’t overwhelm teaching what works and should stay the same.
Software development teams could learn a lot about constructive criticism from good writers’ workshops.
In another post, Brian sums up the idea I’m trying to get across here:
… I offer this picture of an Agile team. They – programmers, testers, business experts – are in the business of protecting and nurturing the growing work until it’s ready to face the world.
Good testers are not the adversaries of developers; they are part of a team that works collectively towards creating the best software it can. Bad testing, like bad editing, is not in the business of nurturing a growing work.