Testing Debt

When I’m working on an agile project (or any process using an iterative lifecycle), an interesting phenomenon occurs. I’ve been struggling to come up with a name for it, and conversations with Colin Kershaw have helped me settle on “testing debt”. (Note: Johanna Rothman has touched on this before; she considers it part of technical debt.) Here’s how it works:

  • in iteration one, we test all the stories as they are developed, and are in synch with development
  • in iteration two, we remain in synch testing stories, but when we integrate what has been developed in iteration one with the new code, we now have more to test than just the stories developed in that iteration
  • in iteration three, we have the stories to test in that iteration, plus the integration of the features developed in iterations that came before

As you can see, integration testing piles up. Eventually we have so much integration testing to do, on top of story testing, that we have to sacrifice one or the other because we are running out of time. To end the iteration (often two to four weeks in length), some sort of testing needs to be cut and deferred until later. I prefer keeping in synch with development, so I consciously incur “integration testing debt”, and we schedule time at the end of development to test the completed system.
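
To make the pile-up concrete, here is a toy model (my own sketch, with made-up numbers): suppose each iteration delivers five stories, and each new story should be checked against every feature built in earlier iterations. The exact counts are hypothetical; the growth pattern is the point.

    # A toy model of testing debt, in Python. Each iteration adds a fixed
    # amount of story testing, plus integration checks against everything
    # built in earlier iterations. The constants are hypothetical.

    STORIES_PER_ITERATION = 5

    def testing_load(iteration):
        """Return (story_tests, integration_checks) for one iteration."""
        story_tests = STORIES_PER_ITERATION
        # Features delivered by all previous iterations.
        existing_features = STORIES_PER_ITERATION * (iteration - 1)
        # Each new story gets checked against each existing feature.
        integration_checks = STORIES_PER_ITERATION * existing_features
        return story_tests, integration_checks

    for i in range(1, 5):
        stories, integration = testing_load(i)
        print(f"iteration {i}: {stories} story tests, "
              f"{integration} integration checks")

Story testing stays flat at five stories per iteration, while the integration work grows every iteration (0, 25, 50, 75 checks here); whatever we can’t fit in becomes debt.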

Colin and I talked about this, and we explored other kinds of testing we could be doing. Once we had a sufficiently large list (unit testing, “ility” testing, etc.), it became clear that “testing debt” was a more appropriate term than “integration testing debt”.

Why do we want to test that much? As I’ve noted before, we can do testing in three broad contexts: the code context (addressed through TDD), the system context and the social context. The social context is usually the domain of conventional software testers, and tends to rely on testing through a user interface. At this level the application becomes much more complex, greater than the sum of its parts, so there is far more opportunity to apply testing techniques to satisfy coverage. We can get pretty good coverage at the code level, but the number of test possibilities grows as we move towards the user interface.

I’m not talking about what is frequently called “iteration slop” or “trailer-hitched QA” here. Those occur when development is done, and testing starts at the end of an iteration. The separate QA department or testing group then takes the product and deems it worthy of passing the iteration after they have done their testing in isolation. This is really still doing development and testing in silos, but within an iterative lifecycle.

I’m talking about doing the following within an iteration, alongside development:

  • work as a sounding board with development on emerging designs
  • help generate test ideas prior to story development (generative TDD)
  • help generate test ideas during story development (elaborative TDD)
  • provide initial feedback on a story under development
  • test a story that has completed development
  • integration test the product developed to date

Of note: when we test alongside development, we can engage in more testing activities than when we work in phases (or in a “testing” phase near the end). We are able to complete more testing, but that can require more testers if we are still to meet our timelines. As we incur more testing debt throughout a project, we have some options for dealing with it. One is to leave off story testing in favour of integration testing. I don’t really like this option; I prefer keeping the feedback loop as tight as we can on what is being developed now. Another is to schedule a testing phase at the end of the development cycle to do all the integration, “ility” and system testing. Again, I find this can cause a huge lag in the feedback loop.

I prefer a trade-off. We keep as tight a feedback loop as possible on testing stories that are being developed, so we stay in synch with the developers. We do as much integration, system and “ility” testing as we can in each iteration, but when we run out of time, we incur some testing debt in these areas. As the product grows (and there is now much more potential for testing), we bring in more testers to help address the testing debt, and bring on the maximum number we can near the end. We schedule a testing iteration at the end to catch up on the testing debt that we determine will help us mitigate project risk.

There are several kinds of testing debt we can incur:

  • integration testing
  • system testing
  • security testing
  • usability testing
  • performance testing
  • some unit testing

And the list goes on.

This idea is very much a work in progress. Colin and I have both noticed that we are also incurring testing debt on the development side. Testing is an area with enormous potential, as Cem Kaner has pointed out in “The Impossibility of Complete Testing” (available as both a presentation and an article).

Much like technical debt, we can incur testing debt unknowingly. But where technical debt can be repaid through refactoring, I don’t know of a way to repay testing debt other than strategically adding more testers, and scheduling time to pay it back when we are dealing with contexts other than program code. Even in the code context, we may still incur testing debt that refactoring doesn’t completely pay down.

How have you dealt with testing debt? Did you realize you were incurring this debt, and if so, how did you deal with it? Please drop me a line and share your ideas.

Test Automation is Software Development

This is a concept I can’t stress enough: test automation is software development. There really is no getting around it. Even if we use a record/playback testing tool, some sort of code is generated behind the scenes. This is nothing new; people like James Bach and Bret Pettichord have reminded us of it for years. Attempts to automate software development itself have also been around for a while. Here’s a quote from Robert Glass’s “Facts and Fallacies of Software Engineering” that Daniel Gackle sent to me:

Through the years, a controversy has raged about whether software work is trivial and can be automated, or whether it is in fact the most complex task ever undertaken by humanity. In the trivial/automated camp are noted authors of books like “Programming without Programmers” and “CASE — The Automation of Software” and researchers who have attempted or claim to have achieved the automation of the generation of code from specification. In the “most complex” camp are noted software engineers like Fred Brooks and David Parnas.

Software testing is also a non-trivial, complex task. Dan Gackle commented on why what Glass calls the “trivial/automated camp” still has such currency in the testing world, while it has so little support in the development world:

It’s a lot easier to deceive yourself into buying test automation than programming automation because test automation can be seen to produce some results (bad results though they may be), whereas attempts to automate the act of programming are a patently laughable fiasco.

I agree with Dan, and take this one step further: attempting to automate the act of software testing is also a fiasco. (It would be laughable if it weren’t for all the damage it has caused the testing world.) It just doesn’t get noticed as quickly.

If we want to automate a task such as testing, we first need to ask: “What is software testing?” Only once we know what it is are we ready to ask: “Can we automate software testing?”

Here is a definition of software testing activities that I’m comfortable with (I got this from James Bach):

  • Assessing product risks
  • Engaging in testing activities
  • Asking questions of the product to evaluate it. We do this by gathering information using testing techniques and tools.
  • Using a mechanism by which we can recognize a problem (an oracle)
  • Being governed by a notion of test coverage

What we call “test automation” really falls under the tools-and-techniques item above. It does not encapsulate software testing. “Test automation” is a valuable tool in our tester’s toolbox that can help us test more effectively. It does not and cannot replace a human tester, particularly at the end-user level. It is a sharp tool, though, and we can easily cut ourselves with it. Most test automation efforts fail because they don’t take software architecture into account, they don’t plan for maintenance, and they tend to be understaffed, often with non-programmers.

Test automation efforts suffer from poor architecture, bugs (which can cause false positives in test results), high maintenance costs, and ultimately unhappy customers. Sound familiar? Regular software development suffers from these problems as well, but when we get it wrong in a product, we get faster and louder feedback from paying customers. When we get it wrong in test automation, it is more insidious; it may take a long time to realize a problem is there, and by then it might be too late. Customers quietly move on to competitors; frustrated, talented testers leave your company to work for others. The list goes on.

This silver-bullet attitude toward “test automation” contributes to the false reputation of testing as a trivial task, and testers are blamed for the ultimately poor results. “Our testers didn’t do their jobs. We had this expensive tool that came with such great recommendations, but our testers couldn’t get it to work properly. If we can hire an expert in ‘Test Company X’s Capture/Replay Tool’, we’ll be fine.” So instead of facing up to the fact that test automation is a very difficult task that requires skill, resources, good people, design, etc., we hire one guy to do it all with our magic tool. And the vicious circle continues.

The root of the problem is that we have trivialized the skill in software testing, when we should have hired skilled testers to begin with. When we trivialize the skill, we are open to the grand claims of snake-oil salesmen who promise the world and underdeliver. And once we have sunk a lot of money into a tool that doesn’t meet our needs, will we admit it publicly? (In many cases the test tool vendors forbid you from doing so in their license agreements; one vendor forbids you from talking about their product at all once you buy it.)

In fact, I believe so strongly that “test automation” is not software testing that I agree with Cem Kaner: “test automation” is in most contexts (particularly when applied to a user interface) a complete misnomer. I prefer the more accurate term “Computer Assisted Testing”. Until computers are intelligent, we can’t automate testing; we can only automate some tasks that are related to testing. The inquiry, the analysis, the testing skill are not things a machine can do. Cem Kaner has written at length about this in Architectures of Test Automation. In software development, we benefit greatly from automating many tasks related to software development, without attempting to automate software development itself. The same is true of testing. Testing is a skilled activity.
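
As a small illustration of the distinction, here is a sketch (my own toy example; price_quote() is a hypothetical function standing in for the product): the computer does the mechanical work of running inputs and flagging changes against a saved baseline, while a human applies the oracle and decides what each flagged change means.

    # Computer Assisted Testing, sketched in Python: the machine runs the
    # inputs and flags changes against a saved baseline; the human decides
    # what the changes mean. price_quote() is hypothetical product code.

    def price_quote(quantity):
        """Hypothetical product code: bulk orders get a 10% discount."""
        rate = 0.9 if quantity >= 10 else 1.0
        return round(quantity * 9.99 * rate, 2)

    def flag_changes(baseline, current):
        """The mechanical task: report outputs that differ from the baseline."""
        return {q: (baseline[q], current[q])
                for q in baseline if baseline[q] != current[q]}

    # Baseline captured from an earlier build (one value is now stale).
    baseline = {1: 9.99, 9: 89.91, 10: 99.90, 100: 899.10}
    current = {q: price_quote(q) for q in baseline}

    for q, (old, new) in flag_changes(baseline, current).items():
        # The tool only says "this changed". Whether that is a regression,
        # an intended fix, or a stale baseline is a human judgment call.
        print(f"quantity {q}: was {old}, now {new} -- needs a tester's judgment")

The script never decides whether the new price is correct; that judgment, the actual testing, stays with a person.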

Anyone who claims they can do software test automation without programming is either very naive themselves, or they think you are naive and are trying to sell you something.

Testers and Independence

I’m a big fan of collaboration within software development groups. I like to work closely with developers and other team members (particularly documentation writers and customers, who can be great bug finders) because we get great results working together.

Here are some concerns I hear from people who aren’t used to this:

  • How do testers (and other critical thinkers) express critical ideas?
  • How can testers integrated into development teams still be independent thinkers?
  • How can testers provide critiques of product development?

Here’s how I do it:

1) I try very hard to be congruent.

Read Virginia Satir’s work, or Weinberg’s Quality Software Management series for more on congruence. I work on being congruent by asking myself these questions:

  • “Am I trying to manipulate someone (or the rest of the team) by what I’m saying?”
  • “Am I not communicating what I really think?”
  • “Am I putting the process above people?”

Sounds simple, but it goes a long way.

We can be manipulative on agile teams as well. If I want a certain bug fixed that isn’t being addressed, I can subtly alter my status at the daily standup to give it more attention (which will eventually backfire), or I can be congruent and just say: “I really want us to focus on this bug.”

Whenever I voice a small concern, even when the rest of the team is going in another direction, it is worthwhile; whenever I don’t, we end up with problems. Doing this helps me retain my independence as an individual working in a team. If everyone does it, we get diverse opinions, and hopefully diverse views on potential risks, instead of getting wrapped up in groupthink. Read Brenner’s Appreciate Differences for more on this.

Sometimes we ignore our intuition and doubts when we are following the process. For example, we may get annoyed when we feel someone else is violating one of the 12 practices of XP, and harp on them about not following the process instead of finding out what the problem is. I have seen this happen frequently, and I’ve been on teams that were disasters because we had complete faith in the process (even with Scrum and XP) and forgot about the people. How did we put the process over people? Agile methods are not immune to this problem. On one project, we ran around saying “that isn’t XP” whenever we saw someone doing something that didn’t fit the process. In most cases it was good work, and our policing turned out to be a manipulative way of dealing with something we saw as a problem. In the end, some of those practices were good ones that should have been retained in that context, on that team. They weren’t “textbook” XP, but the people with the ideas knew what they were doing; the inanimate “white book” didn’t.

2) I make sure I’m accountable for what I’m doing.

Anyone on the team should be able to come up and ask me what I’m doing as a tester, and I should be able to explain it clearly. Skilled testing does a lot to build credibility, and an accountable tester will be given freedom to try new ideas. If I’m accountable for what I’m doing, I can’t hide behind a process or behind what the developers are doing; I need to step up and apply my skills where they will add value. When you add value, you are respected and given freer rein on testing activities that might have been discouraged previously.

Note: by accountability, I do not mean lots of meaningless “metrics”, charts, graphs and other visible measurement attempts that I might use to justify my existence. Skilled testing and coherent feedback will build real credibility; meaningless numbers will not. Test reports that are meaningful, kept in check by qualitative measures, developed with the audience in mind, and actually useful will do far more to build credibility than generating numbers for numbers’ sake.

3) I don’t try to humiliate the programmers, or police the process.

(I now see QA people making a career out of process policing on agile teams.) If you are working together, technical skills should rub off on each other. In some cases I’ve seen testing become “cool” on a project; on one project, not only the testers but also the developers, the BA and the PM were testing. Each used their unique skills to help generate testing ideas and engage in testing. This in turn gave the testers more credibility when they wanted to try different techniques that could reveal potential problems. Now that all the team members had a sense for what the testers were going through, more effort was made to enhance testability. Furthermore, the discovery of potential problems was now encouraged rather than feared. The whole team really bought into testing.

4) I collaborate even more, with different team members.

When I find I’m getting stale with testing ideas, or I’m afraid I’m getting sucked into groupthink, I pair with someone else. Lately, a customer representative has been a real catalyst for my testing. Whenever we work together, I get a new perspective on project risks arising from what is going on in the business, and they find problems I’ve missed. This helps me generate new testing ideas in areas I hadn’t thought of.

Sometimes working with a technical writer, or even a different developer than the one(s) you usually work with, helps you get a new perspective. This ties into accountability as well: I’m accountable for what I’m testing, but so is the rest of the team. Sometimes fun little pretend rivalries emerge: “Bet I can find more bugs than you.” Or: “Bet you can’t find a bug in my code in the next five minutes.” (In one case the developer beat me to the punch by finding a bug in his own code through exploratory testing beside me on another computer, and then gave me some good-natured ribbing about being the first to find a bug.)

Independent thinkers need not sit in a separate, arm’s-length department. The need for an independent testing department is not something I unquestioningly support; in fact, I have found more success through collaboration than isolation. Your mileage will vary.

Consultants Camp Report

In September, I attended Consultants Camp. This was my first time there, and I wasn’t sure what to expect. I had the camp handbook, and read a post by Dale Emery describing Camp. (It’s worth a read, like all of Dale’s writings.)

This was about all I knew of Camp when I flew out of Calgary. Now that I’ve attended Camp and have had time for reflection, I have something to share. Camp had a big effect on me, and I got a lot out of it. I came back home and Elizabeth said: “This has been good for you. You’ve come home rested, energized and full of new ideas.” I struggled to find a good word to describe what Camp meant for me, and found an article called Renewal by fellow Camper Rick Brenner. “Renewal” describes what I felt after attending Camp. Rick says:

Just about every year I attend a conference called Consultants’ Camp. It’s a group of consultants, IT specialists and process experts who meet annually in Mt. Crested Butte, Colorado, for a week of self-organized collegiality and fun. In some ways, it’s a conference like any other — there’s a continental breakfast, days full of sessions, and there is a program. By the end of the conference many of us feel tired and full. Learning is everywhere.

In other ways Camp is unique. The setting, the Colorado Rockies, is inspirational. Attendees give all sessions. There is no sponsor. Every day, there’s a long break in mid-afternoon, which today I’m using to write this essay. Lunch isn’t provided, but most of us ante up and share soup and sandwiches and stimulating conversation. For me, and I think for all of us, there’s a feeling of belonging.

Renewal is a time to step out of the usual routine and re-energize. I feel good to be here, with these people — colleagues and friends. Renewal can be a large block of time, as Consultants’ Camp is, or it can be a few minutes. We find renewal in weekends, vacations, days off, even in a special evening or hour in the midst of routine.

I couldn’t have said it better myself.

Camp had some times of joy, and there were difficult times we experienced with sadness. I talked with other first-time campers, and a common theme emerged: Consultants Camp is a community that is not only highly intellectual, but also understands people’s humanity and emotion. This is a community that cares for each other.

One of my favourite parts of Consultants Camp was the opportunity to spend a lot of time with James Bach. James generously offered to spend time with me talking about the testing craft, and having me work through testing exercises. I learned a tremendous amount from the time we spent together, and I have renewed direction on improving my skills as a tester.

I’ve followed James since about 1999. I had been testing as an intern student for a few months, and was in a unique position of leadership, but I found I was struggling to explain what I was doing when training other testers. I realized I was too focused on technology, and that when testing I was drawing on other disciplines without realizing it. Since I had studied Philosophy and Business as well as Technology, there were non-computer-related influences coming through in my thinking about testing that I hadn’t noticed. I had studied Inductive Logic in university, and when I read James’ work, the connection between software testing and philosophy that I had been bumping up against was already spelled out for me. I was pleased to see James recommend books on Abductive Inference, and I started reading everything of his I could find. What he was saying matched my beliefs and experience, and he had a lot of great information for me to use.

James continues to be an influence because I greatly respect his commitment to teaching testing skills in the software testing field, and because my experience bears out what he says. We tend to think alike when it comes to testing, and I have the benefit of his knowledge and experience to draw on in my own work. I, like many other testers, am indebted to him for his willingness to share great ideas and teach what he knows.

I have used James’ Heuristic Risk-Based Testing strategy ideas a lot, particularly on agile teams. However, it wasn’t until I worked with James that I really began to own the concept of heuristics, and to develop mnemonics to help me remember and apply my own heuristics when testing.

I was pleased to finally meet James in person last year, and during Consultants Camp we picked up where we left off. James posed problems, I worked on solving them, and he continuously challenged me to push myself harder as a tester and as a thinker. I learned a lot, much as I have from the other great (and very rare) teachers who have challenged me this way: Dr. Michael Kubara, Dr. John Rutland, and my Father come to mind. We also spent time working on the motivations behind the Context-Driven Testing School principles, and talking about testing, working and life. I appreciate the time we were able to spend together, and I now have a lot of work to do and new areas to explore in my learning.

Testing and Writing

I was recently involved in a project and noticed something interesting. I was writing documentation for a particular feature, and had a hard time expressing it for a user guide. We had made a particular design decision that made sense and worked intuitively, but was difficult to write about coherently. As time went on, we discovered through user feedback that this feature was confusing. I remembered how I had struggled to write about it, and shared the experience with my wife, who worked as a technical writer for several years. She replied that this is very common: she and other writers found a lot of bugs simply because a software feature was awkward to write about.

I’ve found that story cards in XP or Scrum backlog items can suffer from incoherence, like any other form of writing. Incoherently written requirements, stories, use cases, etc. are frequently a sign of a problem in the design. There is something about actually writing thoughts down that helps us identify incoherence. While they remain thoughts, or are expressed verbally, we can defend problem spots with clarifications; once they are written down, we see them in a different light. What we think we understand may not be so clear when we capture our thoughts on paper.

Some of the most effective testers I’ve worked with were technical writers. They had a different perspective on the project than the rest of us because they had to understand the software and write about it in a way an end-user would understand. They found problems in the design when they struggled to write about a feature. Hidden assumptions can suddenly come to light through capturing ideas on paper and reading them back. When testing, try writing down design ideas as well as testing ideas, and see if what you write is coherent. Incoherent expressions of product features or test ideas can tell you a lot about development and testing.

Remuneration and Punished by Rewards

Lately I’ve been reading Punished by Rewards by Alfie Kohn. He shows how incentives can get people to do things in the short run, but ultimately fail. I frequently get asked how to measure testers’ performance, and I am often at odds with people looking for hard numbers to measure and reward testers by. “Without numbers on _____, how can I do a performance evaluation? How can I use ‘pay for performance’ unless I have hard numbers? I need to measure testers to motivate them.” Alfie Kohn explains why these kinds of practices can backfire better than I can. If you are a manager, teacher, or leader of any sort with people reporting to you, I urge you to read this book. (If you aren’t in a position of leadership, feel free to read it too.) Even if you disagree with it, it will at least provide a different perspective.

Questions about fees come up for anyone who charges for their services, and I was pleased that Alfie Kohn dealt with this issue in chapter 10 of Punished by Rewards. On p. 183:

When someone contacts me about giving a lecture or writing an article, I ask how much money is involved and often negotiate for the maximum amount that seems to be fair and that the organization can afford to pay. Then, assuming we have come to an agreement, I do my best to not think about the money again. I do this because I fear the consequences of construing what I am doing in terms of what I am being paid: eventually I might find myself thinking, “A is paying me twice as much as B, so I’d better do twice as good a job for A.” If I ever reach that point, my integrity will be gone, and my intrinsic motivation will have fled along with it.

Kohn goes on to explain that if we don’t decouple the task from the compensation, we run the risk of focusing too much on what we get at the end, at the expense of what we are supposed to be working on right now. That “intrinsic motivation” is why we do the work we do, and why we do it well. If we sacrifice it by being unethical, or by allowing money to take precedence, our work will suffer.

Money, status, fear, and “gold stars” are not good motivators. Kohn answers the question about motivation on p. 181 of Punished by Rewards:

…it is possible to get people to do something. That is what rewards, punishments and other instruments of control are all about. But the desire to do something, much less to do it well, simply cannot be imposed; in this sense, it is a mistake to talk about motivating other people. All we can do is set up certain conditions that will maximize the probability of their developing an interest in what they are doing and remove the conditions that function as constraints.

On p. 182, Kohn echoes W. Edwards Deming: “Pay is not a motivator.” Kohn says this about pay:

Pay people generously and equitably. Do your best to make sure they don’t feel exploited. Then do everything in your power to help them put money out of their minds.

This might sound foreign at first, but think about how groups of people who do extraordinary things are motivated. Look at Open Source software, where legions of talented people work hard for no pay. Look at religious and political organizations, where people believe in a cause and are willing to commit their talents and time to it. Pay isn’t a motivator for people who believe in something they feel is important. Their intrinsic motivation is powerful, and when they become disillusioned, they drift. Often, when people lose track of the purpose, it is replaced by the appearance of purpose, which can be dangerous.

Be careful what you measure, and what you reward based on those measurements. If the intrinsic motivation to do the work is clouded by rewards, we will get surprising results, not necessarily in line with our goals as leaders. In fact, we may eventually frustrate our overall goals as they are superseded by measurement and rewards at the micro level. On p. 183, Kohn says:

Providing feedback that employees can use to do a better job ought never to be confused or combined with controlling them by offering (or withholding) rewards.

He goes on to explain how this kind of action can have disastrous results. Those of us who lead testers need to be sure that the way we measure and reward does not distort or destroy a tester’s intrinsic motivation for testing. If a tester has no intrinsic motivation left, we run the risk of not getting the kind of work the company hopes for.

Why Are We in Business?

When I look at my work in software testing, I find myself at the intersection of three disciplines:

  • Technology
  • Philosophy
  • Business

I have business training, with a focus on technology. Sometimes I forget about the business and focus on testing ideas, technology concerns, and following a process. As a tester, I think about the product and its impact on users, but I forget about what really keeps us in business: the sale of a product or service. Whether a product is saleable or not might be a testing question we should ask.

It matters not a whit what our process is if we aren’t bringing in as much money as, or more than, our business is shelling out. I have seen fabulous execution of software development processes in companies that went under. I’ve also seen completely chaotic development environments in wildly successful companies. What was the secret of the successful ones? They focused on sales, made the product easy to use, and really listened to the customer. In the end, the difference between the development teams having a job or not came down not to technology or process decisions, but to whether the company could sell the product.

Technology, process and engineering considerations are useful to the extent that they help the company feed the bottom line. If a process helps us deliver a product that customers are happy with, great. But we need to be careful about what we are measuring. We need to look at the big picture, and not congratulate ourselves merely for adhering to our favourite process. If the company isn’t making enough money, none of it will matter; in time we’ll be a project team looking for work, lamenting how great our process was at our last company.

As Deming said, everyone within a company is responsible for quality. I believe we are all responsible for the company’s success in the market as well. Focusing only on engineering, technology and process won’t help if we can’t sell our products, and can’t satisfy and impress a customer. If we reward employees for adherence to a process, how long will it take for those rewards to become more important than the goals of the business? At the heart of every business is this equation:

Revenue - Expenses == Profit

Without profits, the business will eventually cease to exist. In software development, it pays to keep this in mind. Too often we are distracted by details in our daily work, and forget the basics of business.

Jerry Weinberg on Test Automation

I was recently part of a discussion on SHAPE about automated unit and functional testing. I was talking about buggy test code, and how frustrating (and sadly common) it is to have automated test code that is unreliable. As a result, I try to keep test case code as simple and short as possible, in an attempt to reduce the bugs I might inadvertently create. I do what I can to test the test code, but it can easily get out of control. It’s hard to unit test test code, and then I wonder: why do I need tests for my test code? Isn’t that a smell that my test code is too complex?

I was musing that there is nothing more frustrating than buggy test code and buggy test harnesses, and Jerry agreed. Then he pointed out something interesting:

There’s a kind of symmetry to all this, so that someone from Mars might not know, without being told, which was the product code and which was the test code. Each tests the other.

I had never thought of test code and its relationship to production code in this way before. This is a powerful and profound way of thinking about how we can “test our tests”, and it is especially true of automated xUnit tests. It also strengthens the idea that automated test code needs to be treated as seriously as production code, and deserves the same kind of design, planning, resources and people behind it. Without that symmetry, problems inevitably arise, especially in the test code.
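
A tiny example may make the symmetry concrete (this is my sketch, not Jerry’s; the code and names are hypothetical). If the assertion below fails, the failure alone doesn’t tell you which side is broken: the product function, or the expectation encoded in the test.

    # An xUnit-style illustration of the symmetry, in Python. A failure
    # here could mean net_price() is wrong, or that the test encodes a
    # wrong expected value. Each tests the other.

    import unittest

    def net_price(gross, discount_rate):
        """Product code: apply a percentage discount to a gross price."""
        return round(gross * (1 - discount_rate), 2)

    class NetPriceTest(unittest.TestCase):
        def test_ten_percent_discount(self):
            # If this fails, investigate both sides: did the product code
            # break, or did the test's expectation drift from reality?
            self.assertEqual(net_price(100.00, 0.10), 90.00)

    if __name__ == "__main__":
        unittest.main()

Keeping both sides small and simple is part of the point: the simpler the test, the easier it is for the product code to “test” the test right back.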

Thank You

When working as an independent consultant, I am the face of my business to the world. But there are people who inspire me, push me to do better, and help me get things done, and I want to thank them. Running a small business, writing articles, speaking and doing work for clients during the day is a lot of work, and I couldn’t do it alone. My wife and I both work in our small business, but there are others who inspire, encourage and help me learn and work on ideas.

Thanks to my wife Elizabeth who copy edits much of what I write, and handles scheduling, bookkeeping and day to day business tasks. She patiently listens and acts as a sounding board for ideas, and willingly edits for me at all sorts of hours when I’m inspired to write. I could not do what I do without her help and support.

Thanks to Shannon Hale, who does technical editing for me and helps me express ideas more coherently. Shannon is a great editor; when she makes changes, I barely notice them. She’s also great with user experience ideas, encourages the good ones, gives a thumbs-down to the not-so-good ones, and is a go-to person when I have programming questions.

Thanks to James Bach for inspiration, for being willing to share, and for his influence on my testing career. James’ work helped reawaken what I had learned in Philosophy classes in university, and helped me realize I was using that training when testing. James also encouraged me to go independent, provides up-front feedback on testing ideas, and pushes me to do better. He has helped me a tremendous amount, and I wouldn’t be doing what I’m doing without his example, his willingness to share ideas, and his encouragement.

Thanks to Mark McSweeny, the best programmer I’ve ever had the privilege of working with, not to mention one of the deepest thinkers I know. Mark is a go-to person for almost any question I might have, a great encouragement, a great teacher, and an invaluable mentor in software development and in life.

Thanks to Javan Gargus for teaching me about testing from early on in my career and for providing valuable feedback on ideas. I know if Javan likes an idea, I’ve expressed it clearly and it’s a keeper. Javan has told me what he thought I should hear whether I wanted to or not, all while being a supportive colleague and friend.

Thanks to Brian Marick for listening and responding to my ideas, and for teasing out concepts in my work that I hadn’t thought of. Brian was the first editor to ask me to write an article, and an early promoter of some of my ideas, and he has taught me a lot about writing, testing and learning.

I’m probably missing others, but I wanted to publicly thank these particular people for the impact they have had on my life. Without their help and support, I would not be able to accomplish half as much as I do.