I decided to analyze a game feature, the “quest,” which is used in popular video games, particularly MMORPGs. Quests have some compelling aspects for structuring testing activities. Jane McGonigal’s book “Reality is Broken” provided me with a solid analysis of quests and how they can be adapted to real-life activities. Working from her example of a quest (ch. 3, p. 56), I created a basic test quest format (a lightweight sketch follows the list below):
- Goal statement (what we intend to accomplish with our testing work)
- Why the goal matters (why are we testing this?)
- Where to go in the application (what technique or approach are we using to test?)
- Guidance (not detailed steps, but enough to help. Bonus points for using video or other rich media examples.)
- Proof of completion (how do you know when you are finished?)
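To make the format concrete, here is a minimal sketch of a quest captured as a lightweight record. I sketched it in Python only because it is compact; a wiki table or a YAML file would do just as well, and the field names and example quest are my own illustration, not a prescribed template.

```python
from dataclasses import dataclass, field

@dataclass
class TestQuest:
    """One test quest, mirroring the format above."""
    goal: str                  # what we intend to accomplish with our testing work
    why_it_matters: str        # why are we testing this?
    where_to_go: str           # the technique or approach we are using to test
    guidance: list[str] = field(default_factory=list)  # hints, links, rich media (not detailed steps)
    proof_of_completion: str = ""                       # how do we know when we are finished?

# A hypothetical quest for a mobile e-commerce app
checkout_quest = TestQuest(
    goal="Exercise the checkout flow under poor network conditions",
    why_it_matters="Customers abandon carts when payments fail on flaky connections",
    where_to_go="Scenario testing with a network conditioner on real devices",
    guidance=["Try toggling airplane mode mid-payment", "Short demo video linked in the team wiki"],
    proof_of_completion="Session notes and bug reports covering at least three checkout scenarios",
)
```

The point is the five pieces of information, not the tooling.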
A quest is larger than a single testing mission (or a test case), but smaller than a test plan. It’s a way we can organize testing tasks to help provide a sense of completion and interest, even in areas that require exploration and creativity. Just like in a video game, there are multiple ways to satisfy a quest. Once we have fulfilled a quest, which might take hours or days depending on how it is created, we can move on to another one. It’s another way of organizing people, with the added bonus of leveraging years of game design success. Furthermore, modern technology involves a lot of collaboration between people in different locations, using different technology to reach a common goal, and we need to adapt testing to meet that. Testing a mobile app in your lab, one tester at a time, won’t really provide useful testing for an app that requires real-time communication and collaboration for people all over the world. MMOs do a fabulous job of getting people to work hard and coordinate activities in a virtual world, and people have fun doing it. I decided to apply that model to testing.
Where do quests fit? Think in terms of a hierarchy of activities:
- test strategy and plan
- risks that are mitigated through testing
- different models of coverage that map to risk mitigation
- test quests
- sessions, tours, tasks
- feedback and reporting
A good test approach will have more than one model of coverage (see I SLICED UP FUN for 12 mobile coverage models), and under each model of coverage there will be multiple quests. Sometimes quests will be repeated when regression testing is required.
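As a rough sketch of how quests can hang off coverage models, here is one way to track which quests are still open under each model. The model names and quest titles are hypothetical, just to illustrate the grouping.

```python
# Hypothetical quests grouped under coverage models; names are illustrative only.
coverage_models: dict[str, list[str]] = {
    "user scenarios": [
        "First-time buyer completes a purchase",
        "Returning customer redeems a gift card",
    ],
    "network conditions": [
        "Checkout over a throttled connection",
        "Resume a session after losing connectivity",
    ],
}

def open_quests(completed: set[str]) -> dict[str, list[str]]:
    """List the quests still open under each coverage model."""
    return {
        model: [quest for quest in quests if quest not in completed]
        for model, quests in coverage_models.items()
    }

print(open_quests({"First-time buyer completes a purchase"}))
```

When a regression cycle comes around, a completed quest can simply be reopened in a list like this.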
So why add this structure?
One area I have worked on over the years is using structure and guidance to help manage exploratory testing efforts. In the past, test case management systems provided some measure of coverage and oversight, but they have little in the way of intrinsic value for testers. People get tired of repeating the same tests over and over, but management loves the metrics they provide, even though those metrics are incredibly easy to cheat with. Furthermore, from a tester’s perspective, there is an extrinsic reward that is inherent in the design of the tools, and they are easy to use. There is also a sense of completion: once I have run through X number of test cases, I feel like I have accomplished something.
With exploratory testing, the rewards are more intrinsic. The approach can be more fulfilling; I personally feel like I am approaching testing in a more effective way, and I can spend my time on high-value activities. However, it is harder to measure coverage, and it is more difficult to direct people to areas where coverage is required without adding some guidance. There have been a lot of different approaches to adding structure to exploratory testing over the years to find a balance. Test quests are another such approach: a blend that balances the intrinsic rewards of pure exploratory testing with the extrinsic rewards of scripted testing.
As many of you have heard me argue over the years, test cases and test case management systems are merely one form of guidance; there are others. In the exploratory testing community, you will see coverage outlines, checklists, mind maps, charter lists, session sheets, and media such as video demonstrations, among other alternatives. When it comes to managing exploratory testing, one of the first places we start is session-based test management (SBTM). This approach helps us focus testing in particular areas and provides a reviewable result, which makes our auditors and stakeholders happy. I’ve used it a lot over the years.
I’ve also used Bach’s General Functionality and Stability Procedure for over a decade to help organize exploratory testing, though through experience, unique projects and contexts, I have adapted and moved away from the orthodoxy where I saw fit. When I started analyzing why people on my teams have fun with testing, SBTM and Bach’s General Functionality and Stability Procedure were big reasons why. Even though I often use a much more lightweight version of SBTM than he created, people appreciate the structure. The General Functionality and Stability Procedure is a great example of guidance for analysis, exploration, and great things to do as testers.
The other side of fun on the teams I work on is related to humour, collaboration and technology. We often come up with nicknames, divide testing up into teams and hold contests. Who can come up with the best test approach? Who recorded the best bug report video? Who found the most difficult-to-find bug last week? Which team has the most pop culture references in their work? Testing is filled with laughter, excitement, learning, and some good old-fashioned silly fun. We communicate constantly using technology to help stay up to speed on changes and progress, and often other team members want to get in on the action. Sometimes it’s hard to get the coders to code, the product owners to product own, and the managers to manage, because everyone wants in on the fun. In the midst of this fun is incredibly valuable testing. Stakeholders are blown away by the productivity of the testing, the volume of useful information produced, the quality of the bugs found, and the detail in everything from bug reports to status reports and quality criteria. While there is laughter and fun, there is hard work going on. I learned why this is so effective by reading Jane McGonigal’s work.
In Reality is Broken, Jane McGonigal describes Alternate Reality Games (ARGs). These are real-life activities that are gamified – they have a game-like structure applied to them. She mentions Chore Wars, and how gamifying something as mundane as household chores can turn it into a fun activity. Since cleaning the bathroom is a high-value activity in the game, she and her husband have to work hard to try to clean it before the other does. McGonigal explains that since there is a choice, and meaning attached to the task, people choose to do it under the mechanism of the game. It’s no longer that awful thing no one wants to do because it is unpleasant; framed within a game context, it is a highly sought-after quest or task to complete. You get points in the game, you get bragging rights, you get intrinsic rewards as well as the extrinsic clean bathroom. Amazing.
If we apply that to testing, how about using lessons from ARGs to gamify things like regression testing, or test data creation, or other maintenance tasks we don’t like doing? One way we can do this is to sprinkle these tasks within quests. You can only complete the quest by finishing up one of these less desirable tasks.
In Reality is Broken, McGonigal defines a game as having four traits: a goal, rules, a feedback system, and voluntary participation (p. 21). Working backwards, in exploratory testing a lot of what we do is voluntary, because testers have some degree of freedom to make decisions about what they are going to test, even if it is within narrow parameters of coverage. Furthermore, we can choose a different model of coverage to reach a goal. For example, I was working with an e-commerce testing team who were bored to death with testing the purchasing engine because they were following the same set of functional test scripts. To help them be more effective and to enjoy what they were doing, I introduced a new model of coverage to test the purchasing engine: user scenarios. Suddenly, they were engaged and interested, and they found bugs they had previously missed. I then helped them develop more models of coverage so that they could change their perspective and test the same thing, but with variation to keep them engaged and interested while still satisfying coverage requirements. As humans, we need to mix things up. Previously, they had no choice – they were told to execute the tests in the test case management system, and that was the end of it.
Feedback systems are often linked to bug reporting systems in testing. But I like to go beyond that. Bring in other people to test with you in pairs, trios or whatever combination to bring more ideas to the table. This isn’t duplicated testing, but a redoubling of brain power and effort. I also utilize instant messaging, IRC, and big visible charts to help encourage feedback across functional areas of teams.
Rules in testing are often related to what is dictated to us by managers, developers, and tradition. It boggles my mind how many so-called Agile programmers will demand their testers work in un-Agile ways, expecting them to create test plans, test cases and use test case management systems. When I ask the programmers if they would like to work that way, they usually say no. Well guess what, not many other homo sapiens like to work that way either. I prefer to have rules around approach. We have identified risks, and models of coverage to mitigate those risks, and we use people, tools and automation to help us reach our goals. Rather than count test cases and bugs, we rate our team on our ability to get great coverage and information that helps stakeholders make quality-related decisions.
Finally, a goal in testing needs to be project-specific. If you want to fail, just copy what you did last time on your test project. The problem with that is you are unaware of new risks or changes, and you’ll likely be blind to them. Every project has a goal, and we need a way to measure whether we did the right sort of work to help reach that goal. Rather than “run the regression tests, automate as many as possible, and if there is time, do other testing”, we have something specific that helps ensure we aren’t doing busy work, but creating value.
When it comes to quests, they can have this format as well: a goal, a feedback system, rules or parameters on where to test, and voluntary participation. As long as all the quests for a project are fulfilled, it doesn’t matter who did them.
It turns out that with my application of SBTM and Bach’s General Functionality and Stability Procedure, plus some zany fun and the use of technology to help socialize, report and record information, I was right next door to gamification. Using gamification as a guide, I hope to provide tools for others who also want to make testing effective and fun. A test quest is one option to try. Consider using avatars, fun names and anything that resonates with your team members to help make the activity more fun. Also consider rewards for difficult quests and tasks, such as a free meal, public kudos, or time off in lieu. Get creative and use as much or as little from the video game world as you like.
Some of my goals with test quests are:
- Enough structure to provide guidance to testers so they know where to focus efforts
- Not so much structure (like scripted test cases) that personal choice, creativity and exploration are discouraged or forbidden
- Guidance and structure that is lightweight, so it doesn’t become a maintenance burden the way our scripted regression test cases do (both manual and automated)
- Testers get a sense of purpose and meaning in their work, and a sense of completion from finishing the set of tasks in a quest
- Utilize tools (automated tests, automated tasks, simulators, high volume test automation, monitoring and reporting) to help boost the power of the testers and be more efficient and effective, and to do things no human could do on their own
- Encourage collaboration and sharing information so that testers can provide feedback to other project team members on the quality of the products, but also get feedback on their own work and approaches
- Encourage test teams to use multiple models of coverage (changing perspectives, using different testing techniques and tools) on a project instead of thinking of coverage as a singular thing
- Utilize an effective gaming structure to augment reality and encourage people to have fun working hard at testing activities
I am encouraging testing teams to use this as a structure for organizing test execution to help make testing more engaging and fun. Feel free to add as many (or as few) elements from video game quests as you see fit, and alter them to match the unique personalities and goals of the people on your team. Or study quests and analyze how you organize testing work for yourself and your teams. Does your structure encourage people to have fun and work hard at accomplishing something great? If not, you might learn something from how others have managed to get people to work hard in games.
Happy questing!