My first game of TestSphere

Today (as I write this; last week as it is published) I had my first experience playing TestSphere. I’ve had a deck for ages but only recently suggested trying to play it with the QA community of practice in my department. Going from never having played it at all to facilitating a session with a whole group was quite a leap and I wasn’t at all sure how it would go. Here are some of my observations about the experience.

Test sphere cards laid out on a table

Seven thoughts about TestSphere

1. Ten’s a crowd: The weekly meeting of the group usually has anywhere from 4 to 16 people attending, with the typical number around 12. I planned on playing the standard game, which the box says is best for 4 to 8 people. I was prepared to split us into two groups if needed, but in the end tried playing with the full group of 10 that came that day.

2. One for all or a bunch for each: The instructions say to reveal one or more cards depending on the experience level of the group, though it’s not clear to me which way those should correlate. I decided to go with one card of each colour so there would be a variety of types of things to think about. This turned out to be exactly the wrong number. Though I deliberately put us at a small table, people still had to pick up cards from the middle to read them. As soon as we started, 5 people were reading cards and 5 people were doing nothing. Should I do this again, I would try one extreme or the other: 1 or 2 cards that the whole group could focus on together, or 3-5 cards each to think about independently and have people play cards from their own hand. In the latter case I can then imagine combo play (“I have a card that applies to that story!” or “I have an experience with that too, plus this other concept from my hand”) but let’s not get carried away.

3. Combining cards: Nobody attempted to combine multiple cards into a single story, which I thought would be part of the fun of trying to “win”. This may have just been because people were passing cards around one at a time rather than looking at them as a group. I suspect it would have been easier to combine cards with fewer people or with a group that was already familiar with the cards.

4. Minimalism: We didn’t make use of most of the text on the cards. The examples are great and really show the amount of good work Beren Van Daele and the MoT put into designing the deck, but it was just too much to make use of in this format. While the extra text is useful to fully understand the concept, a minimal deck with just the concept, slogan, and a simple graphic might be less intimidating. (The Easter egg here is that Minimalism is one of the cards we talked about in our group today; going back and reading the card again I’m really torn by this since the examples really do illuminate it in a way the slogan alone doesn’t, and the three are so different from each other that even limiting it to one would not be quite the same.)

5. Waiting patiently: The group naturally developed a pattern of picking up new cards as soon as they came up and holding on to them until it was their turn to tell the story. I wouldn’t say that I expected it to be a raucous fight for cards and who got to tell their story first, but I didn’t expect it to be so calm and orderly either. Once or twice this resulted in someone who had picked up a card just to read it seemingly getting stuck into telling a story about that card whether they meant to or not.

6. Everybody had a story: The energy of the game varied quite a bit depending on who was speaking. Some people are just better storytellers or more comfortable with public speaking than others. Nonetheless, I was quite happy that nobody dominated the conversation too much, and by the end everybody had shared at least once. I had laid out a rule at the beginning that if two people had a story to share we would defer to whoever hadn’t spoken yet, but we only had to invoke it once.

7. My QA is not your QA: Several times I was surprised by the stories people told given the card they picked up, often struggling to see what the connection was. To me this illustrates how differently people think, which would keep this interesting to play with another group of people. Not only that, but it suggests the cards would work just as easily outside of QA circles. At one point we had only one person left who hadn’t collected any cards yet. “I’m a developer,” he said, “I only have developer stories.” But when prompted he was able to pick up a card just as easily as anybody else.

The forgotten debrief

In the end, we shared about 15 stories in 50 minutes. Overall I think it was a good experience, and it was a neat way to hear more about everybody’s experiences on other teams. Unfortunately I didn’t manage time well and we got kicked out of the meeting room before I had a chance to debrief with anybody about their experience with the game. Some ideas for focus questions I had jotted down (roughly trying to follow an ORID model) were:

  1. What are some of the concepts and examples that came up on the cards?
  2. Were there concepts someone else talked about that you also had a story for? Were any concepts totally new to you?
  3. Did anything surprise you about the experiences others shared? What did you learn about someone that you didn’t know before? What did or didn’t work well about this experience?

and finally:

  4. Would you play again?

Testing like you’re laughing

I was in a brainstorming meeting recently. The woman running the meeting started setting up an activity by dividing the board into several sections. In one, she wrote “Lessons Learned” and in a second she wrote “Problem Areas”. The idea was that we’d each come up with a few ideas to put into each category and then discuss.

I immediately asked, “What if one of the lessons I learned is that we have a problem area?”

To her credit, she gave a perfectly thoughtful and reasonable answer about how to differentiate the two categories. The details don’t matter; what was important was that others in the room started joking that, as the “only QA” in the room, I immediately started testing her activity and trying to break it. This was all in good fun, and I joked along saying “Sorry, I didn’t mean to be hard on you right away.”

“You’re the QA,” she said, “It’s your job!”

This tickled an idea in the back of my mind but it didn’t come to me right away. Later that day, though, I realized what the answer to that should have been:

“As long as I’m QA-ing with you, not at you.”

Footnote: There’s nothing significant in the use of “QA” over “testing” here; I’m using “QA” only because that’s the lingo used where I am. It works just as well if you replace “QA” with “tester” and “QA-ing” with “testing”, whether or not you care about the difference.

Agile Testing book club: Everyone is a Tester

If you’ve even dipped your toe into the online testing community, there’s a good chance that you’ve heard Agile Testing by Janet Gregory and Lisa Crispin recommended. A couple weeks ago I got my hands on a copy and thought it would be a useful exercise to record the highlights of what I learn along the way. There is a lot in here, and I can tell that what resonates will be different depending on where I am mentally at the time. What I highlight will no doubt be just one facet of each chapter, and a different one from what someone else might read.

So, my main highlight from Chapters 1 and 2:

everyone is a tester

Janet and Lisa immediately made an interesting distinction that I would never have thought of before: they don’t use “developer” to refer to the people writing the application code, because everybody on an agile team is contributing to the development. I really like emphasizing this. I’m currently in an environment where we have “developers” writing code and “QA” people doing tests, and even though we’re all working together in an agile way, I can see how those labels can create a divide where there should not be one.

Similarly surprising and refreshing was this:

Some folks who are new to agile perceive it as all about speed. The fact is, it’s all about quality—and if it’s not, we question whether it’s really an “agile” team. (page 16)

The first time I encountered Agile, it was positioned by managers as being all about speed. Project managers (as they were still called) positioned it as all about delivering something of value sooner than would otherwise be possible, which is still just emphasizing speed in a different way. If I had been asked myself, I probably would have said it was about being agile (i.e., able to adapt to change), because that was the aspect of it that made it worth adopting compared to the environment we worked in before. Saying it’s all about quality? That was new to me, but it made sense immediately, and I love it. Delivering smaller bits sooner is what lets you adapt and change based on feedback, sure, but you do that so you end up with something that everyone is happier with. All of that is about quality.

So now, if everybody on the team should be counted as a developer, and everything about agile is about delivering quality, it makes perfect sense that the main drive for everybody on the team should be delivering that quality. The next step is obvious: “Everyone on an agile team is a tester.” Everyone on the team is a developer and everyone on the team is a tester. That includes the customer, the business analysts, the product owners, everybody. Testing has to be everybody’s responsibility for agile-as-quality to work. Otherwise how do you judge the quality of what you’re making? (Yes, the customer might be the final judge of what quality means to them, but they can’t be the only tester any more than a tester can be.)

Now, the trick is to take that understanding and help a team to internalize it.

What the Bug!? (An attempt at knowledge sharing, two ways)

When I was making the transition from waterfall style projects to agile teams in a previous company, one of the main things I struggled with was the loss of the testing team as we all became generic “software engineers”. We all still did the same testing tasks, but none of us were “testers” any more. There were a lot of positive effects from the change, but I kept feeling like without dedicated testers focused on improving our testing craft, we’d stagnate.

What I was missing, as I eventually realized, was a testing community. A recent episode of the AB Testing podcast talked all about building a community of testers, which brought all of this back to mind. A few of the ideas Alan and Brent talked about I had actually tried at the time. One in particular was the idea of highlighting major fails of the week in a newsletter, even offering prizes for the “winner”. Back in 2016 I heard a similar idea at a STARCanada talk, where the engineering group at AppFolio would send an email to everybody in the company for every bug they found describing what was learned, again emphasizing that finding these bugs was a positive thing, not a blame game. (Sorry I can’t find now who gave that talk; if it was you let me know!)

The reason the idea stuck with me at the time was primarily because I had started to notice that as our newly agile teams specialized in subsets of what was previously a monolithic product, we started to lose visibility on bugs that other teams ran into. Different teams were getting bitten by the same issues. It didn’t help that the code base was also old enough that newer people would run into old bugs, spend potentially hours debugging them, and then hear “oh yeah, that’s a known problem.” (My response to that was to scream “It isn’t known if people don’t know it!” silently to myself.)

Here’s what I tried to do about it:

What The Bug!?
Honestly, the part of the idea I’m most proud of might be the name

I wanted teams to start talking more about the bugs they found, both so that others could learn from them and so that we could all tune our spidey senses to the sorts of issues that were actually important. There wasn’t much appetite for an email newsletter—people didn’t seem to read the newsletters the company already had—but we ended up trying two alternatives, one of which was pretty successful and one of which really wasn’t.

Building a knowledge base

The first idea was to solicit short and easily digestible details about every production bug that got logged into our bug tracker. Anybody who logged a bug would get an email asking them to answer three questions. The key was that the answers had to be short—think one tweet—and written at a level that any developer in the company would be able to understand. Bonus points for memorable descriptions. The questions were roughly:

  1. What was the bug?
  2. What was important about it?
  3. What one lesson can we learn from it?

The answers were linked back to the original ticket and tagged with a bunch of metadata like which team found it, the part of the system it was found in, what language the code was in, and any other keywords that would make it easy to find again. The idea was that if I was going to start writing something in a new language or working on a new part of the system, I could go look up the related tags and immediately find a list of easily digestible problems that I should stay alert for. I think it was an okay idea, but there were issues.
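For the curious, the shape of that knowledge base is easy to sketch in code. This is only an illustration of the idea, not what we actually built (ours lived alongside JIRA); all of the names here are hypothetical.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class BugEntry:
    ticket: str            # link back to the original tracker ticket
    what: str              # 1. What was the bug?
    why_it_mattered: str   # 2. What was important about it?
    lesson: str            # 3. What one lesson can we learn from it?
    tags: frozenset        # team, subsystem, language, other keywords

class BugKnowledgeBase:
    def __init__(self):
        # inverted index: tag -> entries carrying that tag
        self._by_tag = defaultdict(list)

    def add(self, entry: BugEntry):
        for tag in entry.tags:
            self._by_tag[tag].append(entry)

    def lookup(self, *tags):
        """Return entries matching every requested tag."""
        candidates = self._by_tag.get(tags[0], [])
        return [e for e in candidates if all(t in e.tags for t in tags)]

# Example: starting work on a new part of the system, look up its tags
kb = BugKnowledgeBase()
kb.add(BugEntry(
    ticket="BUG-123",
    what="Date parser crashed on Feb 29",
    why_it_mattered="Blocked every leap-day billing run",
    lesson="Always test boundary dates",
    tags=frozenset({"python", "billing"}),
))
print([e.ticket for e in kb.lookup("python", "billing")])
```

The inverted index is the whole trick: one lookup by the tag you care about, then a filter for any extra tags, which is exactly the “find all the billing bugs before I touch billing” workflow described above.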

First problem: People were pretty bad at writing descriptions of bugs that were short, but also useful and interesting. It not only took a lot of creativity, but in order to do it well you also really had to examine what was important about the bug in the first place. The example I used as a bad answer to the second question was “This caused an error every Tuesday”. What caused what kind of error, and WHY TUESDAY!? This was especially problematic for the third question, where often the answers that came back were “testing is important” or “we’ll test this next time”. True, but shallow. What I was really hoping for would have been “There are different kinds of Unicode characters, so always consider testing different character planes, byte lengths, and reserved code points”. To really make the knowledge base as useful as it could have been, it would have needed committed editors who would talk to the people involved and craft a really good summary with them.
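That Unicode lesson is concrete enough to demonstrate. A minimal sketch (mine, not from the original summaries) showing why “a character” is not one thing: code-point counts and UTF-8 byte lengths diverge as you move up through the planes, which is exactly the kind of thing worth a test case.

```python
# Characters from different Unicode ranges take different numbers of
# bytes in UTF-8, even though Python counts each as one code point.
samples = [
    ("A", "Basic Latin"),
    ("é", "Latin-1 Supplement"),
    ("あ", "Basic Multilingual Plane (Hiragana)"),
    ("😀", "Supplementary Plane (emoji)"),
]
for ch, plane in samples:
    print(f"{ch!r}: {len(ch)} code point, "
          f"{len(ch.encode('utf-8'))} UTF-8 bytes ({plane})")
```

Any code that assumes one character equals one byte (or, in UTF-16-based languages, one code unit) will pass with “A” and fail with “😀”, which is why testing across planes matters.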

Second problem: The response rate was pretty lousy. It might have been that targeting every production bug was just too much. Not everybody is going to see something interesting in every bug, and not everybody is as interested in learning from them. That was part of the culture that I wanted to change—I wanted everybody else to be as excited about these bugs as I was!—but it wasn’t going to happen overnight.

Third problem: It might seem minor, but the timing of the email prompt coming the day after the bug was logged was often just too soon to have digested what the real lesson learned was. This turned up as a problem as I asked around about why people weren’t taking part. They just didn’t have the answers yet.

All of this created a chicken-and-egg problem. Until people saw the value of this project, saw interesting summaries, and got excited to contribute their own experiences, we wouldn’t get the content we needed to build that interest or excitement. And, in all honesty, though there was a conscious effort with this to make an accessible library of bugs compared to the technical JIRA tickets, we were still basically asking people to log a bug in one database every time they logged a bug in another database. We needed something more active and engaging.

Welcome to the What The Bug!? Show

At the time we already had a weekly all-hands meeting every Friday afternoon where anybody could contribute a segment to demo something, talk about something interesting, or anything else they wanted. I was doing short segments on quality and testing topics every few weeks to try to promote testing in general, but it was largely a one-man show. A fellow tester/developer that I was working with on the What The Bug!? knowledge base had the idea to take just a couple minutes each week to present our favourite bugs of the week.

Turning a passive library of bugs into a weekly road show was a big success. We were basically getting the benefit of a bug-of-the-week newsletter with the added bonus of an already established live audience. Again, the branding helped to sell it, and because the two of us both embraced the idea that any production bug could be turned into a 2-minute elevator pitch for something interesting to learn, we were able to make pretty fun presentations. We at least got laughs and had people thinking about testing if only for 5 minutes, but the real highlight for me was that afterwards I had people come up to me and say “you know, I found something last week that would make a good segment next time.”

This was possible in part because she and I had both spent that time soliciting user-friendly descriptions from people logging bugs, so we knew what had been going on across all the teams for the last couple weeks. We had a lot to choose from for the feature. It would have been harder to do without that knowledge base, being limited to only the bugs we knew from the teams we worked directly with. I suspect, though, that once the weekly What The Bug!? segments built up enough momentum that people took on presenting their own bugs, the need for the email prompts and the knowledge base would fade away. An archive of the featured bugs could still be a useful resource for new people coming on board, but it would no longer be the primary driver.

Where things are now

I left the company shortly after starting What The Bug!?, but I recently had a chance to check in with an old colleague and inquire about where these two manifestations of the idea ended up. Unsurprisingly in retrospect, the knowledge base has largely fizzled. The combination of asking people to volunteer extra writing work and the low ROI of a poorly written archive doomed it pretty early on. On the flip side, bugs are still a regular topic at those weekly meetings, although I’m sorry to report that they dropped the What The Bug!? branding. If anybody else wants to try something similar, feel free to use the name; all I ask for is a hat tip. (A quick google of the phrase only turns up some entomologists and an edible insect company, but if someone else had the same idea in the context of software, do let me know and I’ll pass the hat tip on.)