Agile Testing book club: Let them feel pain

This is the second part in a series of exercises where I highlight one detail from a chapter or two of Agile Testing by Janet Gregory and Lisa Crispin. Part one of the series can be found here. This installment comes from Chapter 3.

Let them feel pain

This chapter is largely about making the transition into agile workflows, and the growing pains that can come from that. I’ve mentioned before on this blog that when I went through that transition, I worried about maintaining the high standard of testing that we had in place. The book is coming from a slightly different angle, of trying to overcome reluctance to introduce good quality practices, but the idea is the same. This is the sentence that stuck out most to me in the whole chapter:

Let them feel pain: Sometimes you just have to watch the train wreck.

I did eventually learn this lesson, though it took probably six months of struggling against the tide, plus a tutorial session on metrics by Mike Sowers at STAR Canada, before it really sank in. Metrics are a bit of a bugaboo in testing these days, but just hold your breath and power through with me for a second. Mike was going over the idea of “Defect Detection Percentage”, which basically asks what percentage of bugs you caught before releasing. The useful insight was that you can probably push it arbitrarily high, so that you catch 99% of bugs before release, but only if you’re willing to spend the time to do it. On the other end, maybe your customers are happy with a few bugs if it means they get access to new features sooner, in which case you can afford to limit the kinds of testing you do. If you maintain an 80% defect detection percentage and still keep your customers happy, it’s not worth the extra testing time it’d take to push that number higher. Yes, this all depends on how you count bugs, and happiness, and which bugs you’re finding, and maybe you can test better instead of faster, but none of that is the point. This is:

If you drop some aspect of testing and the end result is the same, then it’s probably not worth the effort to do it in the first place.

There are dangers here, of course. You don’t want to drop one kind of testing just because it takes a lot of time if it’s covering a real risk. People will be happy only until that risk manifests itself as a nasty failure in the product. As ever, this is an exercise of balancing concerns.
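For the analytically minded, the metric itself is just a ratio. Here is a minimal sketch of the calculation, with entirely hypothetical bug counts (the function name and numbers are mine, not from the book or Mike’s tutorial):

```python
def defect_detection_percentage(found_before_release: int, found_after_release: int) -> float:
    """Share of all known defects that were caught before release."""
    total = found_before_release + found_after_release
    if total == 0:
        return 0.0  # no known defects; nothing to measure
    return 100.0 * found_before_release / total

# Hypothetical release: 40 bugs caught in testing, 10 later reported from production.
ddp = defect_detection_percentage(40, 10)
print(f"DDP: {ddp:.0f}%")  # prints "DDP: 80%"
```

The point of the post stands out even in this toy form: pushing that 80% toward 99% means growing `found_before_release`, and the cost of that extra testing may not buy you any extra customer happiness.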

Being in a bad spot

Part of why this idea stuck with me at the time was that the rocky transition I was going through left me in a pretty bad mental space. I eventually found myself thinking, “Fine, if nobody else cares about following these established test processes like I do, then let everybody else stop doing them and we’ll see how you like it when nothing works anymore.”

This is the cynical way of reading the advice from Janet and Lisa to “let them feel pain” and sit back to “watch the train wreck”. In the wrong work environment you can end up reaching the same conclusion from a place of spite, much like I did. But it doesn’t have to come from a negative place. Mike framed it in terms of metrics and balancing cost against benefit in a way that provided some clarity for an analytical mind like mine, and I think Lisa and Janet are being deliberately a bit facetious here. Now that I’m working in a much more positive space (mentally and corporately) I have a much better way of interpreting this idea: the best motivation for people to change their practices is for them to feel a reason to change them.

What actually happened for us when we started to drop our old test processes was that everything was more or less fine. The official number of bugs recorded went down overall, but I suspect that was as much a consequence of the change in our reporting habits in small agile teams as anything else. We definitely still pushed bugs into production, but they weren’t dramatically worse than before. What I do know for sure is that nobody came running to me saying “Greg you were right all along, we’re putting too many bugs into production, please save us!”

If that had happened, then great, there would be motivation to change something. But it didn’t, so we turned our attention to other things entirely.

Introducing change (and when not to)

When thinking about this, I kept coming back to two other passages that I had highlighted earlier in the same chapter:

If you are a tester in an organization that has no effective quality philosophy, you probably struggle to get good quality practices accepted. The agile approach will provide you with a mechanism for introducing good quality-oriented practices.

and also

If you’re a tester who is pushing for the team to implement continuous integration, but the programmers simply refuse to try, you’re in a bad spot.

Agile might provide a way of introducing new processes, but that doesn’t mean anybody is going to want to embrace or even try them. If you have to twist some arms to get a commitment to try something new for even one sprint, and it doesn’t have a positive (or at least neutral) impact, you had better be prepared to let the team drop it (or to convince them that the effects need more time to be seen). If everybody already feels that the deployment process is going swimmingly, why do you need to introduce continuous integration at all?

Deciding not to keep something new might be easy, but when already established test processes were on the line, this was a very hard thing for me to do. In a lot of ways it was like being forced to admit that we had been wrong about what testing we needed to be doing, even though all of it had been justified at one time or another. We had to recognize that certain tests were generating pain for the team, and the only way we could tell whether that pain was really worth it was to drop them and see what happened.

The takeaway

Today I’m in a much different place. I’m no longer coping with the loss or fragmentation of huge and well established test processes, but rather looking at establishing new processes on my team and those we work with. As tempting as it is to latch onto various testing ideas and “best” practices I hear about, it’s likely wasted effort if I don’t first ask “where are we feeling the most pain?”

Agile Testing book club: Everyone is a Tester

If you’ve even dipped your toe into the online testing community, there’s a good chance that you’ve heard Agile Testing by Janet Gregory and Lisa Crispin recommended as a read. A couple weeks ago I got my hands on a copy and thought it would be a useful exercise to record the highlights of what I learn along the way. There is a lot in here, and I can tell that what resonates will be different depending on where I am mentally at the time. What I highlight will no doubt be just one facet of each chapter, and a different one from what someone else might read.

So, my main highlight from Chapters 1 and 2:

everyone is a tester

Janet and Lisa immediately made an interesting distinction that I would never have thought of before: they don’t use “developer” to refer to the people writing the application code, because everybody on an agile team is contributing to the development. I really like emphasizing this. I’m currently in an environment where we have “developers” writing code and “QA” people doing tests, and even though we’re all working together in an agile way, I can see how those labels can create a divide where there should not be one.

Similarly surprising and refreshing was this:

Some folks who are new to agile perceive it as all about speed. The fact is, it’s all about quality—and if it’s not, we question whether it’s really an “agile” team. (page 16)

The first time I encountered Agile, it was positioned by managers as being all about speed. Project managers (as they were still called) pitched it as delivering something of value sooner than would otherwise be possible, which is still just emphasizing speed in a different way. If you had asked me, I probably would have said it was about being agile (i.e., able to adapt to change), because that was the aspect that made it worth adopting compared to the environment we worked in before. Saying it’s all about quality? That was new to me, but it made sense immediately, and I love it. Delivering smaller bits sooner is what lets you adapt and change based on feedback, sure, but you do that so you end up with something that everyone is happier with. All of that is about quality.

So now, if everybody on the team should be counted as a developer, and agile is all about delivering quality, it makes perfect sense that the main drive for everybody on the team should be delivering that quality. The next step is obvious: “Everyone on an agile team is a tester.” Everyone on the team is a developer and everyone on the team is a tester. That includes the customer, the business analysts, the product owners, everybody. Testing has to be everybody’s responsibility for agile-as-quality to work. Otherwise how do you judge the quality of what you’re making? (Yes, the customer might be the final judge of what quality means to them, but they can’t be the only tester any more than a tester can be.)

Now, the trick is to take that understanding and help a team to internalize it.