Rethinking velocity

I’ve been thinking about the concept of “velocity” in software development over the last few days, in response to a couple of cases recently where I’ve seen people express dislike for it. I can understand why, and the more I think about it the more I think that the concept does more harm than good.

When I was first introduced to the concept it was with the analogy of running a race. To win, a runner wants to increase their velocity, running the same distance in a shorter amount of time. Even though the distance is the same each time they run the race, with practice the runner can complete it faster and faster.

The distance run, in the software development analogy, is the feature completed. In Scrum, velocity is the measure of how many story points a team completes in a sprint. Individual pieces of work are sized by their “complexity”, so with practice, a team should be able to increase their velocity by finishing work of a given complexity in less time. I have trouble with this first because story points are problematic at best, so any velocity you try to calculate will be easily manipulated. Since I’ve gotten into trouble with the Scrum word police before, I’m going to put that aside for a moment and say that the units you use don’t matter for what I’m talking about.

It should be fair to say that increasing velocity as Scrum defines it is about being able to do more complex work within a sprint without actually doing more work (more time, more effort), because the team gets better at doing that work. (This works both for a larger amount of work at a fixed complexity and for a sprint’s worth of work that is more complex than could have been done in previous sprints.) Without worrying about some nebulous “points”, the concept is still about being able to do more than you could before in a fixed amount of time.

But that’s not what people actually hear when you say we need to “increase velocity”.

Rather, it feels like being asked to do the work faster and faster. Put the feature factory at full steam! You need to work faster, you need to get more done, you need to be able to finish any feature in less than two weeks. Asking how you can increase velocity doesn’t ask “how can we make this work easier?” It asks, “why is this taking so long?” It feels like a judgement, and so we react negatively to it.

While it certainly does make sense to try to make repeated work easier with each iteration, I don’t think that should be the goal of a team. The point of being agile as I’ve come to understand it (and I’ll go with a small “a” here again to avoid the language police) is to be flexible to change by encouraging shorter feedback cycles, which itself is only possible by delivering incrementally to customers earlier than if we worked in large process-heavy single-delivery projects.

Building working-ish versions of a widget and delivering incremental improvements more often might take longer to get to the finished widget, but with the constant corrections along the way the end result should actually be better than it otherwise would have been. And, of course, if the earlier iterations can meet some level of the customer’s need, then they start getting value far sooner in the process as well. The complexity of the widget doesn’t change, but I’d be happy to take the lower velocity for a higher quality product.

I’m bringing it back to smaller increments and getting feedback because one of the conversations that led to this thinking was about whether putting our products in front of users sooner was the same as asking for increased velocity. Specifically, I said “If you aren’t putting your designs in front of users, you’re wasting your time.” In a sense, I am asking for something to be faster, and going faster means velocity, so these concepts get conflated.

The “velocity” I care about isn’t the number of points done in a given time frame (or number of stories, or number of any kind of deliverable). What I care about is: how many feedback points did I get in that time? How many times did we check our assumptions? How sure are we that the work we’ve been doing is actually going to provide the value needed? Maybe “feedback frequency” is what we should be talking about.

[Image: A straight line from start to finish for a completed widget with feedback at the start and end, vs. a looping line with seven points of feedback that takes longer to get to the end.]
And this is generously assuming you have a good idea of what needs to be built in the first place.

Importantly, I’m not necessarily talking about getting feedback on a completed (or prototype) feature delivered to production. Much like I argued that you can demo things that aren’t done, there is information to be gained at every stage of development, from initial idea through design to prototypes and final polish. I’ve always been an information junkie, so I see value in any information about the outside world, be it anecdotal oddities or huge statistical models of the behaviours tracked in your app. Even just making observations about the world, learning about your intended users’ needs before you know what to offer them, feeds into this category. Too often this happens only once at the outset, and a second time when all is said and done if you’re lucky. I’m not well versed in the design and user experience side of things yet, but I wager even the big-picture, blue-sky exploration we might want to do can still be checked against the outside world more often than most people think.

Much like “agile” and “automation”, the word “velocity” itself has become a distraction. People associate it with the sense of wanting to do the same thing faster and faster. What I actually want is to do things in smaller chunks, more often. Higher-frequency adjustments to remain agile and build better products, not just rushing for the finish line.

Testability as observability and the Accessibility Object Model

I attended a talk today by Rob Dodson on some proposals for the Accessibility Object Model that are trying to add APIs for web developers to more easily manipulate accessibility features of their apps and pages. Rob went through quite a few examples of the benefits the proposed APIs would bring, from simple type checking to defining semantics of custom elements and user actions. Unsurprisingly, the one use case that stuck out for me was making the accessibility layer testable.

In defence of time over story points

I have to admit, there was a time when I was totally on board with estimating work in “story points”. Briefly I was the resident point-apologist around town, explaining metaphors about how points are like the distance of a race that people complete in different times. These days, while estimating complexity has its uses, I’m coming to appreciate those old-fashioned time estimates.

Story points are overrated. Here are a few of the reasons why I think so. Strap yourselves in, this is a bit of a rant. But don’t worry, I’ll hedge at the end.

The scale is arbitrary and unintuitive

How do you measure complexity? What units do you use? Can you count the number of requirements, the acceptance criteria, the number of changes, the smelliness of the code to be changed, the number of test cases required, or the temperature of the room after the developers have debated the best implementation?

To avoid that question, story points use an arbitrary scale with arbitrary increments. It could be the Fibonacci sequence, powers of two, or just numbers 1 through 5. That itself is not necessarily a problem — Fahrenheit and Celsius are both arbitrary scales that measure something objective — but if you ask 10 developers what a “1” means you’ll get zero answers if they haven’t used points yet and 20 answers 6 months later.

I don’t know anybody who has an intuition for estimating “complexity” because there’s no scale for it. There’s nothing to check it against. Meanwhile, we’ve all been developing an intuition for time ever since we started asking “are we there yet?” from the back of the car or complaining that it wasn’t late enough for bedtime.

People claim that you can build your scale by taking the simplest task as a “1” and going from there. But complexity doesn’t scale like that. What’s twice as complicated as, say, changing a configuration value? Even if you compare tickets being estimated with previous ones, you’re never going to place them in an ordered list (even if binned) of all previous tickets. You’re guaranteed to have some that are more “complex” than others yet rated at lower points, because you were feeling confident that day or didn’t have a full picture of the work. (Though if you do try this, it can give you the side benefit of questioning whether those old tickets really deserve the points they got.)

It may not be impossible to get a group of people to come to a common intuition around estimating complexity, but it sure takes a lot longer than agreeing on how long a day or a week is. Even if you did reach that common understanding, nobody outside the team will understand it.

Points aren’t what people actually care about

People, whether the business or dependent teams, need to schedule things. If we want to have goals and try to reach them, we have to have some idea of how much we have to do to get there and how much time it will take to do that work. If someone asks “when can we start work on feature B” and you say “well feature A has 16 points”, their next question is “OK, and how long will that take?” or “and when will it be done?” Points don’t answer either question, and nobody is going to be happy if you tell them the question can’t be answered.

In practice (at least in my experience) people use time anyway. “It’ll only take an hour so I’m giving it one point”. “I’d want to spend a week on this so let’s give it 8 points.” When someone says “This is more complicated so we better give it more points” it’s because they’ll need more time to do it!

Maybe I care about complexity because complexity breeds risk and I’ll need to be more careful testing it. That’s fair, and a decent reason for asking the question, but it also just means you need more time to test it. Complexity is certainly one dimension of risk, but it isn’t the whole story (the impact and probability of risks manifesting are others).

Even the whole premise of points, to be able to measure the velocity of a team, admits that time is the important factor. Velocity matters because it tells you how much work you can reasonably put into your sprint. But given a sprint length you already know how many hours you can fit into a sprint. What’s the point of beating around the bush about it?

Points don’t provide feedback

Time has built-in feedback that points can’t provide. That’ll take me less than a day, I say. Two days later, we have a problem.

Meanwhile I say something is 16 story points. Two days later it isn’t done… do I care? Am I running behind? What about 4 weeks later? Was this really a 16 point story or not? Oh, actually, someone was expecting it last Thursday? That pesky fourth dimension again!

Points don’t avoid uncertainty

I once heard someone claim that one benefit of story points is that they don’t change when something unexpected comes up. In one sense that’s true, but only if there’s no feedback on the actual value of points. Counterexamples are about as easy to find as stories themselves.

Two systems interact with each other in a way the team didn’t realize. Someone depends on the legacy behaviour so you need to add a migration plan. The library that was going to make the implementation a single line has a bug in it. Someone forgot to mention a crucial requirement. There are new stakeholders that need to be looped in. Internet Explorer is a special snowflake. The list goes on, and each new thing can make something more complex. If they don’t add complexity after you’ve assigned a number, what creates the complexity in the first place?

Sure, you try to figure out all aspects of the work as early as possible, maybe even before it gets to the point of estimating for a sprint. Bring in the three amigos! But all the work you do to nail down the “complexity” of a ticket isn’t anything special about “complexity” as a concept; it’s exactly the same kind of work you’d do to refine a time estimate. Neither one has a monopoly on certainty.

Points don’t represent work

One work ticket might require entering configurations for 100 clients for a feature we developed last sprint. It’s dead simple brainless work and there’s minimal risk beyond copy-paste errors that there are protections for anyway. Complexity? Nah, it’s one point, but I’ll spend the whole sprint doing it.

Another work ticket is replacing a legacy piece of code to support an upcoming batch of features. We know the old code tends to be buggy and we’ve been scared to touch it for years because of that. The new version is already designed but it’ll be tricky to plug in and test thoroughly to make sure we don’t break anything in the process. Not a big job—it can still be done in one sprint—but relatively high risk and complex. 20 points.

So wait, if both of those fit in one sprint, why do I care what the complexity is? There are real answers to that, but answering the question of how much work it is isn’t one of them. If you argue that those two examples should have similar complexity since they both take an entire sprint, then you’re already using time as the real estimator and I don’t need to convince you.

Points are easily manipulated

Like any metric, we must question how points can be manipulated and whether there’s incentive to do so.

In order to see an increase in velocity, you have to have a really well understood scale. The only way to calibrate that scale without using a measurable unit is to spend months “getting a feel for it”.

Now if you’re looking for ways to increase your velocity, the cheapest way to do that (deliberately or not) is guaranteed to be just assigning more points to things. Now that the team has been at this for a while, one might say, they can better estimate complexity. Fewer unknowns mean more knowns, which are more things to muddy the discussion and push up those complexity estimates. (Maybe you are estimating more accurately, but how can you actually know that?) Voilà: faster velocity, brought to you wholly by the arbitrary, immeasurable, and subjective nature of points.

Let’s say we avoid that trap, and we actually are getting better at the work we’re doing. Something that was really difficult six months ago can be handled pretty quickly now without really thinking about it. Is that ticket still as complex as it was six months ago? If the work hasn’t changed it should be, but it sure won’t feel as complex. So is your instinct going to be to put the same points on it? Velocity stagnates even though you’re getting more done. Not only can velocity be manipulated through malice, it doesn’t even correlate with the thing you want to measure!

It’s a feature, not a bug

One argument I still anticipate in favour of points is that their incomprehensibility is actually a feature, not a bug. They’re arbitrary on purpose so that it’s harder for people outside the team to translate them into deadlines to be imposed onto that team. It’s a protection mechanism. A secret code among developers to protect their own sanity.

If that’s the excuse, then you’ve got a product management problem, not an estimation problem.

In fact it’s a difficulty with metrics, communication, and overzealous people generally, not something special about time. The further metrics get from the thing they measure, the more likely they are to be misused. Points, if anybody understood them, would be just as susceptible to that.

A final defence of complexity

As a replacement for estimating work in time, story points are an almost entirely useless concept that introduces more complexity than it estimates. There’s a lot of jumping through hoops and hand waving to make it look like you’re not estimating things in time anymore. I’d much rather deal in a quantity we actually have units for. I’m tempted to say save yourself the effort, except for one thing: trying to describe the complexity of proposed work is a useful tool for fleshing out what the work actually requires and for getting everybody on an equal footing in understanding that work. That part doesn’t go away, though the number you assign to it might as well. Just don’t pretend it’s more meaningful than hours on a clock.

Qualifying quantitative risk

Let’s start with quantifying qualitative risk first.

Ages ago I was under pressure from some upper management to justify timelines, and I found a lot of advice about using risk as a tool not only to help managers see what they’re getting from the time spent developing a feature (i.e., less risk) but also to help focus what testing you’re doing. This came hand in hand with a push to loosen up our very well defined test process, which came out of very similar pressure. I introduced the concept of a risk assessment matrix as a way of quantifying risk, and it turned out to be a vital tool for the team in planning our sprints.

Five by five

I can’t find the original reference I based my version on, because if you simply google “risk assessment matrix” you’ll find dozens of links describing the same idea. The basic concept is this:

  1. Rate the impact (or consequence) of something going wrong on a scale of 1 to 5, with 1 being effectively unnoticeable and 5 being catastrophic.
  2. Rate the likelihood (or probability) of something bad happening from 1 to 5, with 1 being very unlikely and 5 being almost certain.
  3. Multiply those together and you get a number that represents how risky it is on a scale from 1 to 25.

[Image: a 5×5 multiplication table, with low numbers labelled minimal risk and the highest numbers labelled critical risk.]
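The arithmetic itself is deliberately trivial. As a minimal sketch in Python (my own illustration, not code any team actually needs):

```python
def risk_score(impact: int, likelihood: int) -> int:
    """Combine impact and likelihood (each rated 1 to 5) into a 1-25 risk score."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must each be between 1 and 5")
    return impact * likelihood

# e.g. a moderate impact (3) that is quite likely to happen (4) scores 12
print(risk_score(3, 4))  # 12
```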

How many ambiguities and room for hand waving can you spot already?

Risk is not objective

One of the biggest problems with a system like this is that there’s a lot of room for interpreting what these scales mean. The numbers 1 to 5 are completely arbitrary so we have to attach some meaning to them. Even the Wikipedia article on risk matrices eschews numbers entirely, using instead qualitative markers laid out in a 5×5 look-up table.

The hardest part of this for me and the team was dealing with the fact that neither impact nor probability is the same for everybody. For impact, I used three different scales to illustrate how different people might react:

To someone working in operations:

  1. Well that’s annoying
  2. This isn’t great but at least it’s manageable
  3. At least things are only broken internally
  4. People are definitely going to notice something is wrong
  5. Everything is on fire!

To our clients:

  1. It’s ok if it doesn’t work, we’re not using it
  2. It works for pretty much everything except…
  3. I guess it’ll do but let’s make it better
  4. This doesn’t really do what I wanted
  5. This isn’t at all what I asked for

And to us, the developers:

  1. Let’s call this a “nice-to-have” and fix it when there’s time
  2. We’ll put this on the roadmap
  3. We’ll bump whatever was next and put it in the next sprint
  4. We need to get someone on this right away
  5. We need to put everything on this right now

You could probably also frame these as performance impact, functional impact, and project impact. Later iterations adjusted the scales a bit and put in more concrete examples; anything that resulted in lost data for a client, for example, was likely to fall into the maximum impact bucket.

Interestingly, in a recent talk Angie Jones extended the basic idea of a 5×5 to include a bunch of other qualities as a way of deciding whether a test is worth automating. In her scheme, she uses “how quickly would this be fixed” as one dimension of the value of a test, whereas I’m folding that into the impact on the development team. I hadn’t seen other variations of the 5×5 matrix when coming up with these scales, and to me the most direct way of making a developer feel the impact of a bug was to ask whether they’d have to work overtime to fix it.

Probability was difficult in its own way as well. We eventually adopted a scale with each bucket mapping to a ballpark percentage chance of a bug being noticed, but even a qualitative scale from “rare” through to “certain” misses a lot of nuance. How do you compare something that will certainly be noticed by only one client to something that has a low chance of manifesting for every client? I can’t say we ever solidified a good solution to this, but we got used to whatever our de facto scale was.

How testing factors in

We discussed the ratings we wanted to give each ticket on impact and probability of problems at the beginning of each sprint. These discussions would surface all kinds of potential bugs, known troublesome areas, unanswered questions, and ideas of what kind of testing needed to be done.

Inevitably, when somebody explained their reasoning for assigning a higher impact than someone else by raising a potential defect, someone else would say “oh, but that’s easy to test for.” This was great—everybody’s thinking about testing!—but it also created a tendency to downplay the risk. Since a lower-risk item calls for less thorough testing, we might not plan to do the testing required to justify the low risk. Because of that, we added a caveat to our estimates: we estimated what the risk would be if we did no testing beyond, effectively, turning the thing on.

With that in mind, a risk of 1 could mean that one quick manual test would be enough to send it out the door. The rare time something was rated as high as 20 or 25, I would have a litany of reasons sourced from the team as to why we were nervous about it and what we needed to do to mitigate that. That number assigned to “risk” at the end of the day became a useful barometer for whether the amount of testing we planned to do was reasonable.

Beyond testing

Doing this kind of risk assessment had positive effects outside of calibrating our testing. The more integrated testing and development became, the clearer it was that management couldn’t just blame testing for long timelines on some of these features. I deliberately worked this into how I wanted the risk scale to be interpreted, so that it spoke to both design and testing:

Risk   Interpretation
1-4    Minimal: Can always improve later; just test the basics.
5-10   Moderate: Use a solution that works over in-depth studies, test realistic edge cases, and keep estimates lean.
12-16  Serious: Careful design, detailed testing on edges and corners, and detailed estimates for any testing beyond the norm.
20-25  Critical: In-depth studies, specialized testing, and conservative estimates.
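Looking up the band for a score could be as simple as the sketch below (the cut-offs are copied from the table; note that products of two 1-5 ratings naturally skip values like 11 and 17-19):

```python
# Map a 1-25 risk score to the interpretation bands from the table above.
RISK_BANDS = [
    (4, "Minimal"),    # 1-4
    (10, "Moderate"),  # 5-10
    (16, "Serious"),   # 12-16 (11 can't occur as a product of 1-5 ratings)
    (25, "Critical"),  # 20-25 (17-19 can't occur either)
]

def risk_band(score: int) -> str:
    for upper_bound, label in RISK_BANDS:
        if score <= upper_bound:
            return label
    raise ValueError("risk score must be between 1 and 25")

print(risk_band(12))  # Serious
```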

These boundaries are always fuzzy, of course, and this whole thing has to be evaluated in context. Going back to Angie Jones’s talk, she uses four of these 5×5 grids to get a score out of 100 for whether a test should be automated, and the full range from 25-75 only answers that question with “possibly”. I really like how she uses this kind of system as a comparison against her “gut check”, and my team used this in much the same way.
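As a sketch of how her scheme adds up (four 5×5 grids, each contributing a 1-25 score; the cut-offs for “possibly” come from the 25-75 range she mentions, while everything else here is my own reading of the talk):

```python
# Sum four 5x5 grid scores (each 1-25) into a score out of 100.
def should_automate(grid_scores: list[int]) -> str:
    if len(grid_scores) != 4 or any(not 1 <= s <= 25 for s in grid_scores):
        raise ValueError("expected four 5x5 grid scores, each between 1 and 25")
    total = sum(grid_scores)
    if total < 25:
        return f"{total}/100: probably not worth automating"
    if total <= 75:
        return f"{total}/100: possibly worth automating (gut check!)"
    return f"{total}/100: automate it"

print(should_automate([12, 20, 8, 15]))  # 55/100: possibly worth automating (gut check!)
```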

The end result

Although I did all kinds of fun stuff with comparing these risk estimates against the story points we put on them, the total time spent on the ticket, and whether we were spending a reasonable ratio of time on test to development, none of that ever saw practical use beyond “hmmm, that’s kind of interesting” or “yeah that ticket went nuts”. Even though I adopted this tool as a way of responding to pressure from management to justify timelines, they (thankfully) rarely ended up asking for these metrics either. Once a ticket was done and out the door, we rarely cared about what our original risk estimate was.

On the other side, however, I credit these (sometimes long) conversations with how smoothly the rest of our sprints would go; everybody not only had a good understanding of what exactly needed to be done and why, but we arrived at that understanding as a group. We quantified risk to put a number into a tracking system, but the qualitative understanding of what that number meant is where the value lay.

Agile Testing book club: Let them feel pain

This is the second part in a series of exercises where I highlight one detail from a chapter or two of Agile Testing by Janet Gregory and Lisa Crispin. Part one of the series can be found here. This installment comes from Chapter 3.

Let them feel pain

This chapter is largely about making the transition into agile workflows, and the growing pains that can come from that. I’ve mentioned before on this blog that when I went through that transition, I worried about maintaining the high standard of testing that we had in place. The book is coming from a slightly different angle, trying to overcome reluctance to introduce good quality practices, but the idea is the same. This is the sentence that stuck out most to me in the whole chapter:

Let them feel pain: Sometimes you just have to watch the train wreck.

I did eventually learn this lesson, though it took probably six months of struggling against the tide and a tutorial session by Mike Sowers at STAR Canada on metrics before it really sunk in. Metrics are a bit of a bugaboo in testing these days, but just hold your breath and power through with me for a second. Mike was going over the idea of “Defect Detection Percentage”, which basically just asks what percentage of bugs you caught before releasing. The useful insight was that you can probably push it arbitrarily high, so that you catch 99% of bugs before release, but you have to be willing to spend the time to do it. On the other end, maybe your customers are happy with a few bugs if it means they get access to the new features sooner, in which case you can afford to limit the kinds of testing you do. If you maintain an 80% defect detection percentage and still keep your customers happy, it’s not worth the extra time testing it’d take to get that higher. Yes, this all depends on how you count bugs, and happiness, and which bugs you’re finding, and maybe you can test better instead of faster, but none of that is the point. This is:

If you drop some aspect of testing and the end result is the same, then it’s probably not worth the effort to do it in the first place.

There are dangers here, of course. You don’t want to drop one kind of testing just because it takes a lot of time if it’s covering a real risk. People will be happy only until that risk manifests itself as a nasty failure in the product. As ever, this is an exercise of balancing concerns.
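For the record, the arithmetic behind Defect Detection Percentage is just a ratio; here’s a minimal sketch (the function and variable names are mine, not Mike’s):

```python
def defect_detection_percentage(caught_before_release: int, escaped_to_production: int) -> float:
    """Percentage of all known defects that were caught before release."""
    total_defects = caught_before_release + escaped_to_production
    if total_defects == 0:
        raise ValueError("no defects recorded")
    return 100.0 * caught_before_release / total_defects

# e.g. 80 bugs caught in testing and 20 found later by customers -> 80.0
print(defect_detection_percentage(80, 20))  # 80.0
```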

Being in a bad spot

Part of why this idea stuck with me at the time was that the rocky transition I was going through left me in a pretty bad mental space. I eventually found myself thinking, “Fine, if nobody else cares about following these established test processes like I do, then let everybody else stop doing them and we’ll see how you like it when nothing works anymore.”

This is the cynical way of reading the advice from Janet and Lisa to “let them feel pain” and sit back to “watch the train wreck”. In the wrong work environment you can end up reaching the same conclusion from a place of spite, much like I did. But it doesn’t have to come from a negative place. Mike framed it in terms of metrics and balancing cost and benefit in a way that provided some clarity for an analytical mind like mine, and I think Lisa and Janet are deliberately being a bit facetious here. Now that I’m working in a much more positive space (mentally and corporately) I have a much better way of interpreting this idea: the best motivation for people to change their practices is for them to have a reason to change them.

What actually happened for us when we started to drop our old test processes was that everything was more or less fine. The official number of bugs recorded went down overall, but I suspect that was as much a consequence of the change in our reporting habits in small agile teams as anything else. We definitely still pushed bugs into production, but they weren’t dramatically worse than before. What I do know for sure is that nobody came running to me saying “Greg you were right all along, we’re putting too many bugs into production, please save us!”

If that had happened, then great, there would be motivation to change something. But it didn’t, so we turned our attention to other things entirely.

Introducing change (and when not to)

When thinking about this, I kept coming back to two other passages that I had highlighted earlier in the same chapter:

If you are a tester in an organization that has no effective quality philosophy, you probably struggle to get good quality practices accepted. The agile approach will provide you with a mechanism for introducing good quality-oriented practices.

and also

If you’re a tester who is pushing for the team to implement continuous integration, but the programmers simply refuse to try, you’re in a bad spot.

Agile might provide a way of introducing new processes, but that doesn’t mean anybody is going to want to embrace or even try them. If you have to twist some arms to get a commitment to try something new for even one sprint, and it doesn’t have a positive (or at least neutral) impact, you had better be prepared to let the team drop it (or convince them that the effects need more time to be seen). If everybody already feels that the deployment process is going swimmingly, why do you need to introduce continuous integration at all?

Deciding not to keep something new might be easy, but when already established test processes were on the line, this was a very hard thing for me to do. In a lot of ways it was like being forced to admit that we had been wrong about what testing we needed to be doing, even though all of it had been justified at one time or another. We had to realize that certain tests were generating pain for the team, and the only way we could tell whether that pain was really worth it was to drop them and see what happened.

The takeaway

Today I’m in a much different place. I’m no longer coping with the loss or fragmentation of huge and well established test processes, but rather looking at establishing new processes on my team and those we work with. As tempting as it is to latch onto various testing ideas and “best” practices I hear about, it’s likely wasted effort if I don’t first ask “where are we feeling the most pain?”