In defence of time over story points

I have to admit, there was a time when I was totally on board with estimating work in “story points”. For a brief while I was the resident point-apologist around town, explaining the metaphor that points are like the distance of a race, which different people complete in different times. These days, while estimating complexity has its uses, I’m coming to appreciate those old-fashioned time estimates.

Story points are overrated. Here are a few of the reasons why I think so. Strap yourselves in, this is a bit of a rant. But don’t worry, I’ll hedge at the end.

The scale is arbitrary and unintuitive

How do you measure complexity? What units do you use? Can you count the number of requirements, the acceptance criteria, the number of changes, the smelliness of the code to be changed, the number of test cases required, or the temperature of the room after the developers have debated the best implementation?

To avoid that question, story points use an arbitrary scale with arbitrary increments. It could be the Fibonacci sequence, powers of two, or just numbers 1 through 5. That itself is not necessarily a problem — Fahrenheit and Celsius are both arbitrary scales that measure something objective — but if you ask 10 developers what a “1” means you’ll get zero answers if they haven’t used points yet and 20 answers 6 months later.

I don’t know anybody who has an intuition for estimating “complexity” because there’s no scale for it. There’s nothing to check it against. Meanwhile, we’ve all been developing an intuition for time ever since we started asking “are we there yet?” from the back of the car or complaining that it wasn’t late enough for bedtime.

People claim that you can build your scale by taking the simplest task as a “1” and going from there. But complexity doesn’t scale like that. What’s twice as complicated as, say, changing a configuration value? Even if you compare each ticket being estimated with previous ones, you’re never going to place it in an ordered list (even if binned) of all previous tickets. You’re guaranteed to have some tickets rated at fewer points than others that were actually more “complex”, because you were feeling confident that day or didn’t have a full picture of the work. (Though if you do try this, it can give you the side benefit of questioning whether those old tickets really deserve the points they got.)

It may not be impossible to get a group of people to come to a common intuition around estimating complexity, but it sure takes a lot longer than agreeing on how long a day or a week is. Even if you did reach that common understanding, nobody outside the team will understand it.

Points aren’t what people actually care about

People, be they the business or dependent teams, need to schedule things. If we want to have goals and try to reach them, we have to have some idea of how much we have to do to get there and how much time it will take to do that work. If someone asks “when can we start work on feature B” and you say “well, feature A has 16 points”, their next question is “OK, and how long will that take?” or “and when will it be done?” Points don’t answer either question, and nobody is going to be happy if you tell them the question can’t be answered.

In practice (at least in my experience) people use time anyway. “It’ll only take an hour so I’m giving it one point”. “I’d want to spend a week on this so let’s give it 8 points.” When someone says “This is more complicated so we better give it more points” it’s because they’ll need more time to do it!

Maybe I care about complexity because complexity breeds risk and I’ll need to be more careful testing it. That’s fair, and a decent reason for asking the question, but it also just means you need more time to test it. Complexity is certainly one dimension of risk, but it isn’t the whole story (the probability and impact of the risks manifesting are others).

Even the whole premise of points, to be able to measure the velocity of a team, admits that time is the important factor. Velocity matters because it tells you how much work you can reasonably put into your sprint. But given a sprint length, you already know how many hours you can fit into a sprint. What’s the point of beating around the bush about it?

Points don’t provide feedback

Time has a built-in feedback mechanism that points can’t match. That’ll take me less than a day, I say. Two days later we have a problem.

Meanwhile I say something is 16 story points. Two days later it isn’t done… do I care? Am I running behind? What about 4 weeks later? Was this really a 16 point story or not? Oh, actually, someone was expecting it last Thursday? That pesky fourth dimension again!

Points don’t avoid uncertainty

I once heard someone claim that one benefit of story points is that they don’t change when something unexpected comes up. In one sense that’s true, but only if there’s no feedback on the actual value of points. Counterexamples are about as easy to find as stories themselves.

Two systems interact with each other in a way the team didn’t realize. Someone depends on the legacy behaviour so you need to add a migration plan. The library that was going to make the implementation a single line has a bug in it. Someone forgot to mention a crucial requirement. There are new stakeholders that need to be looped in. Internet Explorer is a special snowflake. The list goes on, and each new thing can make something more complex. If they don’t add complexity after you’ve assigned a number, what creates the complexity in the first place?

Sure, you try to figure out all aspects of the work as early as possible, maybe even before it gets to the point of estimating for a sprint. Bring in the three amigos! But all the work you do to nail down the “complexity” of a ticket isn’t anything special about “complexity” as a concept; it’s exactly the same kind of work you’d do to refine a time estimate. Neither one has a monopoly on certainty.

Points don’t represent work

One work ticket might require entering configurations for 100 clients for a feature we developed last sprint. It’s dead-simple, brainless work, and there’s minimal risk beyond copy-paste errors, which we already have protections for anyway. Complexity? Nah, it’s one point, but I’ll spend the whole sprint doing it.

Another work ticket is replacing a legacy piece of code to support an upcoming batch of features. We know the old code tends to be buggy and we’ve been scared to touch it for years because of that. The new version is already designed but it’ll be tricky to plug in and test thoroughly to make sure we don’t break anything in the process. Not a big job—it can still be done in one sprint—but relatively high risk and complex. 20 points.

So wait, if both of those fit in one sprint, why do I care what the complexity is? There are real answers to that, but answering the question of how much work it is isn’t one of them. If you argue that those two examples should have similar complexity since they both take an entire sprint, then you’re already using time as the real estimator and I don’t need to convince you.

Points are easily manipulated

Like any metric, we must question how points can be manipulated and whether there’s incentive to do so.

In order to see an increase in velocity, you have to have a really well understood scale. The only way to calibrate that scale without using a measurable unit is to spend months “getting a feel for it”.

Now, if you’re looking for ways to increase your velocity, the cheapest way to do that (deliberately or not) is guaranteed to be just assigning more points to things. Now that the team has been at this for a while, one might say, they can better estimate complexity. Fewer unknowns mean more knowns, which are more things to muddy the discussion and push up those complexity estimates. (Maybe you are estimating more accurately, but how can you actually know that?) Voilà. Faster velocity, brought to you in whole by the arbitrary, immeasurable, and subjective nature of points.

Let’s say we avoid that trap, and we actually are getting better at the work we’re doing. Something that was really difficult six months ago can be handled pretty quickly now without really thinking about it. Is that ticket still as complex as it was six months ago? If the work hasn’t changed it should be, but it sure won’t feel as complex. So is your instinct going to be to put the same points on it? Velocity stagnates even though you’re getting more done. Not only can velocity be manipulated through malice, it doesn’t even correlate with the thing you want to measure!

It’s a feature, not a bug

One argument I still anticipate in favour of points is that their incomprehensibility is actually a feature, not a bug. They’re arbitrary on purpose so that it’s harder for people outside the team to translate them into deadlines to be imposed on that team. It’s a protection mechanism. A secret code among developers to protect their own sanity.

If that’s the excuse, then you’ve got a product management problem, not an estimation problem.

In fact it’s a difficulty with metrics, communication, and overzealous people generally, not something special about time. The further metrics get from the thing they measure, the more likely they are to be misused. Points, if anybody understood them, would be just as susceptible to that.

A final defence of complexity

As a replacement for estimating work in time, story points are an almost entirely useless concept that introduces more complexity than it estimates. There’s a lot of jumping through hoops and hand-waving to make it look like you’re not estimating things in time anymore. I’d much rather deal in a quantity we actually have units for. I’m tempted to say save yourself the effort, except for one thing: trying to describe the complexity of proposed work is a useful tool for fleshing out what the work actually requires and for getting everybody on an equal footing in understanding that work. That part doesn’t go away, though the number you assign to it might as well. Just don’t pretend it’s more meaningful than hours on a clock.

How do you know you actually did something?

When I was defending my thesis, one of the most memorable questions I was asked was:

“How do you know you actually did something?”

It was not only an important question about my work at the time, but has also been a very useful question for me in at least two ways. In a practical sense, it comes up as something like “did we actually fix that bug?” More foundationally, it can be as simple as “what is this test for?”

Did it actually work?

The original context was a discussion about the efforts I had gone through to remove sources of noise in my data. As I talked about in my previous post, I was using radio telescopes to measure the hydrogen distribution in the early universe. It was a very difficult measurement because, even in the most optimistic models, the signal was expected to be several orders of magnitude dimmer than everything else the telescopes could see. Not only did we have to filter out radio emission from things in the sky we weren’t interested in; unsurprisingly, there’s also a whole lot of radio coming from Earth. Although we weren’t actually looking in the FM band, it would be a lot like trying to pick out some faint static in the background of the local classic rock radio station.

One of the reasons these telescopes were built in rural India was because there was relatively little radio in use in the area. Some of it we couldn’t do anything about, but it turned out that a fair amount of radio noise in the area was accidental. The most memorable example was a stray piece of wire that had somehow been caught on some high voltage power lines and was essentially acting like an antenna leaking power from the lines and broadcasting it across the country.

We developed a way of using the telescopes to scan the horizon for bright sources of radio signals on the ground and triangulate their locations. I then spent weeks wandering through the countryside with a GPS device in one hand and a radio antenna in the other, trying to find these sources. This is what led to what has since become the defining photo of my graduate school experience:

Standing in a field with a radio antenna, next to a cow
Cows don’t care, they have work to do.

Getting back to the question at hand: after spending weeks wandering through fields tightening loose connections, wrapping things in radio shielding, and getting the local power company to clear stray wires off their transmission lines… did we actually fix anything? Did we reduce the noise in our data? Did we make it easier to see the hydrogen signal we were after?

Did we actually fix the bug?

In many ways, getting rid of those errant radio emitters was like removing bugs from our data. Noisy little things that were there only accidentally and that we could blame for at least some of the noise in our data.

But these weren’t the bugs that you find before you release. Those are the bugs you find because you anticipated that they would come up and planned your testing around them. These things, in contrast, were already out in the field. They were the kinds of bugs that come from a user report or from something weird someone notices in the logs. You don’t know what caused them, you didn’t uncover them in testing, and you’re not sure at first what is triggering them. These are the ones that are met only with a hunch, an idea that “this might have to do with X” and “we should try doing Y and that might fix it.”

But how do you actually know that you’ve fixed it? Ideally you should have a test for it, but coming up with a new test that will catch the regression and seeing it pass isn’t enough. The test needs to fail first. If it doesn’t, you can’t show that fixing the bug is what causes it to pass. If you aren’t able to reproduce the issue in the first place, it doesn’t tell you anything when you make a fix and still don’t see the issue.
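To make that concrete, here’s a minimal sketch in Python with pytest-style conventions. The `price` module and `parse_price` function are hypothetical stand-ins for wherever the bug actually lives; the point is the workflow: run the test against the buggy code first and watch it fail, because only then does a passing run after the fix mean anything.

```python
# test_price.py -- regression test for a hypothetical bug where
# parse_price("1,000.50") dropped everything after the thousands separator.

from price import parse_price  # hypothetical module under test


def test_parse_price_handles_thousands_separator():
    # Run this BEFORE applying the fix: with the bug present it should
    # fail (the buggy parse_price returned 1.0). If it passes even
    # before the fix, the test isn't actually exercising the bug.
    assert parse_price("1,000.50") == 1000.50
```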

For us in the field, the equivalent of reproducing the bug was going out with an antenna, pointing it at what we thought was a source, and hearing static on a handheld radio. The first step after fixing it (or after the power company told us they had fixed it) was to go out with the same antenna and see whether the noise had gone away. The next step was turning on the telescopes and measuring the noise again; push the fix to production and see what happens.

What is this test for?

Where this can go wrong — whether you know there’s a bug there or not — is when you have a test that doesn’t actually test anything useful. The classic example is an automated test that doesn’t actually check anything, but it can just as easily be the test that checks the wrong thing, the test that doesn’t check what it claims, or even the test that doesn’t check anything different from another one.

To me this is just like asking “did you actually do something?”, because running tests that don’t check anything useful doesn’t do anything useful. If your tests don’t fail when there are bugs, then your tests don’t work.
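As a sketch of what I mean (again in Python, with a hypothetical `normalize` function standing in for the code under test): the first test below runs the code but asserts nothing, so it passes no matter what is returned; the second actually pins down behaviour and can fail when something breaks.

```python
from text_utils import normalize  # hypothetical function under test


def test_normalize_runs():
    # Looks like a test, checks nothing: it passes even if normalize()
    # returns garbage, as long as it doesn't raise an exception.
    normalize("  Hello World  ")


def test_normalize_trims_and_lowercases():
    # Actually asserts the behaviour we depend on; this one can fail.
    assert normalize("  Hello World  ") == "hello world"
```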

In a world where time is limited and context is king, whether you can articulate what a test is for can be a useful heuristic for deciding whether or not something is worth doing. It’s much trickier than knowing whether you fixed a specific bug, though. We could go out into the field and hear clearly on the radio that a noise source had been fixed, but it was much harder to answer whether it paid off for our research project overall. Was it worth investing more time into it or not?

How do you know whether a test is worth doing?