Respecting the house rules

I just watched Angie Jones’s keynote from this year’s Selenium Conference, in which she uses elements of game design to talk about test automation. I really like what she said about the rules of the game. Broadly, there are the rules that are part of the game—the best practices and patterns that are well-established parts of the trade—but there are also the house rules.

These are the rules that you make up with your team members… In the automation game, we’re not really accepting of the house rules. People judge you. They think that you’re clueless if you tell them you’re playing by the house rules. I don’t like this. Can we stop doing this? Let’s stop judging people and assuming that everyone is playing with all of the game pieces and that they can do everything by these outside-the-box rules that technically we made up ourselves. Can we allow people to implement some cultural variances to their game to get ahead? People out here are doing the best that they can.

— Angie Jones at SeConf 2018

She goes on to give an example of a team that knows they aren’t doing things by the book, but not for lack of trying. People have to work with what they’ve got. And yes, they can do their best to change it, but even if change is possible it doesn’t happen overnight, and you can’t build perfect (if you even know what perfect would be) overnight.

This is good to remember not just when talking about how other people approach test automation, but within our own teams as well, any time you’re looking at another person’s work. Many times I’ve heard someone complain about how insane or ugly or ridiculous some piece of code is, without considering why it might have made sense to write it like that in the first place. Why it might still make sense to write it like that. No design is sacred, but Angie also makes the point later in her talk that we shouldn’t just be pointing out how not to do things; we should talk about how we can do better. Simply declaring how awful something is showcases the ignorance of the person complaining at least as often as the original author’s.

Worth repeating: “People are out here doing the best they can.”

The full video is below. The house rules part is at about 24:20.

Rethinking velocity

I’ve been thinking about the concept of “velocity” in software development over the last few days, in response to a couple of cases recently where I’ve seen people express dislike for it. I can understand why, and the more I think about it the more I think that the concept does more harm than good.

When I was first introduced to the concept it was with the analogy of running a race. To win, a runner wants to increase their velocity, running the same distance in a shorter amount of time. Even though the distance is the same each time they run the race, with practice the runner can complete it faster and faster.

The distance run, in the software development analogy, is the feature completed. In Scrum, velocity is the measure of how many story points a team completes in a sprint. Individual pieces of work are sized by their “complexity”, so with practice, a team should be able to increase their velocity by finishing work of a given complexity in less time. I have trouble with this, first because story points are problematic at best, so any velocity you try to calculate will be easily manipulated. Since I’ve gotten into trouble with the Scrum word police before, I’m going to put that aside for a moment and say that the units you use don’t matter for what I’m talking about.

It should be fair to say that increasing velocity as Scrum defines it is about being able to do more complex work within a sprint without actually doing more work (more time, more effort), because the team gets better at doing that work. (This works both for a larger amount of work of a fixed complexity and for a sprint’s worth of work that is more complex than could have been done in previous sprints.) Without worrying about some nebulous “points”, the concept is still about being able to do more than you could before in a fixed amount of time.

But that’s not what people actually hear when you say we need to “increase velocity”.

Rather, it feels like being asked to do the work faster and faster. Put the feature factory at full steam! You need to work faster, you need to get more done, you need to be able to finish any feature in less than two weeks. Asking how you can increase velocity doesn’t ask “how can we make this work easier?” It asks, “why is this taking so long?” It feels like a judgement, and so we react negatively to it.

While it certainly does make sense to try to make repeated work easier with each iteration, I don’t think that should be the goal of a team. The point of being agile as I’ve come to understand it (and I’ll go with a small “a” here again to avoid the language police) is to be flexible to change by encouraging shorter feedback cycles, which itself is only possible by delivering incrementally to customers earlier than if we worked in large process-heavy single-delivery projects.

Building working-ish versions of a widget and delivering incremental improvements more often might take longer to get to the finished widget, but with the constant corrections along the way the end result should actually be better than it otherwise would have been. And, of course, if the earlier iterations can meet some level of the customer’s need, then they start getting value far sooner in the process as well. The complexity of the widget doesn’t change, but I’d be happy to take the lower velocity for a higher-quality product.

I’m bringing it back to smaller increments and getting feedback because one of the conversations that led to this thinking was about whether putting our products in front of users sooner was the same as asking for increased velocity. Specifically, I said “If you aren’t putting your designs in front of users, you’re wasting your time.” In a sense, I am asking for something to be faster, and going faster means velocity, so these concepts get conflated.

The “velocity” I care about isn’t the number of points done in a given time frame (or number of stories, or number of any kind of deliverable.) What I care about is, how many feedback points did I get in that time? How many times did we check our assumptions? How sure are we that the work we’ve been doing is actually going to provide the value needed? Maybe “feedback frequency” is what we should be talking about.

[Figure: a straight line from start to finish for a completed widget, with feedback only at the start and end, versus a looping line with seven points of feedback that takes longer to reach the end.]
And this is generously assuming you have a good idea of what needs to be built in the first place.

Importantly, I’m not necessarily talking about getting feedback on a completed (or prototype) feature delivered to production. Much like I argued that you can demo things that aren’t done, there is information to be gained at every stage of development, from initial idea through design to prototypes and final polish. I’ve always been an information junkie, so I see value in any information about the outside world, be it anecdotal oddities or huge statistical models of the behaviours tracked in your app. Even just making observations about the world, learning about your intended users’ needs before you know what to offer them, feeds into this category. Too often this happens only once at the outset. A second time when all is said and done, if you’re lucky. I’m not well versed in the design and user experience side of things yet, but I wager even the big-picture, blue-sky steps and exploration we might want to do can still be checked against the outside world more often than most people think.

Much like “agile” and “automation”, the word “velocity” itself has become a distraction. People associate it with the sense of wanting to do the same thing faster and faster. What I actually want is to do things in smaller chunks, more often. Higher-frequency adjustments to remain agile and build better products, not just rushing for the finish line.

Demo things that aren’t done

When I saw this tweet from John Cutler about demos:

I immediately composed this response, without even thinking about it:

I hesitated just now to say that I wrote it “without thinking”, because apparently quite a few people would agree, in a much less flattering sense. To me, this described a natural and desired state of affairs, an obvious extension of John’s tweet. It wasn’t a new or difficult idea. The internet, however, seemed to think I was an idiot.

Responses ranged from by-the-book “that isn’t how Scrum defines it”:

to pure disbelief that I would utter something so ignorant:

You know you’ve made it on Twitter when someone calls you inexperienced and ignorant!

One person even said it felt like a personal attack, though I have no idea why. Perhaps she’s also a Scrum Guide Fundamentalist? (Alas, Twitter makes it quite hard to find quote-tweets after the fact, so I can’t follow up.)

I thought I should take a few minutes to explain a bit more about what I meant.

Demos aren’t just for Scrum

This is the obvious one. As far as I know, Scrum doesn’t own a trademark on the word “demo”. A follow-up from Oluf Nissen said we shouldn’t be using a word from Scrum if we meant something else; we at least needed to start with The One True Scrum. Well, I did. I did that style of demos when I first started in software development, both before and after adopting Agile, and I’ve been in demos again recently that took that approach: strictly at the end of the sprint, demo what is “done”. It was a dreadful approach for us because it delayed feedback and hid the progress on everything that wasn’t “done” (whatever that means). I have yet to see a development team where that approach works. We started with it, yes, but then moved on.

Demo as soon as you can use feedback

I think John’s original point touched on half of what I saw as the problem with this kind of demo. As I heard Liz Keogh say recently, knowledge goes stale faster than tickets on a board do. If I finish something small on the 3rd day of a 10-day sprint, I’m likely not even to remember that I did it by the time the official demo comes along. For something bigger, if I finish it on the 7th day and put it aside to work on something else for three days, there’s a much higher cost to go back to it than if I had gotten feedback right away. So we should demo work when the work is done, not when a sprint is done.

The other aspect that I added—demoing stuff that isn’t done—stems from the same reason. If there is a demo meeting scheduled, I want to use that time for getting feedback on as many things as possible. There’s a priority to this, of course: stuff that is actually ready to be deployed probably goes first. But there is a spectrum. If I have a working prototype that isn’t production-ready yet, I can still throw it up on the screen to get feedback on how it works and whether there are any obvious complications that I’ve missed. Even earlier, I can go through the approach I’m planning to get feedback on the design before writing any code. I can demo research results, proposed tools or libraries, or code that someone wrote for something else that might be related.

One response on Twitter did touch on this aspect:

Any work I did this sprint, no matter what state it is in, is something I can demo. It’s something I should demo. In my experience there’s very little work that can’t benefit from taking this step, and if it isn’t the right audience to provide feedback then why are they there? The moment something is “done” is actually the worst time for feedback. Once it makes that transition, even if it’s just a mental re-categorization, there’s more inertia against change. Feedback while something is still in flux is like steering a moving car. Feedback after that car has stopped means starting the engine again and putting it in reverse.

Do this very, very often

Someone else suggested doing this only “very very rarely”, lest you build a reputation for vaporware. If I had to bet, I’d guess we were talking about different contexts. So let’s clear that up. I’m not talking about doing this sort of thing in a sales pitch. I’m not a sales guy, but demonstrating features that don’t actually exist yet to customers in order to sell them on a product does seem like a bad idea.

Rather, this is a demo to help answer the question “are we building the right thing?” That’s a question that should be asked early and often. The difference is both in the audience and in being clear about what you’re demonstrating. Is this the right audience to answer that question? And how are they likely to interpret the information you’re giving them?

In the context of active agile development, the time from a demo to a feature in production should be small, even if the demo happens early in development. The longer that lead time is, the more likely a perception of “vapourware” becomes, but the blame for that rests with the long wait itself. Again, this was a quote-tweet so I can’t follow up further, but to me you’d run the same risk just by being transparent about when work on a feature started. Have a long enough cycle time and by the time the feature is deployed people will stop believing it will ever arrive. The solution is to fix (or justify) the long lead time, not hide it.

Demo things that aren’t done

Demos are for feedback, so don’t limit yourself to work that is “done” and therefore least easily changed. Make sure you have an audience for demos that can provide the feedback you need, and solicit that feedback often. Demo things that aren’t done.

Testability as observability and the Accessibility Object Model

I attended a talk today by Rob Dodson on some proposals for the Accessibility Object Model, which aim to give web developers APIs for more easily manipulating the accessibility features of their apps and pages. Rob went through quite a few examples of the benefits the proposed APIs would bring, from simple type checking to defining the semantics of custom elements and user actions. Unsurprisingly, the one use case that stuck out for me was making the accessibility layer testable.
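
To give a flavour of what that could look like, here’s a rough sketch based on my notes from the talk. The ARIA property reflection and ElementInternals pieces come straight from the proposals; the getComputedAccessibleNode call for reading back the computed accessibility tree was still experimental and behind a flag at the time, so treat the exact names and shapes here as illustrative rather than final.

```ts
// Today (roughly): accessibility is set through string attributes.
const saveButton = document.querySelector("#save") as HTMLElement;
saveButton.setAttribute("aria-label", "Save draft");

// ARIA reflection: the same semantics as real properties on the element.
saveButton.role = "button";
saveButton.ariaLabel = "Save draft";

// Default semantics for a custom element via ElementInternals,
// so every instance doesn't need sprouted aria-* attributes.
class ToggleSwitch extends HTMLElement {
  private internals = this.attachInternals();

  constructor() {
    super();
    this.internals.role = "switch";
    this.internals.ariaChecked = "false";
  }
}
customElements.define("toggle-switch", ToggleSwitch);

// The testability piece: read back what the browser actually computed
// for the accessibility tree. This was the experimental, behind-a-flag
// part of the proposals, so the name may not survive as-is.
async function accessibleNameOf(el: Element): Promise<string> {
  const node = await (window as any).getComputedAccessibleNode(el);
  return node.name;
}
```

That last call is the one that matters for testing: instead of asserting on aria-* attributes and trusting that the browser interprets them the way you hope, a test could assert on what assistive technology would actually be given.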

All your automated tests should fail

A crucial little caveat to my statement that automated tests aren’t automated if they don’t run automatically: All your automated tests should fail.

… at least once, anyway.

In the thick of implementing a new feature and coding up a bunch of tests for it, it’s easy to forget to run each one to make sure they’re actually testing something. It’s great to work in an environment where all you have to do is type make test or npm run test and see some green. But if you only ever see green, you’re missing a crucial step.
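
As a small, hypothetical illustration (a Jest-style runner is assumed, and applyDiscount and the file names are made up), here’s a test that goes green on its very first run and will stay green forever, because it never actually checks the behaviour it claims to:

```ts
// discount.ts — the code under test (hypothetical)
export function applyDiscount(price: number, percent: number): number {
  return price - price * (percent / 100);
}

// discount.test.ts — looks like a test, but can never fail
import { applyDiscount } from "./discount";

test("applies a 10% discount", () => {
  const discounted = applyDiscount(100, 10);
  // Bug in the test: it compares the result to itself, so it passes
  // no matter what applyDiscount returns.
  expect(discounted).toBe(discounted);
});
```

Before trusting that green, make the test fail on purpose: assert the real expected value (expect(discounted).toBe(90)), temporarily break applyDiscount or flip the assertion, and watch the runner go red. Only after you’ve seen it fail do you know the test is measuring anything at all.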