The Phoenix Project & the value of anecdotes

I generally have a hard time reading non-fiction books. Though I always love learning, I rarely find them engaging enough that I look forward to continuing them the way I do with a novel. If the topic is related to work, I find myself wanting to take notes, but I usually only make time for reading in bed as I’m going to sleep. Even if I weren’t going to take notes, I don’t like keeping my mind on work 24/7. Starting to contemplate “software testing this”, “Agile that”, and “product management the other” just before bed has about the same effect as downing a cup of coffee.

That’s why I was interested to give The Phoenix Project a try. It’s billed as a novel about DevOps, and a coworker recommended it as providing some interesting insights. I decided to approach it like a novel, without thinking too much about it being work-related, and put it on my nightstand.

The book starts with Bill, the IT manager of a struggling manufacturing company, and follows his attempts to improve how IT is handled. I liked it immediately because the worst-case scenario Bill starts the story in was eerily reminiscent of some of my own experiences in less-than-healthy organizations.

The book reads quite well, though there are times when it feels more like a contrived example from a management textbook than a novel. This is especially true whenever the character Erik appears. His primary role is to show Bill the error of his ways, usually by pointing out that IT work (development and operations alike) is more like manufacturing than he would previously have admitted. This really strains credulity when Erik gives Bill challenges like “call me when you figure out what the four kinds of work are.” As if there were a unique way of carving work up into exactly and only four categories. But Bill does figure them out, correctly, and the plot moves on. Erik’s “Three Ways” get a similar treatment. There are a lot of good lessons in this book, but I doubt anybody who’s been around the block a few times will believe how naturally Bill divines these concepts from first principles, nor how easily he puts them into practice to turn the whole IT culture around, with just one token protest character who gets quickly sidelined.

Nonetheless, I do think the book achieved what it set out to do. The two major themes are worth outlining.

Four types of work

Early on, Bill talks about how he needs to get “situational awareness” of where the problems are. One of the tools he arrives at, with Erik’s prompting, is visualizing the work his department is doing, which makes it obvious that there are types of work that had previously been invisible. The categories he identifies (after Erik tells him there are exactly four) are:

  1. Business Projects – the stuff that pays the bills.
  2. Internal IT Projects – infrastructure changes, improvements, and technical debt would fall here.
  3. Changes – “generated from the previous two”, though honestly I don’t see why this exists independently of the first two.
  4. Unplanned work – arising from incidents or problems that need to be addressed at the expense of other work.

Certainly I like the lesson of visualizing all the work you’re doing and identifying where it comes from, but I don’t understand this particular breakdown. Is a “change” to an existing system really a whole separate category of work from a new project? What value does separating that provide? If the distinction between business and IT projects is significant, why isn’t there a difference between changes to business features and internal IT changes?

It wasn’t clear to me whether these four types of work are something that can be found in the literature, where there might be more justification for them, or whether they are an invention of the authors. I’d be interested in learning more about the history here, if there is any. For what it’s worth, given the same homework of identifying four types of work from the hints Erik gives in the book, I might have broken it down along two axes instead: business vs. internal, and planned vs. unplanned.
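To make that comparison concrete for myself, here’s a minimal sketch of what tagging and tallying work items could look like under both breakdowns. The items, field names, and labels are my own invented examples, not anything from the book.

    from collections import Counter

    # Hypothetical work items tagged two ways: with the book's four types of work,
    # and with the two axes suggested above (business/internal, planned/unplanned).
    # All of this data is made up for illustration.
    work_items = [
        {"title": "Ship invoicing feature", "book_type": "business project", "area": "business", "planned": True},
        {"title": "Upgrade build servers", "book_type": "internal IT project", "area": "internal", "planned": True},
        {"title": "Firewall change for invoicing", "book_type": "change", "area": "internal", "planned": True},
        {"title": "Restore crashed database", "book_type": "unplanned work", "area": "internal", "planned": False},
    ]

    # The book's breakdown: one bucket per type of work.
    print(Counter(item["book_type"] for item in work_items))

    # The alternative 2x2: business/internal crossed with planned/unplanned.
    print(Counter((item["area"], "planned" if item["planned"] else "unplanned")
                  for item in work_items))

Either way, the point of the exercise is the one Bill arrives at: once everything is tagged and counted, the previously invisible work (especially the unplanned kind) shows up in the totals.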

The Three Ways

These are presented as the guiding principles of DevOps. They’re introduced just as heavy-handedly as the four types of work. At one point the authors go so far as to say that anybody managing IT without talking about the Three Ways is “managing IT on dangerously faulty assumptions”, which sounds like nonsense to me, especially given that they aren’t described as assumptions at all. Even still, the idea holds a bit more water for me. I can even see approaching these as a series of prerequisites, each building on the last, or as a maturity model (as long as you’re not the kind of person who is allergic to the words “maturity model”).

The three ways are:

  1. The flow of work from development, through operations, to the customer. I would add something like “business needs” to the front of that chain as well. The idea here is to make work move as smoothly and efficiently as possible from left to right, including limiting work in progress, catching defects early, and our old favourite, CI/CD (see the small sketch after this list for the work-in-progress piece).
  2. The flow of feedback from customers back to the left, fueled by constant monitoring, so that we prevent problems and make changes earlier in the pipeline to save work downstream. The book includes fast automated test suites here, though to me that sounds more like part of the first way.
  3. This is where I got a bit lost; the third way is actually two things, neither of them a “flow” or a “way” in the same sense as the first two. Part one is continual experimentation; part two is using repetition and practice to build mastery. According to the authors, practice and repetition are what make it possible to recover when experiments fail. To me, experiments shouldn’t require “recovery” when they fail at all, since they should be designed to be safe-to-fail in the first place. Maybe I would phrase this third way as constantly perturbing the flows of the first two to find new maxima or minima.
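Since the first way bundles several practices together, here is a toy sketch of just the work-in-progress limit, the piece that is easiest to make concrete. The lane, the tasks, and the limit of 3 are all made up; this is my own illustration, not anything from the book.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Lane:
        """A single 'in progress' lane with a hard WIP limit."""
        wip_limit: int
        items: List[str] = field(default_factory=list)

        def pull(self, item: str) -> bool:
            # Refuse to start new work until something already in flight is finished.
            if len(self.items) >= self.wip_limit:
                return False
            self.items.append(item)
            return True

    in_progress = Lane(wip_limit=3)
    for task in ["feature A", "feature B", "incident fix", "feature C"]:
        started = in_progress.pull(task)
        print(f"{task}: {'started' if started else 'queued (WIP limit reached)'}")

The last task gets queued instead of started, which is exactly the kind of back-pressure on the flow of work that the first way is after.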

What I like about these ways is that I can see using them to describe how a team is working and to target incremental improvements. Not that a team needs 100% mastery of one way before moving on to the next (that’s the straw man that people who rail against “maturity models” tend to focus on), but as a way of seeing some work as foundational to other work, so that you target improvement where it will add the most integrity to the whole process. I’ve been thinking through similar ideas with respect to maturity of testing, and though I haven’t wrestled with this very much yet, it feels promising.

Anecdotal value

I mentioned earlier that one weakness of the book is how easily everything changes for Bill as he convinces the people around him to think about work differently. More to the point, it shows exactly one case study of organizational change. It’s essentially guaranteed that another person and organization with the same problems would react differently to the same attempts at change.

As I was thinking about what I wanted to say about this earlier this week, I saw a tweet from John Cutler asking, roughly, whether people would find value in a set of 50 detailed case studies of how other organizations solved the same problem they were facing.

It surprised me how thoroughly negative the feedback in the replies was. Many people seemed to assume that because solutions in one organization almost never translate exactly to another, hearing how others solved the same problem you’re having is of little value. Some asked how detailed the data would be. Others said they’d rather pay to hire the expert who studied those 50 cases as a consultant to provide customized recommendations for their org. Very few seemed to see it as a learning opportunity: take elements from, say, 23 of the solutions, tweak them to your context to form your own, and set aside the elements and solutions that aren’t relevant to your situation. Even if you do hire the consultant, reading the case studies yourself means you’ll understand the consultant’s recommendations that much better, and can question them critically.

That’s why I think reading The Phoenix Project was worth it, even if it was idealized to the point where the main character seemingly has the Big Manager In The Sky speaking directly to him and everybody else treats him like a messiah (eventually). I can still take from it the examples that might apply to my work, internalize some of those lessons as I evaluate our own pipeline, and put the rest aside. Anecdotes can’t be scientific evidence that a solution to a problem is the right one, but these team dynamics aren’t an exact science anyway.

Agile Testing book club: Everyone is a Tester

If you’ve even dipped your toe into the online testing community, there’s a good chance you’ve seen Agile Testing by Janet Gregory and Lisa Crispin recommended. A couple of weeks ago I got my hands on a copy, and I thought it would be a useful exercise to record the highlights of what I learn along the way. There is a lot in here, and I can tell that what resonates will be different depending on where I am mentally at the time. What I highlight will no doubt be just one facet of each chapter, and a different one from what someone else might take away.

So, my main highlight from Chapters 1 and 2:

everyone is a tester

Janet and Lisa immediately make an interesting distinction that I would never have thought of before: they don’t use “developer” to refer to the people writing the application code, because everybody on an agile team is contributing to the development. I really like that emphasis. I’m currently in an environment where we have “developers” writing code and “QA” people doing the testing, and even though we’re all working together in an agile way, I can see how those labels can create a divide where there shouldn’t be one.

Similarly surprising and refreshing was this:

Some folks who are new to agile perceive it as all about speed. The fact is, it’s all about quality—and if it’s not, we question whether it’s really an “agile” team. (page 16)

The first time I encountered Agile, it was positioned by managers as being all about speed. Project managers (as they were still called) pitched it as delivering something of value sooner than would otherwise be possible, which is still just emphasizing speed in a different way. If you’d asked me, I probably would have said it was about being agile (i.e., able to adapt to change), because that was the aspect that made it worth adopting compared to the environment we worked in before. Saying it’s all about quality? That was new to me, but it made sense immediately, and I love it. Delivering smaller bits sooner is what lets you adapt and change based on feedback, sure, but you do that so you end up with something that everyone is happier with. All of that is about quality.

So now, if everybody on the team should be counted as a developer, and agile is all about delivering quality, it makes perfect sense that the main drive for everybody on the team should be delivering that quality. The next step is obvious: “Everyone on an agile team is a tester.” Everyone on the team is a developer and everyone on the team is a tester. That includes the customer, the business analysts, the product owners, everybody. Testing has to be everybody’s responsibility for agile-as-quality to work. Otherwise, how do you judge the quality of what you’re making? (Yes, the customer might be the final judge of what quality means to them, but they can’t be the only tester any more than a dedicated tester can be.)

Now, the trick is to take that understanding and help a team to internalize it.