Rethinking velocity

I’ve been thinking about the concept of “velocity” in software development over the last few days, in response to a couple of cases where I’ve seen people express dislike for it. I can understand why, and the more I think about it, the more I think the concept does more harm than good.

When I was first introduced to the concept it was with the analogy of running a race. To win, a runner wants to increase their velocity, running the same distance in a shorter amount of time. Even though the distance is the same each time they run the race, with practice the runner can complete it faster and faster.

The distance run, in the software development analogy, is the feature completed. In Scrum, velocity is the measure of how many story points a team completes in a sprint. Individual pieces of work are sized by their “complexity”, so with practice, a team should be able to increase their velocity by finishing work of a given complexity in less time. I have trouble with this first because story points are problematic at best, so any velocity you try to calculate will be easily manipulated. But since I’ve gotten into trouble with the Scrum word police before, I’m going to put that aside for a moment and say that the units you use don’t matter for what I’m talking about.

It should be fair to say that increasing velocity as Scrum defines it is about being able to do more complex work within a sprint without actually doing more work (more time, more effort), because the team gets better at doing that work. (This works either for a larger amount of work of a fixed complexity, or for a sprint’s worth of work that is more complex than could have been done in previous sprints.) Without worrying about some nebulous “points”, the concept is still about being able to do more than you could before in a fixed amount of time.

But that’s not what people actually hear when you say we need to “increase velocity”.

Rather, it feels like being asked to do the work faster and faster. Put the feature factory at full steam! You need to work faster, you need to get more done, you need to be able to finish any feature in less than two weeks. Asking how you can increase velocity doesn’t ask “how can we make this work easier?” It asks, “why is this taking so long?” It feels like a judgement, and so we react negatively to it.

While it certainly does make sense to try to make repeated work easier with each iteration, I don’t think that should be the goal of a team. The point of being agile as I’ve come to understand it (and I’ll go with a small “a” here again to avoid the language police) is to be flexible to change by encouraging shorter feedback cycles, which itself is only possible by delivering incrementally to customers earlier than if we worked in large process-heavy single-delivery projects.

Building working-ish versions of a widget and delivering incremental improvements more often might take longer to get to the finished widget, but with the constant corrections along the way, the end result should actually be better than it otherwise would have been. And, of course, if the earlier iterations can meet some level of the customer’s need, then they start getting value far sooner in the process as well. The complexity of the widget doesn’t change, but I’d be happy to take the lower velocity for a higher quality product.

I’m bringing it back to smaller increments and getting feedback because one of the conversations that led to this thinking was about whether putting our products in front of users sooner was the same as asking for increased velocity. Specifically, I said “If you aren’t putting your designs in front of users, you’re wasting your time.” In a sense, I am asking for something to be faster, and going faster means velocity, so these concepts get conflated.

The “velocity” I care about isn’t the number of points done in a given time frame (or the number of stories, or of any other kind of deliverable). What I care about is: how many feedback points did I get in that time? How many times did we check our assumptions? How sure are we that the work we’ve been doing is actually going to provide the value needed? Maybe “feedback frequency” is what we should be talking about.

[Figure: a straight line from start to finish for a completed widget, with feedback only at the start and end, versus a looping line with seven points of feedback that takes longer to reach the end.]
And this is generously assuming you have a good idea of what needs to be built in the first place.
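
To make the contrast concrete, here’s a minimal sketch with made-up numbers and names of my own: two teams with identical velocity can have wildly different feedback frequencies.

```typescript
// Purely illustrative: two teams complete the same number of points
// over a two-week sprint, but gather feedback at very different rates.
interface SprintRecord {
  storyPointsDone: number;
  feedbackEvents: number; // demos, user tests, assumption checks...
  workingDays: number;
}

const velocity = (s: SprintRecord) => s.storyPointsDone; // points per sprint
const feedbackFrequency = (s: SprintRecord) =>
  s.feedbackEvents / s.workingDays; // checks per working day

const teamA: SprintRecord = { storyPointsDone: 30, feedbackEvents: 2, workingDays: 10 };
const teamB: SprintRecord = { storyPointsDone: 30, feedbackEvents: 7, workingDays: 10 };

console.log(velocity(teamA), velocity(teamB)); // 30 30: indistinguishable
console.log(feedbackFrequency(teamA), feedbackFrequency(teamB)); // 0.2 vs 0.7
```

Same points delivered; very different confidence that they were the right points.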

Importantly, I’m not necessarily talking about getting feedback on a completed (or prototype) feature delivered to production. Much like I argued that you can demo things that aren’t done, there is information to be gained at every stage of development, from initial idea through design to prototypes and final polish. I’ve always been an information junkie, so I see value in any information about the outside world, be it anecdotal oddities or huge statistical models of behaviours tracked in your app. Even just making observations about the world, learning about your intended users’ needs before you know what to offer them, feeds into this category. Too often this happens only once at the outset, and a second time when all else is said and done if you’re lucky. I’m not well versed in the design and user experience side of things yet, but I wager that even the big-picture, blue-sky exploration we might want to do can be checked against the outside world more often than most people think.

Much like “agile” and “automation”, the word “velocity” itself has become a distraction. People associate it with the sense of wanting to do the same thing faster and faster. What I actually want is to do things in smaller chunks, more often. Higher frequency adjustments to remain agile and build better products, not just rushing for the finish line.

Demo things that aren’t done

When I saw this tweet from John Cutler about demos:

I immediately composed this response, without even thinking about it:

I hesitated just now to say that I wrote it “without thinking”, because apparently quite a few people would agree, in a much less flattering sense. This described a natural and desired state of affairs for me, a natural extension of John’s tweet. It wasn’t a new or difficult idea for me. The internet, however, seemed to think I was an idiot.

Responses ranged from by-the-book “that isn’t how Scrum defines it”:

to pure disbelief that I would utter something so ignorant:

You know you’ve made it on Twitter when someone calls you inexperienced and ignorant!

One person even said it felt like a personal attack, though I have no idea why. Perhaps she’s also a Scrum Guide Fundamentalist? (Alas, Twitter makes it quite hard to find quote-tweets after the fact, so I can’t follow up.)

I thought I should take a few minutes to explain a bit more about what I meant.

Demos aren’t just for Scrum

This is the obvious one. As far as I know, Scrum doesn’t own a trademark on the word “demo”. A follow-up from Oluf Nissen said we shouldn’t be using a word from Scrum if we meant something else; we at least needed to start with The One True Scrum. Well, I did. I did that style of demo when I first started in software development, both before and after adopting Agile, and I’ve been in demos again recently that took that approach: strictly at the end of the sprint, demo what is “done”. It was a dreadful approach for us because it delayed feedback and hid the progress on everything that wasn’t “done” (whatever that means). I have yet to see a development team where that approach works. We started with it, yes, but then moved on.

Demo as soon as you can use feedback

I think John’s original point touched on half of what I saw as the problem with this kind of demo. As I heard Liz Keogh say recently, knowledge goes stale faster than tickets on a board do. If I finish something small on the 3rd day of a 10-day sprint, I’m likely to not even remember that I did it by the time the official demo comes along. For something bigger, if I finish it on the 7th day and put it aside to work on something else for three days, there’s a much higher cost to go back to it than if I had feedback right away. So we should demo work when the work is done, not when a sprint is done.

The other aspect that I added—demoing stuff that isn’t done—stems from the same reason. If there is a demo meeting scheduled, I want to use that time for getting feedback on as many things as possible. There’s a priority to this, of course: stuff that is actually ready to be deployed probably goes first. But there is a spectrum. If I have a working prototype that isn’t production-ready yet, I can still throw it up on the screen to get feedback on how it works and whether there are any obvious complications that I’ve missed. Even earlier, I can go through the approach I’m planning to get feedback on the design before writing any code. I can demo research results, proposed tools or libraries, or code that someone wrote for something else that might be related.

One response on Twitter did touch on this aspect:

Any work I did this sprint, no matter what state it is in, is something I can demo. It’s something I should demo. In my experience there’s very little work that can’t benefit from taking this step, and if it isn’t the right audience to provide feedback then why are they there? The moment something is “done” is actually the worst time for feedback. Once it makes that transition, even if it’s just a mental re-categorization, there’s more inertia against change. Feedback while something is still in flux is like steering a moving car. Feedback after that car has stopped means starting the engine again and putting it in reverse.

Do this very, very often

Someone else suggested doing this only “very very rarely”, lest you build a reputation for vaporware. If I had to bet, I’d guess we were talking about different contexts. So let’s clear that up. I’m not talking about doing this sort of thing in a sales pitch. I’m not a sales guy, but demonstrating features that don’t actually exist yet to customers in order to sell them on a product does seem like a bad idea.

Rather, this is a demo to help answer the question “are we building the right thing?” That’s a question that should be asked early and often. The difference is both in the audience and in being clear about what you’re demonstrating. Is this the right audience to answer that question? And how are they likely to interpret the information you’re giving them?

In the context of active agile development, the time from a demo to a feature in production should be small, even if the demo happens early in development. The longer that lead time is, the more likely a perception of “vapourware” becomes, but the blame for that rests with the long wait time itself. Again, this was a quote-tweet, so I can’t follow up further, but to me you’d have the same risk just from being transparent about when work on a feature started. Have a long enough cycle time, and by the time the feature is deployed people will stop believing it will ever arrive. The solution is to fix (or justify) the long wait, not hide it.

Demo things that aren’t done

Demos are for feedback, so don’t limit yourself to work that is “done” and therefore least easily changed. Make sure you have an audience for demos that can provide the feedback you need, and solicit that feedback often. Demo things that aren’t done.

Highlights from one day at Web Unleashed 2018

I pretended to be a Front-End Developer for the day on Monday and attended some sessions at FITC’s Web Unleashed conference. Here are some of the things I found interesting, paraphrasing my notes from each speaker’s talk.

Responsive Design – Beyond Our Devices

Ethan Marcotte talked about the shift from pages to patterns as the central element in web design. Beware the trap of “as goes the mockup, so goes the markup”: avoiding it means designing the priority of content, not just the layout. From there it’s easier to take a device-agnostic approach, where you start with the same baseline layout that works across all devices and enhance it from there based on the features supported. He wrapped up with a discussion of the value that having a style guide brings, and pointed to a roundup of tools for creating one by Susan Robertson, highlighting that centering on a common design language and naming our patterns helps us understand the intent and use them consistently.
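
That enhance-from-a-baseline approach is essentially progressive enhancement driven by feature detection. A minimal sketch of the idea, with class names of my own invention rather than anything from the talk:

```typescript
// Start from a baseline that works everywhere, then enhance based on
// what the browser actually supports, never on what device it claims to be.
function enhanceLayout(): void {
  if (typeof CSS !== "undefined" && CSS.supports("display", "grid")) {
    // Opt in to the richer layout only when the feature exists.
    document.documentElement.classList.add("enhanced-grid");
  }
  if ("IntersectionObserver" in window) {
    // Lazy-load images only where the API is available;
    // the baseline simply loads them eagerly.
    document.documentElement.classList.add("lazy-images");
  }
}

enhanceLayout();
```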

I liked the example of teams struggling with the ambiguity in Atomic Design’s molecules and organisms because I had the same problem the first time I saw it.

Think Like a Hacker – Avoiding Common Web Vulnerabilities

Kristina Balaam reviewed a few common web scripting vulnerabilities. The slides from her talk have demos of each attack on a dummy site. Having worked mostly in the back-end until relatively recently, I’m still learning a lot about cross-site scripting, so despite these being quite basic, I admit I spent a good portion of this talk thinking “I wonder if any of my stuff is vulnerable to this.” She pointed to OWASP as a great resource, especially their security audit checklists and code review guidelines. Their site is a bit of a mess to navigate, but it’s definitely going into my library.
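
For a flavour of the defences involved: the classic fix for cross-site scripting is to escape untrusted output before it reaches an HTML context. A minimal sketch of the idea, not taken from Kristina’s slides; real applications should lean on a templating engine or sanitization library rather than hand-rolling this:

```typescript
// Escape the five characters that let untrusted input break out of
// an HTML context. Illustrative only: frameworks and sanitization
// libraries do this (and more) automatically.
function escapeHtml(untrusted: string): string {
  const replacements: Record<string, string> = {
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  };
  return untrusted.replace(/[&<>"']/g, (ch) => replacements[ch]);
}

// "<script>alert(1)</script>" now renders as inert text instead of executing.
const safeComment = escapeHtml("<script>alert(1)</script>");
```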

As a sidenote, her slides say security should “be paramount to QA”, though verbally she said “as paramount as QA”. Either way, this got me thinking about how it fits into customer-first testing, given that it’s often something that’s invisible to the user (until it’s painfully not). There may be a useful distinction there between modern testing as advocating for the user vs having all priorities set by the user, strongly dependent on the nature of that relationship.

Inclusive Form Inputs

Andréa Crofts gave several interesting examples (along with some do’s and don’ts) of how to make forms inclusive. The theme was generally to think of spectra, rather than binaries. Gender is the familiar one here, but something new to me was that you should offer the ability to select multiple options to allow a richer expression of gender identity. Certainly avoid “other” and anything else that creates one path for some people and another for everybody else. She pointed to an article on designing forms for gender by Sabrina Foncesca as a good reference. Also interesting were citizenship (there are many more legal statuses than just “citizen” and “non-citizen”) and the cultural assumptions built into the common default security questions. Most importantly: explain why you need the data at all, and talk to people about how to best ask for it. There are more resources on inclusive forms on her website.
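
As a concrete illustration of “spectra, rather than binaries”, here’s how such a field might be modelled. This is my own sketch, not something from the talk; the field names and options are purely illustrative:

```typescript
// A field definition that treats gender as a spectrum: multiple
// selections allowed, self-description supported, and the reason
// for collecting the data stated up front.
interface InclusiveSelectField {
  name: string;
  whyWeAsk: string; // shown to the user, not buried in a policy page
  multiple: true; // richer expression than a single forced choice
  options: string[];
  allowSelfDescription: boolean; // free text instead of an othering "Other"
  optional: boolean;
}

const genderField: InclusiveSelectField = {
  name: "gender",
  whyWeAsk: "We use this, in aggregate, to check our service works for everyone.",
  multiple: true,
  options: ["Woman", "Man", "Non-binary", "Two-Spirit"],
  allowSelfDescription: true,
  optional: true,
};
```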

Our Human Experience

Haris Mahmood had a bunch of great examples of how our biases creep into our work as developers. Google Translate, for one, treats the Turkish gender-neutral pronoun “o” differently when used with historically male- or female-dominated jobs, simply as a result of the learning algorithms being trained on historical texts. Software failing to recognize people with dark skin was another poignant example. My takeaway: bias in means bias out.

My favourite question of the day came from a woman in the back: “how do you get white tech bros to give a shit?”

Prototyping for Speed & Scale

Finally, Carl Sziebert ran through a ton of design prototyping concepts. Emphasizing the range of possible fidelity in prototypes really helped to show how many different options there are to get fast feedback on our product ideas. Everything from low-fi paper sketches to high-fi user stories implemented in code (to be evaluated by potential users) can help us learn something. The Skeptic’s Guide to Low Fidelity Prototyping by Laura Busche might help convince people to try it, and Taylor Palmer’s list of every UX prototyping tool ever is sure to have anything you need for the fancier stages. (I’m particularly interested to take a closer look at Framer X for my React projects.)

He also talked about prototypes as a way to choose between technology stacks, a compromise and collaboration tool, a way of identifying “cognitive friction” (repeated clicks or long pauses between actions showing that something isn’t behaving the way the user expects, for example), and a way of centering design around humans. All aspects that I want to start embracing. His slides have a lot of great visuals to go with these concepts.
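
Those cognitive-friction signals are straightforward to instrument in a prototype. A rough sketch of rage-click detection, my own construction rather than anything Carl showed:

```typescript
// Flag "rage clicks": several clicks on the same element in a short
// window usually means the UI isn't responding the way the user expects.
const WINDOW_MS = 2000;
const CLICK_THRESHOLD = 3;
const recentClicks = new Map<EventTarget, number[]>();

document.addEventListener("click", (event) => {
  if (!event.target) return;
  const now = Date.now();
  // Keep only the clicks on this element that fall within the window.
  const times = (recentClicks.get(event.target) ?? []).filter(
    (t) => now - t < WINDOW_MS
  );
  times.push(now);
  recentClicks.set(event.target, times);
  if (times.length >= CLICK_THRESHOLD) {
    // In a prototype evaluation, this is a point of cognitive friction
    // worth asking the user about.
    console.warn("Possible cognitive friction on", event.target);
  }
});
```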

Part of the fun of being at a front-end and design-focused conference was seeing how many common themes there are with the conversation happening in the testing space. Carl mentioned the “3-legged stool” metaphor that they use at Google—an engineer, a UX designer, and a PM—that is a clear cousin (at least in spirit if not by heritage) of the classic “3 amigos”—a business person, developer, and tester.

This will all be good fodder for when I lead a UX round-table at the end of the month. You’d be forgiven for forgetting that I’m actually a tester.

The Phoenix Project & the value of anecdotes

[Image: book cover of The Phoenix Project]

I generally have a hard time reading non-fiction books. Though I always love learning, I rarely find them engaging enough that I look forward to continuing them the way I do with a novel. If the topic is related to work, I usually find myself wanting to take notes, but I usually only make time for reading in bed as I’m going to sleep. And even without taking notes, I don’t like keeping my mind on work 24/7. Downing a cup of coffee just before bed has about the same effect as starting to contemplate “software testing this”, “Agile that”, and “product management the other”.

That’s why I was interested to give The Phoenix Project a try. It’s billed as a novel about DevOps, and was recommended by a coworker as providing some interesting insights. I decided to approach it like a novel, without thinking too much about it being work-related, and put it on my nightstand.

The book starts with Bill, the IT manager of a struggling manufacturing company, and follows his attempts to make improvements to how IT is handled. I liked it immediately because the worst-case scenario Bill starts the story in was eerily similar to some of my own experiences in less-than-healthy organizations.

The book reads quite well, though there are times where it feels more like a contrived example from a management textbook than a novel. This is especially true whenever the character Erik appears. His primary role is to show Bill the error of his ways, usually by pointing out that IT work—development and operations—is more like manufacturing than he would previously have admitted. This really strains credulity when Erik gives Bill challenges like “call me when you figure out what the four kinds of work are,” as if there is a unique way of describing types of work with exactly and only four categories. But Bill does figure them out, correctly, and the plot moves on. Erik’s “Three Ways” get a similar treatment. There are a lot of good lessons in this book, but I doubt anybody who’s been around the block a few times will believe how naturally Bill divines these concepts from first principles. Nor how easily he puts them into practice to turn the whole IT culture around, with just one token protest character who gets quickly sidelined.

Nonetheless, I do think the book achieved what it set out to do. The two major themes are worth outlining.

Four types of work

Early on Bill talks about how he needs to get “situational awareness” of where the problems are. One of the tools he arrives at, with Erik’s prompting, is visualizing the work his department is doing, which makes it obvious that there are types of work that had previously been invisible. The categories he identifies (after Erik told him that there were exactly four) are:

  1. Business Projects – the stuff that pays the bills.
  2. Internal IT Projects – Infrastructure changes, improvements, and technical debt would fall here.
  3. Changes – “generated from the previous two”, though honestly I don’t see why this exists independently of the first two.
  4. Unplanned work – arising from incidents or problems that need to be addressed at the expense of other work.

Certainly I like the lesson of visualizing all the work you’re doing and identifying where it comes from, but I don’t understand this particular breakdown. Is a “change” to an existing system really a whole separate category of work from a new project? What value does separating that provide? If the distinction between business and IT projects is significant, why isn’t there a difference between changes to business features and internal IT changes?

It wasn’t clear to me if these four types of work are something that can be found in the literature, where there might be some more justification for them, or if they are an invention of the authors. I’d be interested in learning more about the history here, if there is any. For what it’s worth, given the same homework of identifying four types of work from the hints Erik gave in the book, I might have broken it into either business or internal, and either planned or unplanned.
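
That two-axis alternative is trivial to express, which is part of why it feels more natural to me. A quick sketch (mine, not the book’s), with made-up example items:

```typescript
// The book's four categories, recast as two independent axes.
type Audience = "business" | "internal";
type Planning = "planned" | "unplanned";

interface WorkItem {
  description: string;
  audience: Audience;
  planning: Planning;
}

// Every combination is meaningful, and "changes" no longer need their
// own bucket: a change is just planned work for one audience or the other.
const items: WorkItem[] = [
  { description: "New checkout feature", audience: "business", planning: "planned" },
  { description: "Upgrade database version", audience: "internal", planning: "planned" },
  { description: "Hotfix for checkout outage", audience: "business", planning: "unplanned" },
  { description: "Rebuild failed CI agent", audience: "internal", planning: "unplanned" },
];
```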

The Three Ways

These are presented as the guiding principles for DevOps. They’re introduced just as heavy-handedly as the four types of work. At one point the book goes as far as saying that anybody managing IT without talking about the Three Ways is “managing IT on dangerously faulty assumptions”, which sounds like nonsense to me, especially given that they aren’t described as assumptions at all. Even still, the idea holds a bit more water for me. I can even see approaching these as a series of prerequisites, each building on the last, or a maturity model (as long as you’re not the kind of person who is allergic to the words “maturity model”).

The three ways are:

  1. The flow of work from development, through operations, to the customer. I would add something like “business needs” to the front of that chain as well. The idea here is to make work move as smoothly and efficiently as possible from left to right, including limiting work in progress, catching defects early, and our old favourite CI/CD. (There’s a small sketch of WIP limiting after this list.)
  2. The flow of feedback from customers back to the left, to be sure we prevent problems and make changes earlier in the pipeline to save work downstream, fueled by constant monitoring. The book includes fast automated test suites here, though to me that sounds more like part of the first way.
  3. This is where I got a bit lost; the third way is actually two things, neither of them “flows” or “ways” in the same sense as the first two. Part one is continual experimentation; part two is using repetition and practice as a way to build mastery. According to the authors, practice and repetition are what make it possible to recover when experiments fail. To me, experiments shouldn’t require “recovery” when they fail at all, since they should be designed to be safe-to-fail in the first place. Maybe I would phrase this third way as constantly perturbing the flows of the first two to find new maxima or minima.
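
Of these, the First Way’s WIP limiting is the easiest to make concrete. A toy sketch, my own rather than anything from the book:

```typescript
// A toy work board with a WIP limit: new work waits in the backlog
// rather than fragmenting the team's attention. Illustrative only.
class WipLimitedBoard {
  private inProgress = new Set<string>();
  private backlog: string[] = [];

  constructor(private readonly wipLimit: number) {}

  add(task: string): void {
    if (this.inProgress.size < this.wipLimit) {
      this.inProgress.add(task); // start immediately
    } else {
      this.backlog.push(task); // wait: flow beats utilization
    }
  }

  finish(task: string): void {
    this.inProgress.delete(task);
    const next = this.backlog.shift();
    if (next) this.inProgress.add(next); // pull the next item, don't push
  }
}

const board = new WipLimitedBoard(2);
["A", "B", "C"].forEach((t) => board.add(t)); // "C" waits in the backlog
board.finish("A"); // "C" is pulled into progress
```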

What I like about these ways is that I can see applying them as a way to describe how a team is working and targeting incremental improvements. Not that a team needs to have 100% mastery of one way before moving on to the next (this is the straw man that people who rail against “maturity models” tend to focus on), but as a way of seeing some work as foundational to other work so as to target improvement where it will add the most integrity to the whole process. I’ve been thinking through similar ideas with respect to maturity of testing, and though I haven’t wrestled with this very much yet it feels promising.

Anecdotal value

I mentioned earlier that one weakness in the book is how easily everything changes for Bill as he convinces people around him to think about work differently. More to the point, it shows exactly one case study of organizational change. It’s essentially guaranteed that any person and organization with the same problems would react differently to the same attempts at change.

As I was thinking about what I wanted to say about this earlier this week, I saw this tweet from John Cutler:

It surprised me how thoroughly negative the feedback was in the replies. Many people seemed to immediately think that because solutions in one organization almost never translate exactly to another, hearing how others solved the same problem you’re having is of little value. Some asked how detailed the data would be. Others said they’d rather pay to hire the expert who studied those 50 cases as a consultant to provide customized recommendations for their org. Very few seemed to see it as a learning opportunity from which you could take elements of, say, 23 different solutions and tweak them to your context to form your own, while ignoring the elements and other solutions that aren’t as relevant to your own situation. Even if you hire the consultant as well, reading the case studies yourself will mean you will understand the consultant’s recommendations that much better, and can question them critically.

That’s why I think reading The Phoenix Project was worth it, even if it was idealized to the point where the main character seemingly has the Big Manager In The Sky speaking directly to him and everybody else treats him like a messiah (eventually). I can still take from it the examples that might apply to my work, internalize some of those lessons as I evaluate our own pipeline, and put the rest aside. Anecdotes can’t be scientific evidence that a solution to a problem is the right one, but these team dynamics aren’t an exact science anyway.

CAST 2018 Debrief

Last week I was lucky enough to attend the Conference of the Association for Software Testing, CAST 2018. I had been to academic conferences with collaborators before, and a local STAR conference here in Toronto, but this was my first time travelling for a professional conference in testing. The experience ended up being quite trying, and I learned as much about myself as about testing. I don’t feel the need to detail my whole experience here, but I will highlight the top 5 lessons I took away from it.

1. “Coaching” is not what I hoped it was

I’ve been hearing a lot about “coaching” as a role for testers lately. I went to both Anne-Marie Charrett’s tutorial and Jose Lima’s talk on the subject thinking that it was a path I wanted to pursue. I went in thinking of coaching as a tool for changing minds, instilling some of my passion for testing in the people I work with, and building up a culture of quality. I came away with a sense of coaching as more of a discussion method, a passive enterprise available for those who want to engage in it and useless for the uninterested. I suspect those who work as coaches would disagree, but that was nonetheless my impression.

One theme that came up from a few people, not just the speakers, was a distinction between coaching and teaching. This isn’t something I really understand, and is likely part of why I was expecting something else from the subject. I taught university tutorials for several years and put a lot of effort into designing engaging classes. To me, what I saw described as coaching felt like a subset of teaching, a particular style of pedagogy, not something that stands in contrast to it. Do people still hear “teaching” and think “lecturing”? I heard “coaching testing” and expected a broader mandate of education and public outreach that I associate with “teaching”.

Specifically, I was looking for insight on breaking through to people who don’t like testing and don’t want to learn about it, but I very quickly saw that “coaching” wasn’t going to help me with that. At least not on the level at which we got into it within one workshop. I am sure this is something that would be interesting to hash out in a (meta) coaching session with people like Anne-Marie and Jose, or even James Bach and Michael Bolton: i.e. people who have much more knowledge about how coaching can be used than I do.

2. I’m more “advanced” than I thought

My second day at the conference was spent in a class billed as “Advanced Automation” with Angie Jones (@techgirl1908). I chose this tutorial over other equally enticing options because it looked like the best opportunity for something technically oriented, and would produce a tangible artefact — an advanced automated test suite — that I could show off at home and assimilate aspects of into my own automation work.

Angie did a great job of walking us through implementing the framework and justifying the thought process each step of the way. It was a great exercise for me to go through implementing a Java test suite from scratch, including a proper Page Object Model architecture and a TDD approach. It was my first time using Cucumber in Java, and I quite enjoyed the commentary on hiring API testers as we implemented a test with Rest-Assured.
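
For flavour, the Page Object pattern we built looks roughly like this. I’ll sketch it in TypeScript against an invented driver interface rather than reproduce Angie’s Java; the URL and selectors here are hypothetical:

```typescript
// A Page Object hides locators and interaction details behind an
// intention-revealing API, so tests read as user behaviour.
// `Driver` is a stand-in for whatever automation library you use.
interface Driver {
  goto(url: string): Promise<void>;
  type(selector: string, text: string): Promise<void>;
  click(selector: string): Promise<void>;
  text(selector: string): Promise<string>;
}

class LoginPage {
  constructor(private driver: Driver) {}

  async open(): Promise<void> {
    await this.driver.goto("https://example.test/login"); // hypothetical URL
  }

  async logInAs(user: string, password: string): Promise<void> {
    await this.driver.type("#username", user);
    await this.driver.type("#password", password);
    await this.driver.click("#submit");
  }

  async errorMessage(): Promise<string> {
    return this.driver.text(".error-banner");
  }
}
```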

Though I did leave with that tangible working automation artefact at the end of the day, I found a reverse-Pareto principle at play, with 80% of the value coming from the last 20% of the time. This is what led to my takeaway that I might be more advanced than I had thought. I still don’t consider myself an expert programmer, but I think I could have gotten a lot further had we started with a basic test case already implemented. Interestingly, Angie’s own description for another workshop of hers says “It’s almost impossible to find examples online that go beyond illustrating how to automate a basic login page,” though that’s the example we spent roughly half the day on. Perhaps we’ve conflated “advanced” with “well designed”.

3. The grass is sometimes greener

In any conference, talks will vary both in quality generally and in how much they resonate with any attendee specifically. I was thrilled by John Cutler’s keynote address on Thursday — he struck many chords about the connection between UX and testing that align very closely with my own work — but meanwhile Amit Wertheimer just wrote that he “didn’t connect at all” to it. I wasn’t challenged by Angie’s advanced automation class, but certainly others in the room were. This is how it goes.

In a multi-track conference, there’s an added layer: there are other rooms you could be in that you might get more value from. At one point, I found myself getting dragged down by a feeling that I was missing out on better sessions on the other side of the wall. Even though there were plenty of sessions where I know I was in the best room for myself, the chatter on Twitter and the conference Slack workspace sometimes painted a picture of very green grass elsewhere. Going back to Amit’s post, he called Marianne Duijst’s talk about narratology and Harry Potter one of the highlights of the whole conference, and I’ve seen a few others echo the same sentiment on Twitter. I had it highlighted on my schedule from day one, but at the last minute I was enticed by the lightning talks session. I got pages of notes from those talks, but I can’t help but wonder what I missed. Social media FOMO is real, and it takes a lot of mental energy to break out of that negative mental cycle.

Luckily, the flip side of that kind of FOMO is that asking about a session someone else was in, or gave themselves, is a great conversation starter during the coffee breaks.

4. Networking is the worst

For other conferences I’ve been to, I had the benefit either of going with a group of collaborators I already knew or of being a local, so I could go home at 5 and not worry about dinner plans. Not true when flying alone across the continent. I’ve always been an introvert at the best of times, and I had a hard time breaking out of that to go “network”.

I was relieved when I came across Lisa Crispin writing about how she similarly struggled when she first went to conferences, although that might have helped me more last week than today. Though I’m sure it was in my imagination just as much as it was in hers at her first conference, I definitely felt the presence of “cliques” that made it hard to break in. Ironically, those that go to conferences regularly are less likely to see this happening, since those are the people that already know each other. Speakers and organizers even less so.

It did get much easier once we moved to multiple shorter sessions in the day (lots of coffee breaks) and an organized reception on Wednesday. I might have liked an organized meet-and-greet on the first day, or even the night before the first tutorial, where an introvert like me can lean a bit more on the social safety net of mandated mingling. Sounds fun when I put it like that, right?

I eventually got comfortable enough to start talking with people and go out on a limb here or there. I introduced myself to all the people I aimed to and asked all the questions I wanted to ask… eventually. But there were also a lot of opportunities that I could have taken better advantage of. At my next conference this is something I can do better for myself, though it also gives me a bit more sensitivity about what inclusion means.

5. I’m ready to start preparing my own talk

Despite my introverted tendencies I’ve always enjoyed teaching, presenting demos, and giving talks. I’ve had some ideas percolating in the back of my mind about what I can bring to the testing community and my experiences this week — in fact every one of the four points above — have confirmed for me that speaking at a conference is a good goal for myself, and that I do have some value to add to the conversation. I have some work to do.

Bonus lessons: Pronouncing “Cynefin” and that funny little squiggle

Among the speakers, as far as notes-written-per-sentence-spoken goes, Liz Keogh was a pretty clear winner by virtue of a stellar lightning talk. Her keynote and the conversation we had afterward, however, are where I picked up these bonus lessons. I had heard of Cynefin before, but two questions never seemed to be answered in the descriptions I had read, until this week:

[Figure: the four domains of Cynefin]

  1. It’s pronounced like “Kevin” but with an extra “N”
  2. The little hook or squiggle at the bottom of the Cynefin figure you see everywhere is actually meaningful: like a fold in some fabric, it indicates a change in height, off which you can fall from the obvious/simple domain in the lower right into the chaotic domain in the lower left.