CAST 2018 Debrief

Last week I was lucky enough to attend the Conference of the Association for Software Testing, CAST 2018. I had been to academic conferences with collaborators before, and a local STAR conference here in Toronto, but this was my first time travelling for a professional conference in testing. The experience turned out to be quite trying, and I learned as much about myself as about testing. I don’t feel the need to detail my whole experience here, but I will highlight the top 5 lessons I took away from it.

1. “Coaching” is not what I hoped it was

I’ve been hearing a lot about “coaching” as a role for testers lately. I went to both Anne-Marie Charrett’s tutorial and Jose Lima’s talk on the subject thinking that it was a path I wanted to pursue. I went in thinking of coaching as a tool for changing minds, instilling some of my passion for testing in the people I work with, and building up a culture of quality. I came away with a sense of coaching as more of a discussion method, a passive enterprise available to those who want to engage in it and useless for the uninterested. I suspect those who work as coaches would disagree, but that was nonetheless my impression.

One theme that came up from a few people, not just the speakers, was a distinction between coaching and teaching. This isn’t something I really understand, which is likely part of why I was expecting something else from the subject. I taught university tutorials for several years and put a lot of effort into designing engaging classes. To me, what I saw described as coaching felt like a subset of teaching, a particular style of pedagogy, not something that stands in contrast to it. Do people still hear “teaching” and think “lecturing”? I heard “coaching testing” and expected the broader mandate of education and public outreach that I associate with “teaching”.

Specifically, I was looking for insight into breaking through to people who don’t like testing, and who don’t want to learn about it, but very quickly saw that “coaching” wasn’t going to help me with that. At least not at the level we got into it within one workshop. I am sure that this is something that would be interesting to hash out in a (meta) coaching session with people like Anne-Marie and Jose, or even James Bach and Michael Bolton: i.e. people who have much more knowledge about how coaching can be used than I do.

2. I’m more “advanced” than I thought

My second day at the conference was spent in a class billed as “Advanced Automation” with Angie Jones (@techgirl1908). I chose this tutorial over other equally enticing options because it looked like the best opportunity for something technically oriented, and it would produce a tangible artefact — an advanced automated test suite — that I could show off at home and whose aspects I could assimilate into my own automation work.

Angie did a great job of walking us through implementing the framework and justifying the thought process each step of the way. It was a great exercise for me to go through implementing a Java test suite from scratch, including a proper Page Object Model architecture and a TDD approach. It was my first time using Cucumber in Java, and I quite enjoyed the commentary on hiring API testers as we implemented a test with Rest-Assured.
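
For anyone who hasn’t seen the pattern before, here is a minimal sketch of what a Page Object can look like in Java with Selenium WebDriver. This is my own illustration rather than code from Angie’s workshop; the LoginPage class and its element locators are hypothetical.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // A minimal Page Object: the locators and interactions for one page live here,
    // so tests can describe user behaviour instead of element lookups.
    // (Illustrative only; the class name and locators below are made up.)
    public class LoginPage {
        private final WebDriver driver;

        private final By usernameField = By.id("username");
        private final By passwordField = By.id("password");
        private final By loginButton = By.id("login");

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // Fill in the credentials; returning the page object keeps calls chainable.
        public LoginPage enterCredentials(String username, String password) {
            driver.findElement(usernameField).sendKeys(username);
            driver.findElement(passwordField).sendKeys(password);
            return this;
        }

        // Submit the form. In a fuller framework this would return the next page object.
        public void submit() {
            driver.findElement(loginButton).click();
        }
    }

A test would then read something like new LoginPage(driver).enterCredentials("user", "pass").submit(), with the element details hidden behind the page class; the workshop’s framework built this out much further, layering in Cucumber and TDD.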

Though I did leave with that tangible working automation artefact at the end of the day, I found a reverse-Pareto principle at play, with 80% of the value coming from the last 20% of the time. This is what led to my takeaway that I might be more advanced than I had thought. I still don’t consider myself an expert programmer, but I think I could have gotten a lot further had we started with a basic test case already implemented. Interestingly, Angie’s own description for another workshop of hers says “It’s almost impossible to find examples online that go beyond illustrating how to automate a basic login page,” though that’s the example we spent roughly half the day on. Perhaps we’ve conflated “advanced” with “well designed”.

3. The grass is sometimes greener

At any conference, talks will vary both in overall quality and in how much they resonate with any particular attendee. I was thrilled by John Cutler’s keynote address on Thursday — he struck many chords about the connection between UX and testing that align very closely with my own work — but Amit Wertheimer just wrote that he “didn’t connect at all” to it. I wasn’t challenged by Angie’s advanced automation class, but certainly others in the room were. This is how it goes.

A multi-track conference adds another layer: there are other rooms you could be in, and you might get more value from them. At one point, I found myself getting dragged down by the feeling that I was missing out on better sessions on the other side of the wall. Even though there were plenty of sessions where I know I was in the best room for me, the chatter on Twitter and the conference Slack workspace sometimes painted a picture of very green grass elsewhere. Going back to Amit’s post, he called Marianne Duijst’s talk about Narratology and Harry Potter one of the highlights of the whole conference, and I’ve seen a few others echo the same sentiment on Twitter. I had it highlighted on my schedule from day one, but at the last minute I was enticed by the lightning talks session. I got pages of notes from those talks, but I can’t help wondering what I missed. Social media FOMO is real, and it takes a lot of mental energy to break out of that negative mental cycle.

Luckily, the flip side of that kind of FOMO is that asking about a session someone else was in, or gave themselves, is a great conversation starter during the coffee breaks.

4. Networking is the worst

At other conferences I’ve been to, I had the benefit either of going with a group of collaborators I already knew or of being a local, so I could go home at 5 and not worry about dinner plans. Not true when flying alone across the continent. I’ve always been an introvert at the best of times, and I had a hard time breaking out of that to go “network”.

I was relieved when I came across Lisa Crispin writing about how she similarly struggled when she first went to conferences, although that might have helped me more last week than today. Though I’m sure it was in my imagination just as much as it was in hers at her first conference, I definitely felt the presence of “cliques” that made it hard to break in. Ironically, those that go to conferences regularly are less likely to see this happening, since those are the people that already know each other. Speakers and organizers even less so.

It did get much easier once we moved to multiple shorter sessions in the day (lots of coffee breaks) and an organized reception on Wednesday. I might have liked an organized meet-and-greet on the first day, or even the night before the first tutorial, where an introvert like me can lean a bit more on the social safety net of mandated mingling. Sounds fun when I put it like that, right?

I eventually got comfortable enough to start talking with people and go out on a limb here or there. I introduced myself to all the people I aimed to and asked all the questions I wanted to ask… eventually. But there were also a lot of opportunities that I could have taken better advantage of. At my next conference, this is something I can do better for myself, though it also gives me a bit more sensitivity about what inclusion means.

5. I’m ready to start preparing my own talk

Despite my introverted tendencies I’ve always enjoyed teaching, presenting demos, and giving talks. I’ve had some ideas percolating in the back of my mind about what I can bring to the testing community and my experiences this week — in fact every one of the four points above — have confirmed for me that speaking at a conference is a good goal for myself, and that I do have some value to add to the conversation. I have some work to do.

Bonus lessons: Pronouncing “Cynefin” and that funny little squiggle

Among the speakers, as far as notes-written-per-sentence-spoken goes, Liz Keogh was a pretty clear winner by virtue of a stellar lightning talk. Her keynote and the conversation we had afterward, however, are where I picked up these bonus lessons. I had heard of Cynefin before, but I always had two questions that never seemed to be answered in the descriptions I had read, until this week:

A figure showing the four domains of Cynefin

  1. It’s pronounced like “Kevin” but with an extra “N”
  2. The little hook or squiggle at the bottom of the Cynefin figure you see everywhere is actually meaningful: like a fold in some fabric, it indicates a change in height between the obvious/simple domain in the lower right and the chaotic domain in the lower left, so you can fall from the former into the latter.

Debating the Modern Testing Principles

Last week I had the opportunity to moderate a discussion on the Modern Testing Principles being developed by Alan Page and Brent Jensen with a group of QA folks. I’m a relative late-comer to the AB Testing podcast, having first subscribed somewhere around Episode 60, but have been quite interested in this take on testing. Armed primarily with the 4 episodes starting from their initial introduction and some back-up from the article by the Ministry of Testing, we had a pretty interesting discussion.

Discussing the seven principles

After giving a bit of a preamble based on the mission statement—“Accelerate the Achievement of Shippable Quality”—we went through the principles one by one. For each one I asked (roughly) these questions:

  1. What do you think this statement means?
  2. How do you feel about this as a core principle of testing?
  3. How well (or not) does this describe your approach to testing?
  4. Is this a principle you would adopt?

For the first four principles, there was a lot of agreement. We discussed building better products versus trying to assure the product’s quality, the importance of prioritization of tests and identifying bottlenecks, leaky safety nets, data-driven decisions, and the easy alignment with a whole-team Agile mindset. Then it started to get a bit more interesting.

Disagreement one: Judging Quality

The fifth principle started to get problematic for some people:

5. We believe that the customer is the only one capable to judge and evaluate the quality of our product.

There was a lot of debate here. Although a couple of people were on board right away, the biggest question for most in the room was: who is the “customer”? Lots of people could fall into that category. Internally there are stakeholders in different parts of the business, product owners in our team, managers in our department, and the team itself to some degree. We also have both our current end users and the people we want to attract into becoming regular users. Some of you may have simpler environments with a clear-cut individual client, but others could be even more complicated.

What we did agree on was that you have to use the product to be able to judge it. The people testing have to think like the customer and have a good idea of what their expectations are. Interestingly, when we changed “customer” to “anybody who uses the product”, everybody around the table could agree with the principle as a whole.

I suspect, though, that if we only say “anybody who uses the product is capable of judging and evaluating the quality of the product”, the statement loses its power. My feeling is that if this principle feels problematic in its original form, you may just not have a firm idea of who your customer really is. This just highlights for me how important it is to ask whose opinion, at the end of the day, is the one that counts.

Disagreement two: The dedicated specialist

It’s likely unsurprising that a principle suggesting the elimination of the testing specialist would raise a few eyebrows in a group of testing specialists.

7. We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.

There was no disagreement with the first clause. Many people immediately connected it with the 4th principle, to “coach, lead, and nurture the team towards a more mature quality culture”. Surely endeavouring to “expand the testing abilities and know-how across the team” is a good way to achieve that. When the group initially discussed the 4th principle, we were all in agreement that we wanted to drive a culture of quality and a whole-team approach to testing.

I am still unsure whether the disagreement with eliminating the dedicated specialist was just a knee-jerk reaction or not. I tried to use an analogy of the tester-as-Mary-Poppins: She stays only as long as she is truly needed, and then takes to the wind again to find a new family in need. It didn’t seem to sell the point. We agreed that our teams should be able to function without us… temporarily. There was one assertion that QA was the most important part of the whole process and therefore could not be eliminated. Another one that the skills are different from other roles. And yet another that not everybody wants to be a dev. (Although, of course, the principle doesn’t end with “… so that they can become a developer.”)

Additional context from Alan and Brent helps here too. In some of the episodes after the principles were first introduced, they do talk about how not every tester needs to be a Capital-M Capital-T Modern Tester. I don’t believe the intent is to eventually eliminate the need for testing specialists full stop. It’s not even a given that the specialist would be eliminated on a particular team, just that the need for one should be reduced. To me this principle is a corollary of reducing bottlenecks and building up the testing know-how on the team, albeit phrased more provocatively.

Nonetheless, the closest we got to agreement on this was to say we could eventually eliminate the singular position of a testing specialist, but not eliminate the function.

Is that any different or just an easier pill to swallow?

Wrapping up

Both of these, the two biggest objections to the Modern Testing Principles, have a common theme. The 5th principle asserts that testers aren’t the judges of quality, or even truly capable of evaluating it. The 7th pushes the idea that, given the right expertise and know-how, a testing specialist may not even be needed. Both of these can feel like a threat. Both speak to a fear of losing agency. Alan and Brent also talked about this in the podcasts: one of the motivations for formulating these principles was to prepare people for how testing is changing so that we aren’t all caught unprepared. While I have doubts that there’s an apocalyptic testing singularity coming—something I plan to write about in another post—it does emphasize how important it is to be prepared for new ways of thinking in the industry.

To wrap up the discussion, we did a quick summary of the words and concepts that had come up as common themes in the principles. Then, to compare, I asked for testing concepts or buzzwords that had been conspicuously absent. Chief among the latter were automation, defect tracking, reporting, traceability, and documentation, and not once did we talk about writing a test case. Calling out what was not explicitly mentioned in the principles seemed to be a great way to show what makes this a different approach compared to a lot of our day-to-day experience. Though some of those “missing” elements may come out naturally as tools necessary to really embrace the principles, I felt it important to highlight that they were not the goal in and of themselves.

In the end, these differences and the disagreements were the most interesting part of the Modern Testing Principles. Alan described presenting the principles at Test Bash in much the same way—it’s not much fun if everybody just agrees with everything! Hopefully the discussions sparked some new ways of thinking, if only a little bit.

Agile Testing book club: Courage

This is the third part in my series highlighting lessons I’m taking away from reading Agile Testing by Lisa Crispin and Janet Gregory. Other entries in the series can be found here.

Chapter 4 is largely about transitioning non-agile processes to an agile workflow. There’s a lot here that would have been useful to me a couple years ago, but these days my work is great about not imposing cumbersome processes. Nonetheless, there was one passage that stood out:

On Courage

Courage is especially important. Get up and go talk to people; ask how you can help. Reach out to team members and other teams with direct communication. Notice impediments and ask the team to help remove them.

— Agile Testing, Lisa Crispin & Janet Gregory, Chapter 4

Often I find this one of my biggest personal challenges. Good communication is one of the most important elements of working as a team, and that means talking to people directly. I consider myself an introvert, and though I’m happy stepping up to lead a conversation when needed, it can be very easy to find excuses to avoid it. Sometimes I don’t want to bother someone or intrude, sometimes I’d rather avoid hashing out a point of disagreement.

There are two main ways I try to overcome this.

Don’t borrow the jack

Some time ago, my husband told me a story about a man who got a flat tire out on a country road. According to Oprah.com, the story originated with Danny Thomas in the 1950s, and quite a few versions have been nearly copy-pasted around the blogosphere already. The story goes that the man needed to borrow a jack to put his spare tire on. As he walked up to the closest farmhouse, he started imagining increasingly bad scenarios about what might happen. The farmer will probably demand money, get upset at being interrupted at the late hour, or just generally be an asshole. By the time the man gets to the front door, he’s worked himself up so much that he knows to expect the worst. When the farmer answers the knock, the man just shouts “You can keep your damn jack!” and storms off.

The moral of the story is to be careful about imagining worst-case scenarios and getting angry about hypotheticals. It’s this kind of thinking that leads to avoiding communication out of a desire to avoid conflict, even if the conflict is imaginary. Give the guy a chance and embrace the benefit of the doubt.

“Don’t borrow the jack” has now become a shorthand between my husband and me to warn ourselves about imagining the worst. I try to remind myself of that when I start getting into the mindset that I’d rather avoid talking to someone directly. Overcoming that mindset when it does set in can take a lot of courage.

Like going to the gym

Summoning the courage to get up and talk to someone isn’t always about overcoming conflict avoidance. Lisa and Janet point out an example in Chapter 5 of how the processes in place can make it even more difficult:

Defect tracking systems certainly don’t promote communication between programmers and testers. They can make it easy to avoid talking directly to each other.

— Agile Testing, Lisa Crispin and Janet Gregory, Chapter 5

Too true. Written communication channels like defect trackers, documentation systems, and code review platforms have their purpose, but they’re also the easiest excuse to avoid conversation. A comment that provides background, context, or a clarification is great. One that continues a back-and-forth debate isn’t; go talk to the person and then come back to comment so there’s a record of the outcome. Five minutes talking in person could easily resolve something that would take hours or days in writing. It can just as easily highlight a bigger issue at play that needs to be worked out.

The point is, as much as I might want to avoid it at times, I almost always come out of a face-to-face conversation happier for having done it. It’s like going to the gym. I may not want to, but just getting up and going for an hour is easier than agonizing over it all day, and is always worth it in the end.

Agile testing does, I think, ask more of us introverts than a documentation-heavy waterfall style. It can take courage to get up and talk to someone. Just don’t borrow the jack, and it’ll be worth it.

My first game of TestSphere

Today (as I write this, last week as it is published) I had my first experience playing TestSphere. I’ve had a deck for ages but only recently suggested trying to play it with the QA community of practice in my department. Going from never having played it at all to facilitating a session with a whole group was quite a leap, and I wasn’t at all sure how it would go. Here are some of my observations about the experience.

TestSphere cards laid out on a table

Seven thoughts about TestSphere

1. Ten’s a crowd: The weekly meeting of the group usually has anywhere from 4 to 16 people attending, with the typical number around 12. I planned on playing the standard game, which the box says is best for 4 to 8 people. I was prepared to split us into two groups if needed, but in the end tried playing with the full group of 10 that came that day.

2. One for all or a bunch for each: The instructions say to reveal one or more cards depending on the experience level of the group, though it’s not clear to me which way those should correlate. I decided to go with one card of each colour so there would be a variety of types of things to think about. This turned out to be exactly the wrong number. Though I deliberately put us at a small table, people still had to pick up cards from the middle to read them. As soon as we started, 5 people were reading cards and 5 people were doing nothing. Should I do this again, I would try one extreme or the other: 1 or 2 cards that the whole group could focus on together, or 3-5 cards each to think about independently, with people playing cards from their own hands. In the latter case I can imagine combo play (“I have a card that applies to that story!” or “I have an experience with that too, plus this other concept from my hand”), but let’s not get carried away.

3. Combining cards: Nobody attempted to combine multiple cards into a single story, which I thought would be part of the fun of trying to “win”. This may have just been because people were passing cards around one at a time rather than looking at them as a group. I suspect it would have been easier to combine cards with fewer people, or with a group that was already familiar with the cards.

4. Minimalism: We didn’t make use of most of the text on the cards. The examples are great and really show the amount of good work Beren Van Daele and the MoT put into designing the deck, but it was just too much to make use of in this format. While the extra text is useful to fully understand the concept, a minimal deck with just the concept, slogan, and a simple graphic might be less intimidating. (The Easter egg here is that Minimalism is one of the cards we talked about in our group today; going back and reading the card again I’m really torn by this since the examples really do illuminate it in a way the slogan alone doesn’t, and the three are so different from each other that even limiting it to one would not be quite the same.)

5. Waiting patiently: The group naturally developed a pattern of picking up new cards as soon as they came up and holding on to them until it was their turn to tell the story. I wouldn’t say that I expected it to be a raucous fight over cards and who got to tell their story first, but I didn’t expect it to be so calm and orderly either. Once or twice this resulted in someone who had picked up a card just to read it getting stuck telling a story about that card, whether they meant to or not.

6. Everybody had a story: The energy of the game varied quite a bit depending on who was speaking. Some people are just better story tellers or more comfortable with public speaking than others. Nonetheless, I was quite happy that nobody dominated the conversation too much, and by the end everybody had shared at least once. I had laid out a rule at the beginning that if two people had a story to share we would defer to whoever hadn’t spoken yet, but we only had to invoke it once.

7. My QA is not your QA: Several times I was surprised with the stories people told given the card they picked up, often struggling to see what the connection was. To me this illustrates how differently people think, which would keep this interesting to play with another group of people. Not only that, but they’ll likely work quite easily outside of QA circles. At one point we had only one person left who hadn’t collected any cards yet. “I’m a developer,” he said, “I only have developer stories.” But when prompted he was able to pick up a card just as easily as anybody else.

The forgotten debrief

In the end, we shared about 15 stories in 50 minutes. Overall I think it was a good experience, and it was a neat way to hear more about everybody’s experiences on other teams. Unfortunately I didn’t manage time well and we got kicked out of the meeting room before I had a chance to debrief with anybody about their experience with the game. Some ideas for focus questions I had jotted down (roughly trying to follow an ORID model) were:

  1. What are some of the concepts and examples that came up on the cards?
  2. Were there concepts someone else talked about that you also had a story for? Were any concepts totally new to you?
  3. Did anything surprise you about the experiences others shared? What did you learn about someone that you didn’t know before? What did or didn’t work well about this experience?

and finally:

  4. Would you play again?

Testing like you’re laughing

I was in a brainstorming meeting recently. The woman running the meeting started setting up an activity by dividing the board into several sections. In one, she wrote “Lessons Learned” and in a second she wrote “Problem Areas”. The idea was that we’d each come up with a few ideas to put into each category and then discuss.

I immediately asked, “What if one of the lessons I learned is that we have a problem area?”

To her credit, she gave a perfectly thoughtful and reasonable answer about how to differentiate the two categories. The details don’t matter; what was important was that others in the room started joking that, as the “only QA” in the room, I immediately started testing her activity and trying to break it. This was all in good fun, and I joked along saying “Sorry, I didn’t mean to be hard on you right away.”

“You’re the QA,” she said, “It’s your job!”

This tickled an idea in the back of my mind but it didn’t come to me right away. Later that day, though, I realized what the answer to that should have been:

“As long as I’m QA-ing with you, not at you.”


Footnote: There’s nothing significant in the use of “QA” over “testing” here; I’m using “QA” only because that’s the lingo used where I am. It works just as well if you replace “QA” with “tester” and “QA-ing” with “testing”, whether or not you care about the difference.