CAST 2018 Debrief

Last week I was lucky enough to attend the Conference of the Association for Software Testing, CAST 2018. I had been to academic conferences with collaborators before, and a local STAR conference here in Toronto, but this was my first time travelling for a professional conference in testing. The experience ended up being quite trying, and I learned as much about myself as about testing. I don’t feel the need to detail my whole experience here, but I will highlight the top 5 lessons I took away from it.

1. “Coaching” is not what I hoped it was

I’ve been hearing a lot about “coaching” as a role for testers lately. I went to both Anne-Marie Charrett‘s tutorial and Jose Lima‘s talk on the subject thinking that it was a path I wanted to pursue. I went in thinking about using it as a tool to change minds, instill some of my passion for testing into the people I work with, and build up a culture of quality. I came away with a sense of coaching as more of a discussion method, a passive enterprise available for those who want to engage in it and useless for the uninterested. I suspect those who work as coaches would disagree, but that was nonetheless my impression.

One theme that came up from a few people, not just the speakers, was a distinction between coaching and teaching. This isn’t something I really understand, and it is likely part of why I was expecting something else from the subject. I taught university tutorials for several years and put a lot of effort into designing engaging classes. To me, what I saw described as coaching felt like a subset of teaching, a particular style of pedagogy, not something that stands in contrast to it. Do people still hear “teaching” and think “lecturing”? I heard “coaching testing” and expected the broader mandate of education and public outreach that I associate with “teaching”.

Specifically, I was looking for insight on breaking through to people who don’t like testing, and who don’t want to learn about it, but very quickly saw that “coaching” wasn’t going to help me with that. At least not at the level we were able to get into within one workshop. I am sure that this is something that would be interesting to hash out in a (meta) coaching session with people like Anne-Marie and Jose, even James Bach and Michael Bolton: i.e. people who have much more knowledge about how coaching can be used than I do.

2. I’m more “advanced” than I thought

My second day at the conference was spent in a class billed as “Advanced Automation” with Angie Jones (@techgirl1908). I chose this tutorial over other equally enticing options because it looked like the best opportunity for something technically oriented, and would produce a tangible artefact — an advanced automated test suite — that I could show off at home and assimilate aspects of into my own automation work.

Angie did a great job of walking us through implementing the framework and justifying the thought process each step of the way. It was a great exercise for me to go through implementing a Java test suite from scratch, including a proper Page Object Model architecture and a TDD approach. It was my first time using Cucumber in Java, and I quite enjoyed the commentary on hiring API testers as we implemented a test with Rest-Assured.
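For anyone unfamiliar with the Page Object Model idea, here is a minimal sketch of the pattern. It is not Angie’s code: the page names, locators, and the stand-in Driver interface (used in place of a real Selenium WebDriver so the example runs without a browser) are all my own illustrative choices.

```java
public class PageObjectSketch {

    // Stand-in for Selenium's WebDriver, so the sketch is self-contained.
    interface Driver {
        void type(String locator, String text);
        void click(String locator);
        String readText(String locator);
    }

    // Each page object owns its locators and exposes intent-level actions,
    // so tests read as user behaviour rather than raw element manipulation.
    static class LoginPage {
        private final Driver driver;
        LoginPage(Driver driver) { this.driver = driver; }

        HomePage logIn(String user, String password) {
            driver.type("#username", user);
            driver.type("#password", password);
            driver.click("#submit");
            return new HomePage(driver); // navigation returns the next page object
        }
    }

    static class HomePage {
        private final Driver driver;
        HomePage(Driver driver) { this.driver = driver; }
        String welcomeBanner() { return driver.readText("#banner"); }
    }

    // A fake driver that records interactions, standing in for a browser.
    static class FakeDriver implements Driver {
        final StringBuilder log = new StringBuilder();
        public void type(String locator, String text) { log.append("type ").append(locator).append("; "); }
        public void click(String locator) { log.append("click ").append(locator).append("; "); }
        public String readText(String locator) { return "Welcome, tester!"; }
    }

    public static void main(String[] args) {
        FakeDriver driver = new FakeDriver();
        HomePage home = new LoginPage(driver).logIn("tester", "s3cret");
        System.out.println(home.welcomeBanner());
    }
}
```

The payoff of the pattern is that when a locator changes, only the page object changes; the tests themselves keep reading like a user story.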

Though I did leave with that tangible working automation artefact at the end of the day, I found a reverse-Pareto principle at play, with 80% of the value coming from the last 20% of the time. This is what led to my takeaway that I might be more advanced than I had thought. I still don’t consider myself an expert programmer, but I think I could have gotten a lot further had we started with a basic test case already implemented. Interestingly, Angie’s own description for another workshop of hers says “It’s almost impossible to find examples online that go beyond illustrating how to automate a basic login page,” though that’s the example we spent roughly half the day on. Perhaps we’ve conflated “advanced” with “well designed”.

3. The grass is sometimes greener

In any conference, talks will vary both in quality generally and in how much they resonate with any attendee specifically. I was thrilled by John Cutler‘s keynote address on Thursday — he struck many chords about the connection between UX and testing that align very closely with my own work — but meanwhile Amit Wertheimer just wrote that he “didn’t connect at all” to it. I wasn’t challenged by Angie’s advanced automation class, but certainly others in the room were. This is how it goes.

In a multi-track conference, there’s an added layer: there are other rooms you could be in that you might get more value from. At one point, I found myself dragged down by a feeling that I was missing out on better sessions on the other side of the wall. Even though there were plenty of sessions where I know I was in the best room for myself, the chatter on Twitter and the conference Slack workspace sometimes painted a picture of very green grass elsewhere. Going back to Amit’s post, he called Marianne Duijst‘s talk about Narratology and Harry Potter one of the highlights of the whole conference, and I’ve seen a few others echo the same sentiment on Twitter. I had it highlighted on my schedule from day one, but at the last minute I was enticed by the lightning talks session. I got pages of notes from those talks, but I can’t help but wonder what I missed. Social media FOMO is real, and it takes a lot of mental energy to break out of that negative mental cycle.

Luckily, the flip side of that kind of FOMO is that asking about a session someone else was in, or gave themselves, is a great conversation starter during the coffee breaks.

4. Networking is the worst

For other conferences I’ve been to, I had the benefit either of going with a group of collaborators I already knew or of being a local, so I could go home at 5 and not worry about dinner plans. Not true when flying alone across the continent. I’ve always been an introvert at the best of times, and I had a hard time breaking out of that to go “network”.

I was relieved when I came across Lisa Crispin writing about how she similarly struggled when she first went to conferences, although that might have helped me more last week than today. Though I’m sure it was in my imagination just as much as it was in hers at her first conference, I definitely felt the presence of “cliques” that made it hard to break in. Ironically, those that go to conferences regularly are less likely to see this happening, since those are the people that already know each other. Speakers and organizers even less so.

It did get much easier once we moved to multiple shorter sessions in the day (lots of coffee breaks) and an organized reception on Wednesday. I might have liked an organized meet-and-greet on the first day, or even the night before the first tutorial, where an introvert like me can lean a bit more on the social safety net of mandated mingling. Sounds fun when I put it like that, right?

I eventually got comfortable enough to start talking with people and go out on a limb here or there. I introduced myself to all the people I aimed to and asked all the questions I wanted to ask… eventually. But there were also a lot of opportunities that I could have taken better advantage of. At my next conference, this is something I can do better for myself, though it also gives me a bit more sensitivity about what inclusion means.

5. I’m ready to start preparing my own talk

Despite my introverted tendencies I’ve always enjoyed teaching, presenting demos, and giving talks. I’ve had some ideas percolating in the back of my mind about what I can bring to the testing community and my experiences this week — in fact every one of the four points above — have confirmed for me that speaking at a conference is a good goal for myself, and that I do have some value to add to the conversation. I have some work to do.

Bonus lessons: Pronouncing “Cynefin” and that funny little squiggle

Among the speakers, as far as notes-written-per-sentence-spoken goes, Liz Keogh was a pretty clear winner by virtue of a stellar lightning talk. Her keynote and the conversation we had afterward, however, are where I picked up these bonus lessons. I had heard of Cynefin before but always had two questions that never seemed to be answered in the descriptions I had read, until this week:

A figure showing the four domains of Cynefin

  1. It’s pronounced like “Kevin” but with an extra “N”
  2. The little hook or squiggle at the bottom of the Cynefin figure you see everywhere is actually meaningful: like a fold in some fabric, it indicates a change in height, with the obvious/simple domain in the lower right sitting above a drop from which you can fall into the chaotic domain in the lower left.

How I got into testing

In my first post, I talked a bit about why I decided to start this blog. I often get asked how I ended up in testing given my previous career seems so different, so I thought I would step back a few years and talk about what made testing such a good fit for me.

Before my first job in software testing, this is where I used to work:

The Giant Metrewave Radio Telescope

Or at least, that’s where I worked a few weeks out of the year while I was collecting data for my research. Before software testing, I was an astrophysicist.

My research involved using the Giant Metrewave Radio Telescope — three antennas of which are pictured above — to study the distribution of hydrogen gas billions of years ago. I was trying to study the universe’s transition from the “Dark Ages” before the first stars formed to the age of light that we know today. Though I didn’t know that what I was doing had anything to do with software testing (or even that “software testing” was its own thing), this is where I was honing the skills that I would need when I changed careers. There are two major reasons for that.

To completely oversimplify, the first reason was that I spent a lot of time dealing with really buggy software.

Debugging data pipelines

At the end of the day we were trying to measure one number that nobody had ever measured before using methods nobody had ever tried. That’s what science is all about! What this meant on a practical level was that we had to figure out a way of recording data and processing it using almost entirely custom software. There were packages to do all the fundamental math for us, and the underlying scientific theory was well understood, but it was up to us to build the pipeline that would turn voltages on those radio antennas into the single temperature measurement we wanted.

With custom software, of course, comes custom bugs.

A lot of the code was already established by the research group before I took over, so I basically became the product owner and sole developer of a legacy system without any documentation (not even comments) on day one, and was tasked with extending it into new science without any guarantee that it actually worked in the first place. And believe me, it didn’t. I had signed up for an astrophysics program, but here I was learning how to debug Fortran.

I never got as far as writing explicit “tests”, but I certainly did have to test everything. Made a change to the code? Run the data through again and see if it comes out the same. Getting a weird result? Put through some simple data and see if something sensible comes out. Your 6-day long data reduction pipeline is crashing halfway through one out of every ten times? Requisition some nodes on the computing cluster, learn how to run a debugger, and better hope you don’t have anything else to do for the next week. If I didn’t find and fix the bugs, my research would either be meaningless or take unreasonably long to complete.
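In today’s vocabulary, that “run the data through again and see if it comes out the same” check is essentially a characterization test. A minimal sketch of the idea, with made-up numbers and a tolerance of my own choosing (nothing here is from the actual Fortran pipeline):

```java
public class PipelineCheck {

    // Compare a fresh pipeline run against a saved "known good" baseline,
    // allowing for floating-point noise up to the given tolerance.
    static boolean matchesBaseline(double[] baseline, double[] current, double tolerance) {
        if (baseline.length != current.length) return false;
        for (int i = 0; i < baseline.length; i++) {
            if (Math.abs(baseline[i] - current[i]) > tolerance) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Illustrative values: output saved before a code change vs. after.
        double[] baseline = {1.00, 2.50, 3.75};
        double[] current  = {1.00, 2.50, 3.7500001};
        System.out.println(matchesBaseline(baseline, current, 1e-6)); // prints true
    }
}
```

If the comparison fails after a change you believed was a pure refactor, you have either found a bug or learned something about the code: exactly the bargain I was making by hand back then.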

The second reason this experience set me up well for testing was that testing and science, believe it or not, are both all about asking questions and running experiments to find the answers.

Experiments are tests

I got into science because I wanted to know more about how the world worked. As a kid, I loved learning why prisms made rainbows and what made the pitch of a race car engine change as it drove by. Can you put the rainbow back in and get white light back out? What happens if the light hitting the prism isn’t white? How fast does the car have to go to break the sound barrier? What if the temperature of the air changes? What happens if the car goes faster than light? The questions got more complicated as I got more degrees under my belt, but the motivation was the same. What happens if we orient the telescopes differently? Or point at a different patch of sky? Get data at a different time of day? Add this new step to the data processing? How about visualizing the data between two steps?

When I left academia, the first company that hired me actually brought me on as a data engineer, since I had experience dealing with hundreds of terabytes at a time. The transition from “scientist” to “data scientist” seemed like it should be easy. But within the first week of training, I had asked so many questions and poked at their systems from so many different directions that they asked if I would consider switching to the test team. I didn’t see anything special about the way I was exploring their system and thinking up new scenarios to try, but they saw someone who knew how to test. What happens if you turn this switch off? What if I set multiple values for this one? What if I start putting things into these columns that you left out of the training notes? What if these two inputs disagree with each other? Why does the system let me input inconsistent data at all?

I may not have learned how to ask those questions because of my experience in science, but that’s the kind of thinking you need both in a good scientist and in a good tester. I didn’t really know a thing about software engineering, but with years of teaching myself how to code and debug unfamiliar software, I was ready to give it a shot.

Without knowing it, I had always been a tester. The only thing that really changed was that now I was testing software.