Highlights from one day at Web Unleashed 2018

I pretended to be a Front-End Developer for the day on Monday and attended some sessions at FITC’s Web Unleashed conference. Here are some of the things I found interesting, paraphrasing my notes from each speaker’s talk.

Responsive Design – Beyond Our Devices

Ethan Marcotte talked about the shift from pages to patterns as the central element in web design. Beware the trap of “as goes the mockup, so goes the markup”: avoiding it means designing the priority of the content, not just the layout. From there it’s easier to take a device-agnostic approach, where you start with a baseline layout that works across all devices and enhance it from there based on the features each one supports. He wrapped up with a discussion of the value a style guide brings, and pointed to a roundup of tools for creating one by Susan Robertson, highlighting that centering on a common design language and naming our patterns helps us understand their intent and use them consistently.

I liked the example of teams struggling with the ambiguity in Atomic Design’s molecules and organisms because I had the same problem the first time I saw it.

Think Like a Hacker – Avoiding Common Web Vulnerabilities

Kristina Balaam reviewed a few common web scripting vulnerabilities. The slides from her talk have demos of each attack on a dummy site. Having worked mostly in the back-end until relatively recently, cross-site scripting is still something I’m learning a lot about, so despite these being quite basic, I admit I spent a good portion of this talk thinking “I wonder if any of my stuff is vulnerable to this.” She pointed to OWASP as a great resource, especially their security audit checklists and code review guidelines. Their site is a bit of a mess to navigate but it’s definitely going into my library.
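As a refresher for myself more than anything, here’s a minimal sketch (my own, not from her slides) of the kind of reflected cross-site scripting hole those demos cover and the escaping habit that closes it. The escapeHtml helper and the comment markup are purely illustrative:

```typescript
// Hypothetical example (not from the talk): rendering untrusted input.

// Vulnerable: dropping user input straight into markup lets something like
// "<img src=x onerror=alert(document.cookie)>" execute in the page.
function renderCommentUnsafe(comment: string): string {
  return `<li class="comment">${comment}</li>`;
}

// Safer: escape the characters HTML cares about before interpolating.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function renderComment(comment: string): string {
  return `<li class="comment">${escapeHtml(comment)}</li>`;
}

console.log(renderComment('<script>alert("xss")</script>'));
// <li class="comment">&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</li>
```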

As a sidenote, her slides say security should “be paramount to QA”, though verbally she said “as paramount as QA”. Either way, this got me thinking about how it fits into customer-first testing, given that it’s often something that’s invisible to the user (until it’s painfully not). There may be a useful distinction there between modern testing as advocating for the user vs having all priorities set by the user, strongly dependent on the nature of that relationship.

Inclusive Form Inputs

Andréa Crofts gave several interesting examples (along with some do’s and don’ts) of how to make forms inclusive. The theme was generally to think in spectra rather than binaries. Gender is the familiar one here, but something new to me was that you should offer the ability to select multiple options to allow a richer expression of gender identity. Certainly avoid “other” and anything else that creates one path for some people and another for everybody else. She pointed to an article on designing forms for gender by Sabrina Fonseca as a good reference. Also interesting were citizenship (there are many more legal statuses than just “citizen” and “non-citizen”) and the cultural assumptions built into the common default security questions. Most importantly: explain why you need the data at all, and talk to people about how best to ask for it. There are more resources on inclusive forms on her website.
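To make the multi-select idea concrete for myself, here’s a rough sketch of how I might model that kind of field. The wording, options, and field names are my own guesses rather than anything taken from her talk:

```typescript
// Hypothetical sketch (labels and options are my own, not from the talk):
// model a gender question as a multi-select plus free text, with the
// reason for asking stored alongside it so the UI can explain itself.

interface GenderField {
  label: string;
  whyWeAsk: string;            // surfaced next to the field, not buried in a tooltip
  options: string[];           // checkboxes, not radio buttons: several can apply
  allowSelfDescription: boolean; // a free-text "in your own words" input
  required: boolean;           // and ideally always optional
}

const genderField: GenderField = {
  label: "How do you describe your gender? (select all that apply)",
  whyWeAsk:
    "We report aggregate demographics once a year; this never appears on your profile.",
  options: ["Woman", "Man", "Non-binary", "Two-Spirit", "Prefer not to say"],
  allowSelfDescription: true,
  required: false,
};

// A submission keeps every selected option rather than collapsing to one.
interface GenderAnswer {
  selected: string[];
  selfDescription?: string;
}

const example: GenderAnswer = {
  selected: ["Non-binary", "Two-Spirit"],
  selfDescription: "genderfluid",
};

console.log(genderField.label, "->", example.selected.join(", "));
```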

Our Human Experience

Haris Mahmood had a bunch of great examples of how our biases creep into our work as developers. Google Translate, for one, treats the Turkish gender-neutral pronoun “o” differently when it’s used with historically male- or female-dominated jobs, simply because the learning algorithms were trained on historical texts. Failures in software recognizing people with dark skin were another poignant example. My takeaway: bias in means bias out.

My favourite question of the day came from a woman in the back: “how do you get white tech bros to give a shit?”

Prototyping for Speed & Scale

Finally, Carl Sziebert ran through a ton of design prototyping concepts. Emphasizing the range of possible fidelity in prototypes really helped to show how many different options there are for getting fast feedback on our product ideas. Everything from low-fi paper sketches to high-fi user stories implemented in code (to be evaluated by potential users) can help us learn something. The Skeptic’s Guide to Low Fidelity Prototyping by Laura Busche might help convince people to try it, and Taylor Palmer’s list of every UX prototyping tool ever is sure to have anything you need for the fancier stages. (I’m particularly interested to take a closer look at Framer X for my React projects.)

He also talked about prototypes as a way to choose between technology stacks, a compromise and collaboration tool, a way of identifying “cognitive friction” (repeated clicks or long pauses between actions, for example, which signal that something isn’t behaving the way the user expects), and a way of centering design around humans. All aspects that I want to start embracing. His slides have a lot of great visuals to go with these concepts.
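The cognitive friction idea seems cheap enough to try on a prototype. Here’s a rough sketch (entirely my own, not from his slides) of instrumenting a page to flag repeated clicks on the same element as a possible friction signal:

```typescript
// Hypothetical instrumentation sketch: flag "rage clicks" -- several clicks
// on the same element within a short window -- as a cheap signal that the
// prototype isn't responding the way the user expects.

const WINDOW_MS = 2000;  // how far back to look
const THRESHOLD = 3;     // clicks on one element within that window

const recentClicks = new Map<EventTarget, number[]>();

document.addEventListener("click", (event) => {
  const target = event.target;
  if (!target) return;

  const now = Date.now();
  const timestamps = (recentClicks.get(target) ?? []).filter(
    (t) => now - t < WINDOW_MS
  );
  timestamps.push(now);
  recentClicks.set(target, timestamps);

  if (timestamps.length >= THRESHOLD) {
    // In a real study this would go to whatever analytics the prototype uses.
    console.warn("Possible cognitive friction:", target, timestamps.length);
  }
});
```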

Part of the fun of being at a front-end and design-focused conference was seeing how many common themes there are with the conversation happening in the testing space. Carl mentioned the “3-legged stool” metaphor that they use at Google—an engineer, a UX designer, and a PM—that is a clear cousin (at least in spirit if not by heritage) of the classic “3 amigos”—a business person, developer, and tester.

This will all be good fodder for when I lead a UX round-table at the end of the month. You’d be forgiven for forgetting that I’m actually a tester.

99 character talk: Unautomated automated tests

Some of the testers over at testersio.slack.com were lamenting that it was quiet online today, probably due to some kind of special event that the rest of us weren’t so lucky to join. Some joked that we should have our own online conference without them, and Daniel Shaw had the idea for “99 character talks”, an online version of Ministry of Testing’s 99 second talks. Several people put contributions on Slack, and some made it into a thread for them on the MoT Club.

The whole concept reminds me of a story I once heard about Richard Feynman. As I remember it, he was asked the following: If all civilization and scientific knowledge was about to be destroyed, but he could preserve one statement for future generations, what would he say to give them the biggest head start on rebuilding all that lost knowledge? His answer: “Everything is made of atoms.”

Luckily the 99 character challenge wasn’t quite so dramatic as having to preserve the foundations of software testing before an apocalypse. So instead, for my contribution I captured a simpler bit of advice that has bitten me in the ass recently:

Automated tests aren’t automated if they don’t run automatically. Don’t make devs ask for feedback.

99 characters exactly.

But because I can’t help myself, here’s a longer version:

The problem with automated tests*, as with any test, is that they can only give you feedback if you actually run them. I have a bunch of tests for a bunch of different aspects of the products I work on, but not all of them are run automatically as part of our build pipelines. This effectively means that they’re opt-in. If someone runs them, actively soliciting the feedback that these tests offer, then they’ll get results. But if nobody takes that active step, the tests do no good at all. One of the worst ways to find a bug, as I recently experienced, is to know that you had everything you needed to catch it in advance but just didn’t.

If your automated tests aren’t run automatically whenever something relevant changes, without someone having to manually trigger them, then they aren’t automated at all. An automated test that is run manually is still a manual test. It’s one that can be repeated reliably and probably faster than a human could do it, sure, but the onus is still on a fallible bag of meat to run it. Build them into your pipeline instead, and loop the feedback back to the programmer as soon as something goes wrong. More and more lately I’m realizing that building reliable and efficient pipelines is a key skill for getting the most out of my testing.
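For what that pipeline glue might look like in practice, here’s a rough sketch of a CI step, assuming a Node toolchain. The paths, npm scripts, and branch name are placeholders for whatever your own pipeline actually uses:

```typescript
// Hypothetical CI step (paths and commands are placeholders): map the files
// changed in a push to the test suites that cover them, run those suites,
// and fail the build -- instead of waiting for someone to ask for feedback.
import { execSync } from "node:child_process";

const suitesByPath: Array<{ pattern: RegExp; command: string }> = [
  { pattern: /^api\//, command: "npm run test:api" },
  { pattern: /^web\//, command: "npm run test:ui" },
  { pattern: /^db\/migrations\//, command: "npm run test:integration" },
];

// Files touched since the main branch; many CI systems hand you this list
// directly, but a plain git diff works as a stand-in.
const changedFiles = execSync("git diff --name-only origin/main...HEAD", {
  encoding: "utf8",
})
  .split("\n")
  .filter(Boolean);

const commands = new Set(
  suitesByPath
    .filter(({ pattern }) => changedFiles.some((file) => pattern.test(file)))
    .map(({ command }) => command)
);

for (const command of commands) {
  console.log(`Running: ${command}`);
  // execSync throws on a non-zero exit code, which fails the CI job --
  // the feedback loops back to the programmer without anyone asking for it.
  execSync(command, { stdio: "inherit" });
}
```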

Now to put my own advice into practice…


* The “tests vs checks” pedants in the room will have to be satisfied for the moment with the fact that writing “automated checks” would have made the talk 100 characters and therefore inadmissible. Sorry, but there were zero other ways to reduce the character count by 1.