Unit tests versus the unit tested

I recently read the great and oft-cited article about testing microservice architectures by Cindy Sridharan over on Medium. It’s broadly applicable beyond just “microservices”, so I highly recommend giving it a read. I was struck by this passage in particular:

The main thrust of my argument wasn’t that unit testing is completely obviated by end-to-end tests, but that being able to correctly identify the “unit” under test might mean accepting that the unit test might resemble what’s traditionally perceived as “integration testing” involving communication over a network.
— Cindy Sridharan, “Testing Microservices, the Sane Way”

Articulating what you’re actually trying to test is one of the most underrated skills in testing. It seems like a simple question to ask, but it’s often forgotten. Asking “what information does this test give me that no other test does?” is a great heuristic for determining whether a test, especially an automated one, is worth keeping. It’s also a convenient go-to when evaluating whether a tester knows what they’re doing, and when trying to understand what a test does.

What interests me about this passage is that Cindy is highlighting how “the thing being tested” gets conflated with the “unit” in “unit testing”.

When I used to train new testers at my company, I would present four levels of testing: unit, component, integration, and system. Invariably, people would ask what the difference between component and integration was. Unit tests were easy because that tended to be the first (and sometimes only) kind of testing incoming devs were familiar with. System tests were easily understood as the ones at the end with everything up and running. But when are you testing a component versus an integration? It’s not obvious, so it’s no surprise that three-tiered descriptions are much more common these days.

For us, a “component” was sometimes a single service. At other times it was a well-defined subsystem of a larger program (one that probably would have been a microservice if we had built the system from scratch). Inputs and outputs were controlled directly, and it didn’t need any other services running to communicate with. We defined a “unit” as the smallest possible thing that could be tested (often a single function), and the “component” was the actual running program.

Is that different from an “integration” test of those units? Not really. Running the component is an integration of smaller units. But it was convenient for us to separate those tests from testing how separate services communicated with each other. You can ask the same question anywhere along the continuum from function to system. Is testing a class a unit test or a component test? Where do you draw the lines?
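
To make the distinction concrete, here’s a minimal sketch (every name here is hypothetical, written as pytest-style tests): the “unit” is the smallest testable piece, a single function, while the “component” is the assembled program driven through its real entry point.

```python
def parse_quantity(raw: str) -> int:
    """Smallest testable piece: a single pure function (the "unit")."""
    value = int(raw.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value


def handle_order(payload: dict) -> dict:
    """The assembled "component": the real entry point that wires the
    smaller units together, with no other services involved."""
    quantity = parse_quantity(payload["quantity"])
    return {"status": "accepted", "quantity": quantity}


def test_parse_quantity_unit():
    # Unit test: exercises one function in isolation.
    assert parse_quantity(" 3 ") == 3


def test_handle_order_component():
    # Component test: drives the program through its entry point,
    # which is itself an integration of the smaller units.
    assert handle_order({"quantity": "3"}) == {"status": "accepted", "quantity": 3}
```

Both are “just tests”; the only difference is how much of the program each one exercises.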

The confusion highlighted in the passage above, to me, comes down to different definitions of “unit”. If you want to call the thing you’re testing (the service or the behaviour) a “unit” instead of a “component”, go for it. If you want to call the communication between two services the “unit” (as Cindy’s article does), great. This ambiguity shouldn’t be an obstacle to understanding the point: you should know what you’re testing and have a reason for testing it.

And, of course, that you should be testing the most important things. Which means, for example, don’t mock away the database if your service’s main responsibility is writing to the database. The problem isn’t that Cindy is saying “unit tests should be communicating over the network,” though you might read it that way if you’re dogmatic about the term “unit test”. She’s saying “communicating over the network is important, so I’m going to prioritize that communication as the subject (unit) of my tests.”
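
To make that concrete, here’s a minimal sketch (the service, the schema, and every name here are hypothetical) of keeping the database in play by using an in-memory SQLite connection instead of a mock:

```python
import sqlite3


def save_event(conn: sqlite3.Connection, name: str) -> None:
    """The behaviour under test: the service's main job is this write."""
    conn.execute("INSERT INTO events (name) VALUES (?)", (name,))
    conn.commit()


def test_save_event_hits_a_real_database():
    # An in-memory SQLite database keeps the test fast without mocking
    # away the very thing the service exists to do.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (name TEXT)")

    save_event(conn, "signup")

    rows = conn.execute("SELECT name FROM events").fetchall()
    assert rows == [("signup",)]
```

The test is still fast, but it now fails if the write itself is wrong, which is exactly the failure we care about.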

For all these terms, it’s unlikely we’re ever going to settle on unambiguous definitions. Let’s just try to be clear about what we mean when we use them, and clear about what we’re testing with each test.

Testing is like a box of rocks

I was inspired today by Alan Page’s Test Automation Snowman. He makes good points, but let’s be honest: the model is the same as the good ol’ test pyramid. The only difference is that he’s being explicit about tests at the top of the pyramid being slow and tests at the bottom being fast. Ok, so maybe the snowman thing is a joke, but it did make me think about what might make a better visualization. I quickly doodled something on a sticky note:

A sticky note with a lot of tiny circles in the bottom third, medium circles in the middle third, and a few large circles in the top third.

If the point we want to emphasize is that UI tests are slow (and therefore take a lot of time), we should include that in the visualization! The problem with the pyramid (and the snowman) is that the big tests take up the least amount of space; the small triangle at the top makes it look like having fewer UI tests also means you do less UI testing.

It doesn’t.

At least, not proportionately. If you had an equal number of UI and unit tests, it’s a safe bet that you’re going to spend more of your time working on the UI tests.

So instead, let’s say testing is like a box of rocks. Each rock is a test, and I have to choose how to allocate the space in that box to fit all the rocks that I want. A few big rocks are going to take up a lot more space than a bunch of tiny pebbles. Unless I have a good reason why that big boulder is a lot more interesting than the hundred little rocks I could put in its place, I’m going to go for the little rocks! If I have to add a new rock (to kill a new bug, say) I probably want to choose the smallest one that’ll still do the job.

You can still think about the different levels (unit vs API vs UI, for example) if you picture the little rocks at the bottom forming a foundation for bigger rocks on top. I don’t know if rocks work like that. Just be careful not to get this whole thing confused with that dumb life metaphor.

Ok, it might not be the best model, but I’ll stick with it for now. And like Alan’s snowman, you’re free to ignore this one too.