It’s been two weeks since my last update. Lots has happened, although not all in the form of externally visible progress.
Wishful Coding and TDD
The simulation engine is coming along nicely, but I haven't leveraged it as much as I would like because setting up scenarios has been such a pain.
My friend Davis and I have been pairing on the simulation code quite a bit lately. Over the last couple of weeks we turned our attention to the biggest pain point in creating new scenarios: metrics reporting.
Given that metrics are a critical part of most systems, you might think that emitting and aggregating statistics would be a solved problem. As always, the devil is in the details. Identifying the metrics for each scenario is easy. But each scenario needs slightly different numbers calculated in slightly different ways. As a result each scenario ended up with painfully repetitive, yet subtly and importantly different, custom metrics code. It was not obvious which part of the simulation code should be responsible for emitting stats and reporting aggregates.
Davis and I took a few runs at designing a shared metrics component. With each attempt, we ran into too many special cases. The potential for the proliferation of if statements made us shudder. So each time we abandoned our pretty-on-paper architecture and went back to the drawing board.
Eventually we decided to stop talking and start coding. We did what Abelson, Sussman, and Sussman called “Wishful Thinking” in their classic book, Structure and Interpretation of Computer Programs. We took one type of metric at a time and wrote the corresponding scenario code as though we already had the metrics API we wish we had. Then we test-drove the implementation of that API.
In the end, strict bottom-up TDD led us to create three specialized classes, each with responsibility for reporting on a specific type of metric: throughput, cycle time, and latency.
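To make the "wishful coding" step concrete, here is a hypothetical sketch of what one of those specialized classes might have looked like. None of this code appears in the post; every name (ThroughputReport, record_completion, and so on) is invented for illustration. The idea is that the scenario code calls the API we wish we had, and the class behind it gets test-driven into existence afterward.

```python
class ThroughputReport:
    """Hypothetical example of one specialized metrics class: it counts
    completed work items per reporting interval. Cycle time and latency
    would each get their own, subtly different, sibling class."""

    def __init__(self):
        self._completions = []

    def record_completion(self, timestamp):
        """Called from scenario code each time a work item finishes."""
        self._completions.append(timestamp)

    def aggregate(self, interval):
        """Return a dict of {bucket_index: completion_count}, where each
        bucket covers `interval` units of simulated time."""
        buckets = {}
        for t in self._completions:
            bucket = int(t // interval)
            buckets[bucket] = buckets.get(bucket, 0) + 1
        return buckets
```

Written three times over, for three metric types, classes like this are exactly the kind of repetitive-but-not-identical code the post describes.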
And then a magical thing happened.
The day after we finished the metrics code, we looked at the three specialized classes side by side in the editor. Suddenly the patterns in the larger whole became clear. Step by step we simplified and extracted shared responsibilities, deleting dozens of lines of code we’d only just written the day before. It took us less than an hour to refactor the three classes into one expressive and streamlined metrics class that reports on all the relevant metrics for the simulation entities in our scenarios.
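The consolidated result could plausibly have a shape like the sketch below. This is my own guess at the pattern, not the actual class from the simulation: once each metric is reduced to recording timestamped samples, throughput, cycle time, and latency differ only in what they sample, not in how samples are stored or aggregated. All names here are invented.

```python
class Metric:
    """Hypothetical unified metrics class: collects (timestamp, value)
    samples and reports simple aggregates. Throughput records a sample per
    completion; cycle time and latency record elapsed durations."""

    def __init__(self, name):
        self.name = name
        self._samples = []

    def record(self, timestamp, value=1):
        self._samples.append((timestamp, value))

    def count(self):
        return len(self._samples)

    def mean(self):
        if not self._samples:
            return None
        return sum(v for _, v in self._samples) / len(self._samples)

# Usage: throughput.record(t) per completion;
# cycle_time.record(t, finish - start) per completed item.
```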
It felt so good.
Although it might seem like a waste of time to write those three classes only to delete two-thirds of what we’d just written soon after, it was a far more efficient way to arrive at the right design. Top-down failed us. Bottom-up TDD was like crossing the river by feeling the stones. It enabled us to feel our way step by step through the problem space and find the right structure—with the right seams—that allowed everything to fall neatly into place when we were done.
Even though I already knew that TDD is a design approach, not a testing technique, I was still surprised by how clear the patterns became after we test-drove the code to the API we wish we had.
I genuinely believe that no matter how much time we spent talking through various architectures, we would not have arrived at such a clean implementation. I also do not believe we would have been able to test-drive that single class without first implementing the three variations. Paradoxically, taking the long way around in the code turned out to be the shortest path.
What I’m Reading
Since we have been working on metrics in the simulation, I’ve been reviewing Accelerate by Dr. Nicole Forsgren, Jez Humble, and Gene Kim. It is one of my favorite books. Drawing on data from hundreds of organizations, the book calls out the essential differences between mediocre and high-performing organizations. Most recently I’ve been paying particular attention to Appendix B, “The Stats.” It's a fantastic compendium of all the metrics the authors gathered in their research and the results of their statistical analysis. It's also a perfect one-stop-shopping catalog of metrics to consider adding to the simulation.
The other book I have been reading recently is much older, but I only just learned about it. A few weeks ago I was chatting with fellow ODF9 colleague Prasanna Bhogale about the simulation. He asked, “Have you read Epstein and Axtell’s Growing Artificial Societies?” I had not. I hadn’t even heard of it. Before the end of our conversation I’d ordered it.
The book was published in 1996. It is a phenomenally detailed discussion of an agent-based simulation designed to explore social structures and group behaviors: Sugarscape. Agents have a metabolism and crave sugar. They locate the nearest source and move toward it, consuming sugar as they move. Unsuccessful agents starve; successful agents accumulate a surplus.
As the book progresses, the authors expand the framework: agents don't just live and die, they can also reproduce. Next the authors introduce another food source (spice) and the concept of trade. With each new mechanic, the simulation becomes richer and the interactions more interesting. This is one of those books I can’t just read straight through. Every few pages my brain explodes and I have to pause to pick up the scattered fragments of neurons before I can continue. (That's way more fun than it sounds.)
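The basic movement rule is simple enough to sketch in a few lines. The following is my own loose rendering of the book's core loop, not code from the book or from my simulation: agents scan the cells they can see, move to the richest one, harvest it, and pay a metabolic cost. Grid size, vision, and metabolism values are arbitrary, and ties between equally rich cells are broken arbitrarily here.

```python
import random

GRID = 10  # side length of a wrapping (toroidal) grid, arbitrary for illustration

def step(agent, sugar):
    """Move one agent: scan cells within its vision along the axes,
    move to the one with the most sugar, harvest it, metabolize.
    Returns False if the agent starves (wealth drops to zero or below)."""
    x, y = agent["pos"]
    candidates = [(x, y)]
    for d in range(1, agent["vision"] + 1):
        candidates += [((x + d) % GRID, y), ((x - d) % GRID, y),
                       (x, (y + d) % GRID), (x, (y - d) % GRID)]
    best = max(candidates, key=lambda c: sugar[c])  # richest visible cell
    agent["pos"] = best
    agent["wealth"] += sugar[best] - agent["metabolism"]
    sugar[best] = 0  # the cell is harvested bare
    return agent["wealth"] > 0

# One agent taking one step on a randomly seeded landscape:
sugar = {(i, j): random.randint(0, 4) for i in range(GRID) for j in range(GRID)}
agent = {"pos": (0, 0), "vision": 3, "metabolism": 2, "wealth": 5}
alive = step(agent, sugar)
```

Even this toy version hints at why the book is so absorbing: run many agents over many steps and wealth distributions, migrations, and die-offs emerge from nothing but this local rule.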
I had the good fortune to spend some time chatting with Rob Zuber of CircleCI as a guest on his podcast The Confident Commit. I always enjoy talking with Rob and had a lot of fun doing this podcast. We talked about simulations and bottlenecks and risk, as well as the origin story for the name Curious Duck. I hope you’ll give it a listen!
My near term plans for the simulation involve some tweaking of the internals, exploring more scenarios, and starting to poke at visualizations.
Things are taking longer than I expected. Turns out getting the details of the engine mechanics right is inherently a time-consuming and painstaking process. But it’s so rewarding when it all comes together. And given our experience with the metrics code, I'm convinced that taking the long way around—spending the time needed to tweak the engine—will end up being the shorter path.