
User Interview Feedback

Over the last few weeks I’ve conducted a bunch of user interviews. Seventeen, to be precise. I also opened up a pre-pre-pre-release version on the web so folks can play with it. Just like with the Guess the Pattern game, I instrumented the web app so I could see patterns in the usage. I am so incredibly grateful to everyone who took time to meet with me, experience the demo scenario, use the pre-pre-pre-release version, offer their feedback, and report bugs. I learned a lot from the combination of user interviews and instrumentation. Here are my top insights.

User Experience

Some of the things I learned from the way folks reacted to the demo will inform how I design scenarios moving forward. In particular, future scenarios need:

More visibility. The demo scenario had charts that updated as the simulation ran. But metrics alone aren’t enough. Multiple folks indicated they wanted greater visibility into the work flowing through the system: What is in the backlog? Are the developers busy or idle? What are they working on?

Clear distinction between decisions and context. The scenario has numerous parameters, but they’re a mix of things that can and can’t be directly controlled in the real world. So although a manager can potentially hire more developers or make stories smaller, they can’t actually dictate the number of support requests or bugs that come in per week. Future scenarios need to make the distinction between context parameters and decisions clear. (There’s a sketch of what that distinction might look like just after this list.)

In-game decisions. The demo scenario involves setting up all the conditions at the beginning, then running it. In the demos, folks were itching to change things mid-run. I hypothesize that adding in-game decision points will make scenarios more engaging. It will also reinforce the distinction between the parameters that define the context and the decisions you can actually make in the real world.

Explicit, visible, and consistent tradeoffs. Changing parameters in the demo scenario affects the outcome, but often in subtle ways and with limited penalties. For example, there is no cost for adding developers beyond increasing the scope of the release. (My intent was not to penalize hiring but rather to mirror the reality that growing a team usually comes with an expectation that the team will deliver more.) That increase in scope was too subtle: folks didn’t notice it until I pointed it out. Future scenarios need to impose a cost tradeoff where appropriate and make the tradeoffs more visible.

Criteria for winning. Although there are clear goals in the scenario (a release deadline and a desired SLA), it is still quite open-ended. Some folks found that frustrating. Most tried parameter combinations designed to “win.” Many explicitly asked, “What does winning look like?” On a related note, a few folks indicated that they really wanted a score so they could see how well they did and try to beat it on subsequent runs. Future scenarios need to provide feedback to the user, whether in the form of a score, stars, or some other indicator of relative success such as stakeholder happiness.

Replayability. The usage data shows that very few people run the scenario more than a handful of times. Something’s missing. I plan to experiment with scoring and different styles of interaction (such as adding decision points) to increase engagement.
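For the curious, here’s a rough sketch of how a scenario definition might separate context from decisions, model an in-game decision point, and compute a simple score. To be clear: every name, type, and weight below is a hypothetical illustration of the idea, not the app’s actual data model.

```typescript
// Hypothetical illustration only -- not the app's actual data model.

// Context: conditions the player can observe but not directly control.
interface ScenarioContext {
  supportRequestsPerWeek: number;
  bugsReportedPerWeek: number;
}

// Decisions: levers the player can actually pull in the real world.
interface ScenarioDecisions {
  developerCount: number;
  averageStorySizeInPoints: number;
}

// An in-game decision point: the simulation pauses at a given week and
// lets the player revise their decisions mid-run.
interface DecisionPoint {
  week: number;
  prompt: string;
  apply: (current: ScenarioDecisions) => ScenarioDecisions;
}

// One possible scoring rule: weigh making the release deadline against
// how well the team held the desired SLA (attainment from 0.0 to 1.0).
function score(metDeadline: boolean, slaAttainment: number): number {
  const deadlinePoints = metDeadline ? 50 : 0;
  const slaPoints = 50 * slaAttainment;
  return Math.round(deadlinePoints + slaPoints); // 0 to 100
}
```

One appeal of keeping the two kinds of parameters in separate types is that the UI could render context as read-only while exposing only the decisions at each decision point.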

Commercial Use Cases

Here’s the thing about business: even if you create something that people love, you might not have a commercial success on your hands. The things that people love and the things they will pay for are not always the same.

I learned this the hard way a decade ago. In 2010, I launched Agilistry Studio, a training and event space in Pleasanton, California. I dubbed it a “practice space for agile software development,” hosted my own classes there, and marketed it to other agile trainers and coaches as a training venue. Sadly, although there were many visitors and even a few ardent fans, I couldn’t book enough paying events to cover rent. By the end, all of the events in the space were free community meetups. I closed the space two years after opening the doors. It was a community success but a financial failure. (In fact, the community meetup group kept going for much longer than the studio did. Absolutely warmed my heart.)

During that same time period, I launched a website, Entaggle, as a peer-driven alternative to certifications. I wanted people to be able to give and get recognition from each other, not just from certification bodies. Entaggle garnered a lot of attention back then, and I still get asked about it occasionally even now. Sadly, I never figured out a business model that made any sense. Ultimately, Entaggle wasn’t a business; it was a feature. LinkedIn endorsements don’t provide exactly the same capabilities, but they’re pretty close.

Both Entaggle and Agilistry were beloved (to be ruthlessly fair, each by a relatively small group of people). Neither was commercially viable. Both are long gone.

I don’t want this endeavor to suffer the same fate.

That means it’s not enough for me to understand what folks think about the demo scenario. I also need to know what it would take to make people see it as something worth paying for.

When I asked the money question in the user interviews, I discovered something interesting: although a large number of folks were keen to model their own context and predict outcomes from possible decisions, very few said they would pay for such a thing. How fascinating! As an open-ended, customizable simulation, it’s a novelty but not a business. To be a business, it has to offer more compelling and concrete value.

So what would it take to make it commercially viable?

The most common suggestion was to make it the foundation for software process training. So that’s the direction I’m steering toward. Since training is a use case I already had on my list of possibilities, this isn’t so much a pivot as a narrowing of focus. That said, there are still a lot of possible directions even within the training space and much more experimentation ahead.

What’s Next

I plan to keep building out the proof-of-concept pre-pre-pre-release site. I’m currently working on revising the first scenario based on the user interview feedback. I’ll also be adding a second scenario to explore a different set of tradeoffs. Along the way, I’ll be adjusting the design to make each scenario more focused on a specific learning objective, in support of the training hypothesis.

I’ve paused the user interviews for now but will start up again when there’s enough new content. In the meantime, if you’d like access to the pre-pre-pre-release site, please email me at quack@curiousduck.io.

Stay Curious,

Elisabeth