Validated Learning with the Learn-Build-Measure Loop

Christina Entcheva

Learn-Build-Measure

Eric Ries’ Build-Measure-Learn loop focuses on sustainable innovation and lean product development. It aims to mitigate risk through validated learning.

I’m a big fan of Build-Measure-Learn, but through my experience in product management, development, and design, I’ve found it valuable to explicitly foreground and prioritize the Learn (discovery) phase.

All of Build-Measure-Learn is aimed at running experiments to gain validated learning, but it can be tempting to take the name literally and Build first, then Measure, then Learn.

By kicking off the loop with an Opportunity Assessment, we ensure that we’re informing our process with user feedback from the start.

This is my approach to validated learning; I call it Learn-Build-Measure.

Learn

Start the Learn-Build-Measure loop with an Opportunity Assessment.

An Opportunity Assessment is a one-page document detailing what problem a potential feature solves and the likely impact of the solution on the business.

Opportunity Assessments use qualitative and quantitative data to illustrate why a feature is a high-impact opportunity for the business.

Opportunity Assessments:

  • Use data to generate a Job To Be Done and construct a hypothesis
  • Quantify potential growth to the business in the hypothesis
  • Include supporting qualitative & quantitative research
  • Link to supporting research (analytics, user interviews, usability tests, etc.) so that readers can access the source data directly
  • Consider any analytics that need to be built to track key metrics

Supporting research can be compiled from a range of inputs including:

  • User interview feedback
  • Usability testing (in-person or remote)
  • User analytics
  • Feedback from product team (designers, developers, product managers)
  • Feedback from stakeholders (customer success, C-suite, finance, sales, marketing)

Opportunity Assessments are led by the Product Owner, but ideally the whole team participates in the process and looks for opportunities. Share Opportunity Assessments with the product team so they have visibility into what’s coming down the pike and can highlight supporting research or call out concerns.

The Product Owner looks for opportunities on a daily basis, using all available data inputs.

If you use Trello boards for project management like we do, a completed Opportunity Assessment can be turned into a Trello card and moved into the Discussion column.

Discuss the Opportunity Assessment during your Iteration Planning Meeting or weekly backlog grooming. Discuss any analytics tracking that needs to be built in order to measure key metrics in the hypothesis. Address any questions. Prioritize the Trello card as a team.

We’ll track how the hypothesis performs in the last phase, Measure.

Sample Opportunity Assessment

JTBD: When I’m looking at search results, I want to quickly see each person’s medical profession, So that I can more easily connect with the types of people I’m seeking.

Hypothesis:

  • 5% increase in MAU (monthly active users) clicking on search results
  • Of MAU who clicked, user sessions will last 5 seconds longer than in the 2 weeks prior

Supporting Research:

  • 3 current MAU mentioned during in-person interviews that they have this Job To Be Done
  • Currently 17% of MAU use search monthly, but only 2% click on search results monthly
  • Current average monthly MAU session time is 23 seconds

Required analytics:

  • MAU (currently exists)
  • Click event on search results (currently exists)
  • Click through rate on search results (currently exists)
  • User session length (currently exists)
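
To make the required analytics concrete, here is a minimal sketch of how the key metrics in this sample assessment could be computed from a raw event log. The event names and fields below are illustrative assumptions, not the schema of any particular analytics tool.

```python
# Hypothetical raw analytics events; event names and fields are illustrative only.
events = [
    {"user_id": "u1", "type": "search"},
    {"user_id": "u1", "type": "search_result_click"},
    {"user_id": "u2", "type": "search"},
    {"user_id": "u3", "type": "session_end", "session_seconds": 23},
]

# Treat every user who generated an event this month as a monthly active user.
mau = {e["user_id"] for e in events}
searchers = {e["user_id"] for e in events if e["type"] == "search"}
clickers = {e["user_id"] for e in events if e["type"] == "search_result_click"}

pct_mau_searching = len(searchers) / len(mau) * 100
pct_mau_clicking = len(clickers) / len(mau) * 100

session_lengths = [e["session_seconds"] for e in events if e["type"] == "session_end"]
avg_session_seconds = sum(session_lengths) / len(session_lengths)

print(f"{pct_mau_searching:.0f}% of MAU searched, {pct_mau_clicking:.0f}% clicked a result")
print(f"Average session length: {avg_session_seconds:.0f}s")
```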

Build

Once you’ve completed your Opportunity Assessment, discussed analytics required to track your hypothesis’ key metrics, and addressed any questions, you’re ready to begin building!

Run your Build phase while keeping the Opportunity Assessment in mind, especially as it relates to scope: at what level of engineering effort is the potential gain to the business no longer worth it?

If your feature has the potential to bring in an extra $12,000 of revenue this quarter, but you spend $45,000 building it, is that still a win?

If your Opportunity Assessment requires an A/B test, and development work includes building two variants, building A/B test setup code, removing setup code, and removing the losing variant after the test finishes, at what point does the development work become more expensive than the potential impact to the business?
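
For a sense of what that throwaway development work might look like, here is a minimal sketch of deterministic A/B variant assignment, assuming a hypothetical hash-based scheme rather than any specific experimentation library. Setup code like this, along with the losing variant, is exactly what gets removed once the test concludes.

```python
import hashlib

def ab_variant(user_id: str, experiment: str = "search-result-profession") -> str:
    """Deterministically assign a user to variant A or B of one experiment.

    Hashing the user ID together with the experiment name keeps the
    assignment stable across sessions without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

# Variant B shows the medical profession in search results; A is the control.
show_profession = ab_variant("user-123") == "B"
```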

These scenarios are not intended to deter you or your team from experimenting, but to highlight that we want to focus on high-impact opportunities with supporting data.

Measure

Once we’ve built our feature with analytics and deployed it, it’s time to watch metrics and see how our hypothesis performs. This is the Measure phase.

The sample size you’ll need to reach statistical significance will vary greatly, and this will affect the timeline of your experiments, but a good general baseline is to watch metrics for at least two weeks before drawing conclusions about the outcome.
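
As one lightweight way to sanity-check whether an observed change is more than noise, the sketch below runs a two-proportion z-test on a click-through-rate change using only the standard library. The counts are made up for illustration; a real analysis should also account for the sample size needed to detect the lift in your hypothesis.

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a: int, users_a: int, clicks_b: int, users_b: int) -> float:
    """Return the two-sided p-value for a difference in click-through rate."""
    p_a, p_b = clicks_a / users_a, clicks_b / users_b
    pooled = (clicks_a + clicks_b) / (users_a + users_b)
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up counts: search-result clicks per MAU in the two weeks before vs. after launch.
p_value = two_proportion_z_test(clicks_a=200, users_a=10_000, clicks_b=260, users_b=10_000)
print(f"p-value: {p_value:.3f}")  # below 0.05 suggests the change is unlikely to be noise
```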

Gather your final analytics data on the same day of the week that you launched your experiment: for example, if you launched on a Tuesday morning, try to gather your final data on the Tuesday morning two weeks later.

How did the actual outcome compare to the hypothesis? Was it less successful than projected? More? Was there no change?

Analyzing how a hypothesis performed refines our understanding of the product and its users, helps us make better predictions in the future, and can inform improvements to our process and experiments.

After you’ve analyzed the outcome of your hypothesis, share it with the product team. Add the results of the experiment to the development card in Trello. Discuss it together, possibly during an IPM. Consider creating a Measure column in Trello, where cards live until findings have been discussed as a team. After discussing the outcome, move the Trello card to Done.

Discuss your findings with stakeholders and share how features are performing. Stakeholders are invested in the success of the product and have their own metrics to watch, so proactively keeping them in the loop using the same Learn-Build-Measure process builds a shared mental model of the product cycle.

Record the outcome of your experiment in a place that’s easily accessible to everyone.

That’s it! Hopefully you’ve learned something from this iteration, and can use those learnings to define future experiments.

Next, you can start the loop over again with a new Opportunity Assessment.