How We Use Lean Stack for Innovation Accounting

I introduced Lean Stack in my last two posts – Part 1 and Part 2. This post is a follow-up on how we are using Lean Stack today as our Innovation Accounting framework.

What is Innovation Accounting?

Innovation Accounting is a term Eric Ries coined in his book, The Lean Startup:

To improve entrepreneurial outcomes, and to hold entrepreneurs accountable, we need to focus on the boring stuff: how to measure progress, how to set up milestones, how to prioritize work. This requires a new kind of accounting, specific to startups.

Innovation Accounting effectively helps startups to define, measure, and communicate progress. That last part is key.

The true job of entrepreneurs is systematically de-risking their startups over time through a series of conversations. Success lies at the intersection of these conversations and each has a specific function and protocol.

For example,

  • with customers, we first use interviews and observation techniques to inform our problem understanding, then follow up with an offer and MVP to test our solution.
  • with investors, we first use pitches to inform our Business Model understanding, and then use periodic board meetings to update that understanding.

Today, I’d like to specifically focus on the conversations we have with our teams.

Experiments are Where the Action’s At

Your initial vision and implementation strategy go through lots of thrashing early on (as they should), but after a while they (should) start to stabilize.

The goal of a Lean Startup is to inform our riskiest business model assumptions through empirical testing with customers – not rhetorical reasoning on a whiteboard.

The focus then shifts more towards empirical validation of your vision and strategy through experiments.

Even though running experiments is a key activity in Lean Startups, correctly defining, running, and tracking them is quite hard.

Here are a few key points to keep in mind:

Experiments are additive versus standalone

There is a natural tension between keeping experiments small and fast, and the expectation of uncovering big insights. The key is realizing that most experiments aren’t standalone.

You will probably never run a single experiment that will remove all risk from your business model in one fell swoop. Rather, it’s more likely that you will incrementally mitigate risks through a series of small experiments.

Every experiment needs to be falsifiable and time-boxed

From the Scientific Method, we know that experiments need to be falsifiable (written as statements that can be clearly proven wrong) in order to clearly declare them validated or invalidated.

I additionally recommend time-boxing experiments so that even when the falsifiable hypotheses have not been met, they are still brought up periodically for review. This is to short-circuit our default tendency to wait “just a little longer” when we don’t get the results we expected.

Time is the scarcest resource in a startup.
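The two rules above can be captured in a small sketch: an experiment carries a falsifiable hypothesis and a hard time-box, and the time-box forces a review even when the results are inconclusive. This is a minimal illustration (the class and field names are my own, not part of Lean Stack):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Experiment:
    """Illustrative record for one experiment."""
    hypothesis: str      # a falsifiable statement, e.g. "30% of signups activate"
    started: date
    time_box: timedelta  # hard deadline for review, regardless of results

    def is_expired(self, today: date) -> bool:
        """True once the time-box has elapsed -- the experiment must be
        brought up for review even if the hypothesis hasn't been met yet."""
        return today >= self.started + self.time_box

exp = Experiment(
    hypothesis="At least 30% of new signups activate within 7 days",
    started=date(2012, 6, 1),
    time_box=timedelta(weeks=2),
)
print(exp.is_expired(date(2012, 6, 10)))  # False: still inside the time-box
print(exp.is_expired(date(2012, 6, 15)))  # True: review it now
```

The point of the expiry check is exactly the short-circuit described above: the review happens on the calendar's schedule, not when we finally get the numbers we were hoping for.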

Breakthrough insights are usually hidden within failed experiments

I find that many entrepreneurs get depressed when their experiments fall flat. They end up at a loss for what to do next or they make too drastic a course correction – justifying it as a pivot (a change in strategy).

A pivot that isn’t grounded in learning is simply a disguised “see what sticks” strategy.

Failed experiments are not only par for the course; they should be embraced as gifts. At Toyota, the lack of problems is considered a problem, because it is from deep understanding of problems that true learning and continuous improvement emerge.

There is no such thing as a failed experiment, only experiments with unexpected outcomes.
– Buckminster Fuller

When an experiment fails, rather than simply declaring failure and/or using a pivot as an excuse, dig deeper instead. Search for the root cause behind the failure using techniques like 5 Whys, follow-up interviews, lifecycle messaging, etc.

There is a reason the hockey-stick curve is largely flat at the beginning. It’s not because founders are dumb or not working hard, but because uncovering a business model that works starts with lots of things that don’t.

It’s hard to be disciplined about time-boxing experiments, which is why we have established a regular reporting cadence that we use with both internal and external stakeholders.

Establishing a Regular Reporting Cadence

We utilize daily, weekly, and monthly standup meetings described below:

The Daily Standup
Our daily standups are structured around communicating progress on individual tasks and blocking issues. We use a separate online task board outside the Lean Stack that is broken into various sections (swim-lanes). Most tasks are directly tied back to experiments currently underway. Others are grouped more generally into sections such as bug fixes, code refactoring, writing blog posts, etc.

The Weekly Standup
Our weekly standups are structured around communicating progress on current experiments and defining new experiments. We start on the Validated Learning Board and work our way backwards from right to left. We first discuss experiments that completed (either successfully or unsuccessfully), ran past their time-box (expired), or got blocked.

Each of these discussions needs to end with a clear next action:

  • If an experiment failed, expired, or is at risk, the next action is scheduling a task to determine why. Once we determine why, the corresponding Strategy/Risks board and Lean Canvas are updated (if applicable), and a new follow-on experiment is defined.
  • If an experiment passed, the next action is determining whether the underlying risk we set out to mitigate was completely eliminated. If not, a follow-on experiment is defined.
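The routing rules above are mechanical enough to sketch as code. This is an illustrative summary of the decision logic, assuming hypothetical status names ("failed", "expired", "at_risk", "passed"), not an actual Lean Stack artifact:

```python
def next_action(status: str, risk_eliminated: bool = False) -> str:
    """Route a reviewed experiment to its next action, following the
    weekly-standup rules (status names are illustrative)."""
    if status in ("failed", "expired", "at_risk"):
        # Dig into why before touching strategy: schedule a root-cause task,
        # update the Strategy/Risks board and Lean Canvas if applicable.
        return "schedule root-cause task, then define a follow-on experiment"
    if status == "passed":
        if risk_eliminated:
            return "move on to the next riskiest assumption"
        # A pass that only partly mitigates the risk still needs a follow-on.
        return "define a follow-on experiment for the remaining risk"
    raise ValueError(f"unknown status: {status}")

print(next_action("expired"))
print(next_action("passed", risk_eliminated=False))
```

Note that every branch ends in a concrete next action; no discussion is allowed to trail off without one.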

The conversation so far is grounded entirely on empirical learning following the additive rule of experiments.

We also spend some time discussing any recurring peripheral customer issues and/or feature requests. The level of customer pull is quickly gauged against our current key metric focus, and a decision is made to either initiate a “Problem Understanding” initiative or table the issue for now.

A common trap in a startup is overcommitting one’s resources and always being in a state of motion (building too much and/or constantly fire-fighting).

“The only place that work and motion are the same thing is the zoo where people pay to see the animals move around.”
– Taiichi Ohno (paraphrased)

We instead strive to build slack into our schedule – affording us room for continuous improvement. We accomplish this using Kanban work-in-process limits on the Validated Learning Board to constrain the number of experiments we are allowed to run simultaneously. This further forces us to ruthlessly prioritize our next actions so that everything we do is additive and aligned with our current singular key metric focus.
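A WIP limit is simple to express in code. Here is a minimal sketch of the idea, assuming an invented `ValidatedLearningBoard` class (the real board is a visual wall, not software):

```python
class ValidatedLearningBoard:
    """Minimal sketch of a kanban-style board with a work-in-process limit.
    Class and method names are illustrative, not part of Lean Stack."""

    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.running: list[str] = []

    def start(self, experiment: str) -> bool:
        """Refuse to start a new experiment once the WIP limit is hit,
        forcing the team to finish (or kill) something first."""
        if len(self.running) >= self.wip_limit:
            return False
        self.running.append(experiment)
        return True

    def finish(self, experiment: str) -> None:
        self.running.remove(experiment)

board = ValidatedLearningBoard(wip_limit=2)
print(board.start("pricing page test"))     # True
print(board.start("onboarding interview"))  # True
print(board.start("referral experiment"))   # False -- over the limit
```

The refusal is the feature: the limit converts "should we also do X?" into "what must we finish before X?", which is where the ruthless prioritization comes from.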

The Monthly Standup
Our monthly standups are structured around communicating progress on the overall business. We compile macro financial and innovation accounting metrics along with a one-page progress report (lessons learned) on the previous month. This is our version of a “pivot or persevere” meeting like the kind Eric describes in his book.

The output of this meeting is also shared with our advisors whose feedback is used to inform our strategic direction.

Applying A3 Thinking

While the Lean Stack does a great job of visually communicating progress, post-it note sized summaries don’t do justice to the complexity of experiment design and progress communication.

To overcome this shortcoming, I borrowed another page from the Toyota playbook – the A3 report.

As you can probably tell by now, I am a huge fan of one-page formats.

The A3 report is a one-page format Toyota developed for solving problems, describing plans, and communicating progress. The name A3 comes from the international paper size, which also happened to be the largest paper size fax machines could transmit. Nowadays, Toyota uses the more universal A4 size, but the original name stuck.

Here is what our one-page experiment report looks like:

When we commit to run an experiment, the experiment is assigned an owner (usually the initiator) who starts by filling in the left-hand side of the report. As the experiment progresses, data from the experiment is filled in on the right-hand side. And when the experiment ends, the validated learning section is filled in with a clearly stated next action.
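The lifecycle described above maps naturally onto a data structure: the left-hand side is set up front, the right-hand side accumulates as the experiment runs, and the learning section is completed at the end. This is an illustrative sketch only (the real report is a visual one-page template, and all field names here are my own):

```python
from dataclasses import dataclass, field

@dataclass
class A3ExperimentReport:
    """One-page experiment report, sketched as a data structure."""
    owner: str
    # Left-hand side: filled in when the experiment is committed.
    hypothesis: str
    plan: str
    # Right-hand side: filled in as the experiment progresses.
    observations: list[str] = field(default_factory=list)
    # End of experiment: validated learning plus a clear next action.
    validated_learning: str = ""
    next_action: str = ""

report = A3ExperimentReport(
    owner="initiator",
    hypothesis="A 14-day trial converts better than freemium",
    plan="Split new signups 50/50 for 30 days",
)
report.observations.append("Week 1: trial cohort activates faster")
```

Keeping the three sections distinct mirrors the time-boxed workflow: a report with an empty `next_action` is, by definition, an experiment that has not yet been properly closed out.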

We use additional variations of the one-page A3 report for capturing new feature initiatives (MMFs), risks, and monthly progress (lessons learned).

I know what you’re thinking. That’s way too much process for a startup. Surely, it will get in the way of getting real work done.

Like you, I am averse to needless process. Yes, it’s way faster to iterate in your head alone, but I can tell you from first-hand experience that the “mental leaps” approach is hard to scale over time, and especially across a team larger than two.

The A3 report is less of a template and more a way of crystallizing and visualizing one’s thinking.

Like the Lean Canvas, the A3 report is deceptively inviting to create but the one-page constraint forces a level of conciseness that cuts out all the noise.

The format of the report itself is rooted in the Deming PDCA cycle (Plan-Do-Check-Act) which has lots of parallels to the Build-Measure-Learn loop and the Iteration Meta-Pattern.

A3 reports become archives for your company learning
Our goal with these reports is not just to crystallize current thinking but also to archive learning in an accessible form for future use.

The ability to play back experiments through these reports not only communicates learning to new team members but also demonstrates the modus operandi of how we work and think.

Putting It Into Practice

Last time, I described why and how we implemented the Lean Stack MVP using physical posters. I am still a huge proponent of a physical card wall. The card wall serves as an effective progress radiator (even from 20 feet away) and fosters great in-person discussion.

But the biggest challenge we have had is keeping the card wall synchronized across our geographically distributed team. We needed an online solution and tried cobbling one together using existing online kanban tools, but they all fell short – mainly for their lack of swim-lane support.

In the end, we came up with a simple and elegant solution built with Keynote and Dropbox that far exceeded our original expectations.

The shared Keynote document holds master templates for all the Lean Stack boards and A3 reports. Adding/moving cards on the board is dead-simple through drag-and-drop. Using hyperlinks, we were able to easily build in click navigation which makes the document behave like an app when in presentation mode. But the biggest benefit was being able to capture all this within a single portable document. We named this document the “Spark59 Playbook” because it captures our Vision, Strategy, and ongoing Product Experiments all in one place.

Like the posters, we are making our playbook template available for early access along with additional tutorial video content and an invitation to participate in the evolution of Lean Stack.

Click for more details.
