In The Lean Startup, Eric Ries calls cohort-based reports “the gold standard of learning metrics.”
Let’s explore why the creator of a system that has attracted thousands of adherents around the world places so much value on cohort analysis.
According to The Lean Startup, in the early stages there is little that founders can say they know for sure about their business. They have an idea for a new product that they think people will pay for. But they don’t know whether people will value it, which types of people will value it, which aspects of it they will value, or whether they’ll value it enough to pay for it. A startup’s early days are full of uncertainty and untested assumptions. And because every startup has a limited amount of time before it must either turn a profit or die trying, The Lean Startup philosophy holds that startups must be finely tuned learning machines dedicated to testing those assumptions continuously.
In The Lean Startup world, the goal of every iteration of work should be to design and execute a test that confirms or rejects a hypothesis as quickly as possible. This is called the Build-Measure-Learn loop.
Almost all startups are good at building things. Unfortunately, this can be their downfall. It’s easy for startups to fall into the trap of believing that if they just continue to iterate on the product, eventually paying customers will come flooding in. Some put all of their attention on the Build step and don’t consistently measure anything at all.
Most startups avoid that mistake and are also good at collecting data and generating metrics. There are plenty of analytics tools that will generate a snippet of code that developers can plug into the product, and (presto!) you’re measuring things. A dashboard of colorful charts and tables is produced every day, providing a multitude of stats about the business.
All too often, however, this is where startups that don’t follow the Lean philosophy make a key mistake. They fall victim to tracking what The Lean Startup calls Vanity Metrics: aggregate numbers like total visits, page views, registrations, events, etc. that tend to increase over time and give the impression that the business is succeeding even if it really isn’t. With the numbers going up and to the right📈, it’s easy to justify each additional iteration of engineering and delude yourself into believing that success is just around the corner.
Tracking vanity metrics breaks the Learn portion of the Build-Measure-Learn loop. With a broken Learn step, startups never reject any of their assumptions. When they ship things customers don’t care about (wasting precious time and energy that could be spent building something customers would care about), they don’t realize it. The aggregate metrics they’re tracking mask the impact of their work. If a dip in the numbers does occur, it is easily attributed to something other than the startup’s own work. (Or worse yet, internal groups will blame each other.) The startup goes into the next Build iteration without any new knowledge or insight about how the product is working for customers.
In order to have a properly working Learn step, you must gather and analyze actionable and accessible metrics:
- Actionable metrics show a clear cause-and-effect relationship between the things the startup is doing and the things customers are doing as a result.
- Accessible metrics are simple enough to be understood by the entire organization.
Cohort-based reports check both of these boxes.
Cohort-based reports measure the key actions that specific groups of customers perform over time. One of the most useful ways of grouping customers is by the week or month that they were acquired. This provides a natural way to track their key actions over time and compare them to other cohorts acquired earlier or later.
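Grouping by acquisition month can be sketched in a few lines. This is a minimal, hypothetical example (the user IDs and dates are invented), not taken from any particular analytics tool:

```python
from collections import defaultdict
from datetime import date

# Hypothetical signup records: (user_id, signup_date).
signups = [
    ("u1", date(2024, 1, 5)),
    ("u2", date(2024, 1, 22)),
    ("u3", date(2024, 2, 3)),
    ("u4", date(2024, 2, 17)),
    ("u5", date(2024, 2, 28)),
]

def cohort_key(d: date) -> str:
    """Label a customer's cohort by acquisition month, e.g. '2024-01'."""
    return f"{d.year:04d}-{d.month:02d}"

# Bucket each customer into the cohort for the month they were acquired.
cohorts = defaultdict(list)
for user_id, signup_date in signups:
    cohorts[cohort_key(signup_date)].append(user_id)

for month in sorted(cohorts):
    print(month, len(cohorts[month]))
```

Once customers are bucketed this way, every later action can be attributed back to the month its actor was acquired, which is what makes cohort-to-cohort comparison possible.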
Here’s an example cohort analysis for a hypothetical startup whose key customer actions are signing up, saving a profile, and making a purchase:
Rather than showing aggregate values, this graph shows the rates at which new customers who signed up in each month completed the key actions. We can see that the rate at which new customers complete profiles is increasing at about 3% per month, but this increase is not translating into more purchases.
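The underlying calculation is simple: divide each cohort's action counts by that cohort's signups. Here is a sketch with invented counts chosen only to mirror the pattern described above (profile completion rising, purchases flat); they are not real data:

```python
# Hypothetical per-cohort counts for a startup whose key actions are
# signing up, saving a profile, and making a purchase.
cohort_signups = {"2024-01": 200, "2024-02": 240, "2024-03": 260}
cohort_profiles = {"2024-01": 80, "2024-02": 103, "2024-03": 120}
cohort_purchases = {"2024-01": 14, "2024-02": 16, "2024-03": 15}

def rates(action_counts: dict, signups: dict) -> dict:
    """Completion rate of an action for each monthly cohort."""
    return {month: action_counts[month] / signups[month] for month in signups}

profile_rates = rates(cohort_profiles, cohort_signups)
purchase_rates = rates(cohort_purchases, cohort_signups)

for month in sorted(cohort_signups):
    print(month, f"profile {profile_rates[month]:.0%}",
          f"purchase {purchase_rates[month]:.0%}")
```

Because each rate is normalized by its own cohort's size, a growing signup volume can no longer inflate the numbers the way it inflates vanity metrics.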
When combined with split-testing (aka A/B testing), which Lean startups also love, cohort-based reports can even compare how the people within a cohort who receive a specific product enhancement perform against those who don’t. This helps control for other factors such as seasonality of the business or changes to sales or marketing campaigns.
Today there are more analytics packages than ever before, and each one is racing to add more widgets, panels, and dashboards. But having more data does not mean having more insight. It’s easy to get overwhelmed by the sheer number of reports, or to misunderstand the metrics being presented to you. The definitions behind the things a report is tracking can be hard to keep straight.
Cohort-based reports are easy for everyone in the organization to understand because they report on a metric we can all identify with: people.
Each chart reflects the actions that a real person or group of real people took with your product. Everyone in the company should be able to identify with that, and tracking and improving the rate at which people move through your application should get them excited, because it directly translates to more engaged and successful customers.
Lean startups worship cohort-based reports because they provide a highly effective way to test hypotheses about the business, learn what’s working and what’s not, and translate that learning into course corrections on the path to sustainable growth.