Ten years ago, unless you were at the forefront of startup management strategy, you didn’t really hear people talking about product-market fit.
“The only real definition I had seen at the time was Marc Andreessen’s, which was more like, ‘You’ll know it when you have it.’ Everything will be flying off the shelves,” said Sean Ellis, the author of Hacking Growth and a growth strategist who founded Qualaroo and has served in executive growth roles at Lookout, Dropbox and Eventbrite.
Ellis’ shorthand may be a bit of an oversimplification of the Netscape co-founder’s definition, coined in the blog post, “The Only Thing That Matters.” Still, for Ellis, who lives in Newport Beach and hosts the podcast Breakout Growth, Andreessen’s intuitive notion that “you can always feel product-market fit when it is happening. The customers are buying the product just as fast as you can make it — or usage is growing just as fast as you can add more servers,” didn’t mesh with reality.
Early in his career, as a marketing VP at Uproar and LogMeIn, Ellis recognized that product-market fit operated much more like a verb than a noun. It wasn’t enough to be “in a good market with a product that can satisfy that market,” as Andreessen described it. “We really had to iterate and tighten around a product,” Ellis said.
How would you feel if you could no longer use [this product]?”
He had success at those first two companies, which grew quickly and went public, but when he arrived at Xobni, he sensed his winning streak was about to run dry. To ease his nerves, he developed a series of customer satisfaction questions intended to serve two purposes: One, they could be used to assess how much loyalty Xobni’s customers — who tended to be managers — had to the product. Two, they could serve as a sniff test for the growth potential of future prospective employers, should his role at Xobni go away.
The key was in the phrasing: He deliberately left out the word “satisfaction,” because, as he told me, “a good manager is never satisfied.” Instead, he asked “How would you feel if you could no longer use [this product]?”
Four answer choices became the basis for a quantitative framework to assess product-market fit: (a) very disappointed, (b) somewhat disappointed, (c) not disappointed (it isn’t really that useful) and (d) not applicable (I no longer use [product]).
“I just honed in on that benchmark over time,” Ellis said. “Five of the first seven businesses I worked at reached billion-dollar valuations. So, I knew what success looked like. And what I started to see was, really, every business I worked on that did well was around 40 percent or higher.”
That was the magic number: the percentage of survey respondents who said they would be very disappointed without the product. It’s a benchmark Ellis said has been used by thousands of startups to determine their market viability.
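The arithmetic behind that benchmark is simple enough to sketch in a few lines of Python. The sample responses and the pmf_score helper below are hypothetical, invented purely to illustrate the calculation; a product clears Ellis’ bar when at least 40 percent of respondents choose “very disappointed.”

```python
from collections import Counter

# Hypothetical responses to "How would you feel if you could no longer
# use [this product]?" -- invented data, not from any real survey.
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "no longer use",
    "somewhat disappointed", "very disappointed", "very disappointed",
    "not disappointed",
]

def pmf_score(responses):
    """Share of all respondents who answered 'very disappointed'."""
    counts = Counter(responses)
    return counts["very disappointed"] / len(responses)

score = pmf_score(responses)
print(f"Very disappointed: {score:.0%}")  # 50% with this sample
print("Clears the 40% benchmark" if score >= 0.40 else "Below the 40% benchmark")
```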
Of course, his method is just one of many for calculating product-market fit, and, as he acknowledges, not necessarily the most useful after the first few months of the product journey.
Metrics for Measuring Product-Market Fit
- Sean Ellis survey method: Best suited for early stage startups, this survey question asks respondents how they would feel if they could no longer use a product. Potential answers range from “very disappointed” to “I no longer use the product.”
- Cohort retention: Good for companies with a more established customer base, cohort retention rate measures the share of customers still paying for a product after a set time interval has passed — often eight weeks.
- Net promoter score: A measure of how likely users are to recommend your product to others.
- Lifetime value to customer acquisition ratio (LTV/CAC): This metric weighs how much money you spend to acquire a customer against how much money you make from them in return.
The gold standard at later stages of development, said Danielle Cohen-Shohet, CEO of GlossGenius, a New York-based office management and payment technology provider for beauty and wellness businesses, is the cohort retention curve. The graph measures the percentage of users who continue to pay for a product eight weeks after onboarding.
Resembling a hockey stick, the downward-sloping curve should flatline above zero — ideally at or above 6 to 20 percent of users, according to Mixpanel’s 2019 Product Benchmarks Report. Otherwise, you’re paying more to acquire your users than they’re paying you to keep them.
Then there’s the net promoter score, Cohen-Shohet said, which measures how likely core users are to recommend your product to others. Does your product have the support of communities of influence who can propel its trajectory?
The lifetime value to customer acquisition cost ratio, arguably the most revenue-driven framework, is a measure of how much you make from a customer relative to how much you spend to get one.
The point is, assessing market viability is complicated. We asked a simple question: Which method is best? And we got a simple answer: it depends. What it depends on, however — the size, stage, cash reserves and growth ambitions of the business — is enlightening.
In response to an interview request, Maegan Lujan, director of solutions and services at Toshiba America Business Solutions, Inc., conducted a LinkedIn poll that generated responses from technical experts across 14 different companies. Here’s what they had to say when asked which tool is most valuable for determining product-market fit:
- The retention curve: 43 percent
- Net promoter score: 36 percent
- LTV/CAC ratio: 21 percent
We spoke with several industry experts to explain the benefits and drawbacks of each.
THE SEAN ELLIS SURVEY METHOD
Ellis said the reason his survey is effective is that it identifies a company’s core customers, so firms can build accurate user personas and marketing campaigns around them. As reported by Rahul Vohra, the founder and CEO of Superhuman, in First Round, “when Hiten Shah posed Ellis’ question to 731 Slack users in a 2015 open research project, 51 percent of these users responded that they would be very disappointed without Slack.”
That the software tool had product-market fit was borne out by Slack’s later success.
But even for companies that don’t initially meet the 40 percent threshold, the results of the survey are instructive. When Ellis was working for San Francisco-based Lookout, only 7 percent of survey respondents said they would be very disappointed if they could no longer use the product.
In light of the survey, Ellis said, the mobile security company quickly shifted its business model. It had initially placed equal value on five or six different sets of mobile apps and security software products, but it was the antivirus software that drew the most enthusiasm.
“By shining a spotlight on that, we were able to set the right expectations for what the product was able to truly deliver,” Ellis said. “By the time I left that company, they were at 60 percent. And within three years, they came out with a billion-dollar-plus valuation.”
There are rules to administering the survey, he cautions. It’s important to filter out users who register, but never use your product. Ideally, the survey should be administered to 40 users who have interacted with the product at least twice in two weeks (though this depends on the product cycle; loyal Airbnb users, for instance, don’t use the service every two weeks).
“The simplest way that I’ve done it is just, literally, pull an email list of those people and send an email survey to them. But I’ve also run it in inflow surveys embedded in the product. Mobile users tend not to respond to email very well. So then, a lot of times, you do need to prompt it in a mobile app,” Ellis said.
Whenever new customers arrive on the market, or a company expands internationally, Ellis said, it’s a good time to refresh the data, with surveys distributed about once a month until you reach product-market fit. Once you’re above the 40 percent mark, surveys can be sent once every six months.
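Ellis’ filtering rule (active users only, with at least two sessions in the past two weeks) can be expressed in a few lines. The sketch below is an illustration under those assumptions, not his actual tooling; the usage_log data and the eligible_for_survey helper are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical usage log: email address -> timestamps of product sessions.
usage_log = {
    "ana@example.com":  [datetime(2024, 5, 1), datetime(2024, 5, 9)],
    "ben@example.com":  [datetime(2024, 5, 2)],
    "cara@example.com": [],  # registered but never used the product
}

def eligible_for_survey(usage_log, as_of, window_days=14, min_sessions=2):
    """Users with at least `min_sessions` sessions in the trailing window."""
    cutoff = as_of - timedelta(days=window_days)
    return [
        user for user, sessions in usage_log.items()
        if sum(1 for s in sessions if s >= cutoff) >= min_sessions
    ]

print(eligible_for_survey(usage_log, as_of=datetime(2024, 5, 10)))
# ['ana@example.com'] -- the only user with two sessions in the past two weeks
```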
The benefit of surveys versus retention cohorts is that you can use them to understand everything about the users.”
Being conscientious about repeating the survey is important, though, as product-market fit is perishable. Marco Perry, founder of the Brooklyn-based product design consultancy PENSA, points to Evernote, whose once-distinctive voice recording features are now widely available elsewhere, as a cautionary tale of how differentiating product features can be swallowed by copycats.
“A lot of the great features that are available in Evernote are now coming for free in the latest updates to iOS or Android,” Perry said. “So their competition is the operating system.”
One of the advantages of Ellis’s survey is its speed of delivery; it can be easily administered through Typeform or similar tools. Another strength, according to Ellis, is its depth: “The benefit of surveys versus retention cohorts is that you can use them to understand everything about the users. ‘Who are they? What were they using before? Why did they decide to try the product?’” Ellis said.
COHORT RETENTION RATE
Cohort retention rate has been widely touted by industry insiders as an indicator of product-market fit. Yet, the formula requires data most startups don’t have access to in their first months, Ellis said. If you do have the data, however, like GlossGenius does, the cohort retention curve can be telling.
“We think of product-market fit as this: when the product is really good and we can attract paying customers organically, primarily through word of mouth and referrals, and shift the focus from improving the product to growing distribution channels,” Cohen-Shohet said. “When we look at what builds scalable acquisition channels, it’s having a strong cohort retention rate.”
We think of product-market fit as when the product is really good and we can attract paying customers organically.”
The rate measures the proportion of active users who continue to use a product after a set period of time: typically 14 days. If the curve tapers to a straight line above the X axis, it signals product-market fit.
At GlossGenius, the metric is based on a group of people who download the app during the same 30-day interval, Cohen-Shohet said. Because their experiences with onboarding, product interaction and customer service are roughly equivalent, they are evaluated as a group. New cohorts are introduced each month, and insights from the customer experience team, as well as analytics software such as Mixpanel’s, help improve retention over time.
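As a rough illustration of how such a retention table comes together, here is a short pandas sketch. It is not GlossGenius’ pipeline; the event data is invented, and “active” is simplified to mean appearing in the usage log during a given week after onboarding.

```python
import pandas as pd

# Hypothetical event data: one row per (cohort, user, week) in which the user
# was still an active, paying customer. Week 0 is the onboarding week.
events = pd.DataFrame({
    "cohort": ["2024-01"] * 5 + ["2024-02"] * 4,
    "user":   ["a", "a", "a", "b", "b", "c", "c", "c", "c"],
    "week":   [0, 1, 8, 0, 1, 0, 1, 4, 8],
})

# Cohort size = distinct users seen in week 0 of each cohort.
cohort_size = events[events["week"] == 0].groupby("cohort")["user"].nunique()

# Retention: share of each cohort still active in week N after onboarding.
active_users = events.groupby(["cohort", "week"])["user"].nunique()
retention = active_users.div(cohort_size, level="cohort").unstack("week").fillna(0.0)

print(retention)     # rows: cohorts, columns: weeks since onboarding
print(retention[8])  # the eight-week retention rate for each cohort
```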
Cohen-Shohet said the retention curve tends to be less biased than survey methods: “You don’t have response bias, right?” she said. “You’re getting information from all users, not just ones that have enough time and intent to fill out a survey. You’re getting full life-cycle data, not data from a specific survey at a specific point in time,” she said.
How high above the X axis the retention curve needs to flatline to indicate product-market fit is murkier. According to a report from Mixpanel, “most apps and software have a 6 to 20 percent eight-week retention rate. For products in the media or finance industry, an eight-week retention rate over 25 percent is considered elite. For SaaS and e-commerce industries, over 35 percent retention is considered elite.”
Based on analysis of Quettra’s usage statistics from over 125 million Android mobile phones, Andrew Chen, a general partner at Andreessen Horowitz and former head of rider growth at Uber, reports on his blog that the top 10 apps had a 60-day retention rate of 55 percent, the next 50 had a 60-day retention rate of 40 percent and the next 100 had a 60-day retention rate of 21 percent.
But here’s where the analysis gets interesting: the next 5,000 apps had a 60-day retention rate of 11 percent, and the average of all apps assessed at 60 days was just 7 percent.
Yet, even at that level, “you probably have some sign of product-market fit,” Ellis said. “It’s similar to the 7 percent we found at Lookout [using the Sean Ellis survey]. You just want to study that [cohort] and understand who they are.”
Analytics software can go a long way toward that, according to Perry, by assessing where in the mobile experience problems are arising.
“If somebody is onboarding on your app for the first time, and they drop out early in the process, you have to find out why,” he said. “‘Are they getting confused? Do they not find value in the time they’re spending? Are you asking for a credit card when they expected a free product?’ Analytics is very good at optimizing those tactical issues. And, ultimately, some apps win or lose by those things. Like, how fast is it to use Instagram versus its competitors?”
NET PROMOTER SCORE
If the cohort retention rate measures users’ willingness to stick with your product, the net promoter score is an indication of how likely they are to recommend it to others. It’s a good supplement to the cohort retention rate, Cohen-Shohet said, but, ultimately, a weaker signal of product-market fit.
“It’s what I would describe as the emotional response people have to your product. Do they really love it? Will they refer your product to their friends and tell people about it? Or are they not going to tell people because it’s like, meh.”
The score can be tallied with a single question, according to a report on Survey Monkey’s website: “How likely is it that you would recommend (insert company or product/service) to a friend or colleague?”
It’s what I would describe as the emotional response people have to your product. Do they really love it?”
People who rate the product six or lower are called “detractors,” the report states. Those who give the product a seven or eight are called “passives” and respondents who select a nine or 10 are “promoters.” These responses are then plugged into the formula below for a score.
NPS = percentage of promoters – percentage of detractors.
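Worked through in code, the calculation looks like this; the ratings below are invented for illustration.

```python
# Hypothetical 0-10 answers to "How likely is it that you would recommend
# [product] to a friend or colleague?"
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 3, 8]

promoters  = sum(1 for r in ratings if r >= 9)        # 9s and 10s
detractors = sum(1 for r in ratings if r <= 6)        # 0 through 6
nps = (promoters - detractors) / len(ratings) * 100   # passives (7-8) only dilute the score

print(round(nps))  # 17 with these made-up ratings
```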
According to Survey Monkey global benchmark data, among tech companies, a score from zero to 11 falls in the bottom quarter, the median net promoter score is 40, and the top quarter is 64 and above.
The San Diego-based online grocery shopping and delivery company Mercato scores around 70, said Mike Mason, a product manager at the company. But Mercato actually uses the metric to diagnose service gaps, not to assess product-market fit.
“We’ve never really seen a huge increase in the NPS just based on a feature we added,” Mason said. “It can be misleading, too, because you might have exceptional customer service but not a great product, or vice versa. It doesn’t tell the full story.”
Another issue is that people completing the survey might be end users — not the people paying for the service. That’s according to Alex Willen, who spent 10 years as a product manager at early stage enterprise SaaS startups such as Box, Blue Jeans Network and Talkdesk, before moving to San Diego to start a dog treat company, Cooper’s Treats, which he said recently achieved product-market fit.
“The reality is 90 percent of end users can hate the product, but the actual decision-maker will still renew,” Willen wrote. “Everybody hates using Salesforce but nobody churns. If you’re measuring the actual decision-makers, you’ll likely get a better idea of your product’s value to customers, but too late to be useful.”
But while the net promoter score may not tell you whether you have product-market fit, it’s a useful measure of a company’s customer experience and perceived ethos, Ellis said.
“Sometimes you can have an awesome product-market fit, but you’re a terrible business to do business with,” he said. “‘Last time I needed customer service, it took five days for the company to get back.’ So your net promoter score drops. It’s much more a reflection of the touch points in the business.”
CUSTOMER LIFETIME VALUE (LTV/CAC RATIO)
A report from Klipfolio describes one of the most powerful indicators of product-market fit, the ratio of customer lifetime value to customer acquisition cost, with these equations:
LTV = (Gross Margin % × Avg. Monthly Payment) / Churn Rate
CAC = Sales and Marketing Costs / New Customers Won
LTV/CAC ratio = LTV / CAC
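Plugging hypothetical numbers into those formulas shows how the ratio comes together. None of the figures below are Joust’s or Klipfolio’s; they are invented to make the arithmetic concrete.

```python
# Invented monthly figures, plugged into the formulas above.
gross_margin_pct    = 0.80      # 80% gross margin
avg_monthly_payment = 50.0      # average revenue per customer per month ($)
monthly_churn_rate  = 0.05      # 5% of customers cancel each month

sales_marketing_spend = 30_000.0
new_customers_won     = 120

ltv = gross_margin_pct * avg_monthly_payment / monthly_churn_rate   # $800
cac = sales_marketing_spend / new_customers_won                     # $250

print(f"LTV = ${ltv:,.0f}, CAC = ${cac:,.0f}, LTV/CAC = {ltv / cac:.1f}")
# LTV/CAC = 3.2 here, clearing the 3:1 benchmark discussed below.
```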
This might seem like a straightforward metric until you start to unpack marketing acquisition costs and define what it means to be an acquired customer. The Austin-based company Joust, for instance, arrived at its customer lifetime value by scraping data from an AI-powered risk assessment tool it uses for underwriting freelancer contracts, said Lamine Zarrad, the company’s CEO.
A user who files two invoices per month, at approximately $2,000 each, is considered an acquired customer unlikely to churn.
They began, he told me, by identifying the minimum viable product: in this case, an app that allows self-employed workers to accept credit cards or direct deposits for rendered services or products. Payment to freelancers is stored in an FDIC-insured bank account. A subsequent feature, PayArmour — for a 1 percent no-interest fee — ensures freelancers will get paid for invoices within 30 days. Same-day payment can be delivered for a 6 percent no-interest fee.
Using machine learning, Joust runs a series of multivariable tests on registered users: “Like any other bank, we ask for your name, your social security number, your address, your telephone, your email and so forth. And then we look at those attributes individually, and run models on these attributes as they relate to each other,” Zarrad said.
The system tracks user behaviors and some 1,300 credit attributes to create customer profiles and assess fraud risk. “And this is where we derive a lot of intel,” Zarrad said. For instance, “when you populate your tool within the first 24 hours with contacts, it is a signal of a good customer and positive behaviors,” Zarrad said.
The algorithm has also revealed a back door to calculating the LTV/CAC ratio: A user who files two invoices per month, at approximately $2,000 each, is considered an acquired customer unlikely to churn. Using this method, they have an LTV/CAC ratio above 3:1, an industry benchmark for successful growth that Profitwell reports means “each acquired user is worth 3x what you paid to earn their business.”
GO-TO-MARKET
Some entrepreneurial-minded skeptics, like Willen, view all these methods as lagging indicators of product-market fit: a case of the chicken and the egg.
“By the time you have enough customers to use these methods,” he wrote, “you’ve probably already signed enough deals to indicate that you do have product-market fit.”
So what do you do if, like Willen, you’re trying to launch an online dog treat business on a shoestring budget?
“There’s no better way to know if people want your product than if they’re willing to pay for it — or use it, visit it, or whatever else you’re hoping people will do,” he wrote.
Willen already had the product developed, and it was cheap to produce. For about $2,000, he was able to get packaging designed on 99designs, set up a Shopify site, do a small production run and start advertising on Google and Instagram.
There’s no better way to know if people want your product than if they’re willing to pay for it.”
If going to market without knowing you have product-market fit is prohibitively expensive or time-consuming, according to Willen, you should be inventive — find a hack that’s a cheaper way to judge people’s intent to purchase your product. For a consumer app, that might mean running ads for it and seeing if people click on them. Say you’ve got a wild idea for on-demand llama delivery. If you want to know if it will work, put up some Facebook ads and test the water.
Said Willen: “Of course, in that case people might just click out of amusement, so the further you can go to judge actual intent, the better. If you put up a landing page asking people for their email addresses to find out when llamas will be available in your city, that will give you a better signal of intent.”
In essence, there’s a tradeoff between the upfront time and cost investment you make and how strong of a product-market fit gauge you’ll get in return. If you fully build and launch your product, you’ll have a nearly irrefutable determination — the product sells well or it doesn’t — but at a high cost. If you send a quick survey to people in your network asking if they’d be interested in your product, you’re not going to get a great signal of product-market fit, but your cost will be low.
The key, according to Willen, is to find the right balance of cost versus signal quality. In some cases you might need to launch a scaled-down version of your app, while in others you might do well building a deck that lays out the product and value proposition for people in your target audience.
“With my dog treat business, the cost of launching was so low that it made sense to go ahead and just launch, but with more complex enterprise software, it’s too risky to launch without having made some effort to determine product-market fit first,” he said.