
Are you misreporting your NPS?

  • Writer: Flo Graham-Dixon
  • Oct 17
  • 4 min read
©Juniper Strategy Ltd.

We spend our lives analysing restaurant performance and brand health, particularly when working on due diligence projects - either helping a client prepare for exit or helping investors evaluate a business pre-acquisition. As a result, we come across NPS a lot.


It’s long been a stat that investors and management love, ever since it was introduced by Bain & Company in a 2003 Harvard Business Review article titled “The One Number You Need to Grow.” The idea was simple: ask customers how likely they are to recommend your brand, subtract the detractors from the promoters, and you get a single number that reflects customer advocacy. Designed as a universal indicator of loyalty and growth potential, it still holds real value when used properly. It shows up everywhere: in pitch decks, board packs, and investment reviews. In the context of diligence, it’s often positioned as a proof point of customer love - something that can help drive valuation, justify growth, and reassure stakeholders.


But here’s the issue. Over the past five years or so, we’ve found that NPS is increasingly being used incorrectly - and misunderstood - by operators and investors alike. A large part of this is because review tracking platforms that scrape ratings from Google, TripAdvisor and others have started touting “NPS” scores that have nothing to do with actual NPS.


True NPS asks a specific question: “How likely are you to recommend us to a friend or colleague?” Responses are collected on a 0–10 scale, where 9–10 are promoters, 7–8 are passives, and 0–6 are detractors. You subtract the percentage of detractors from the percentage of promoters to get the Net Promoter Score, which can theoretically range from -100 to +100. It’s a registered trademark of Bain & Company, so using the term without the correct question, scale or sampling isn’t just technically wrong, it may also be legally problematic in commercial contexts.
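To make the arithmetic concrete, here’s a minimal sketch of the true calculation in Python. The function name and sample responses are illustrative, not from any survey tool; `responses` is assumed to be a list of 0–10 answers to the recommendation question.

```python
def net_promoter_score(responses):
    """Return NPS on the -100 to +100 scale."""
    total = len(responses)
    promoters = sum(1 for r in responses if r >= 9)   # 9-10
    detractors = sum(1 for r in responses if r <= 6)  # 0-6
    # Passives (7-8) count in the denominator but drop out of the numerator.
    return 100 * (promoters - detractors) / total

# Illustrative example: 50 promoters, 30 passives, 20 detractors out of 100
sample = [10] * 50 + [7] * 30 + [3] * 20
print(net_promoter_score(sample))  # 30.0, i.e. an NPS of +30
```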


What are the review platforms doing wrong? These platforms repackage star ratings into a promoter (5-star), passive (4-star), and detractor (1–3-star) framework, then calculate a score and label it “NPS.” Whilst scores may be directionally useful, they are not NPS. Calling them that misleads operators, investors, and teams.

Here’s the difference:


  • Wrong question: NPS asks about recommendation likelihood, not satisfaction. You might give your recent Pret experience a 5/5, but would you recommend it? Maybe not – because it’s everywhere, and doesn’t need recommending.

  • Wrong sample: NPS is based on structured outreach to a representative customer base. Review platforms are self-selecting – often only the happiest or angriest voices are heard. And that’s before you get to the corruptibility of review platforms (the murky world of paid reviews, etc.).

  • Wrong scale: NPS uses a 0–10 scale; online reviews use a compressed 1–5 scale. Consumers tend to round up, inflating positivity. 


So when a brand says “Our NPS is +78” and it’s based on scraped reviews, what they really mean is that 88% gave 5 stars and 10% gave 1 to 3 stars (as an example). That’s a “net review score,” not a Net Promoter Score. Let’s call it what it is – or better yet, stick with average review scores: there’s often a high correlation between the two, they’re less prone to confusion, and review scores are great to track and share with restaurant teams.
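For contrast, here’s a sketch of what the review platforms appear to be computing, using the hypothetical split from the example above (88 five-star, 2 four-star, 10 one-to-three-star reviews – illustrative counts, not real data):

```python
def net_review_score(star_ratings):
    """The mislabelled 'NPS': 5 stars = promoter, 4 = passive, 1-3 = detractor."""
    total = len(star_ratings)
    promoters = sum(1 for s in star_ratings if s == 5)
    detractors = sum(1 for s in star_ratings if s <= 3)
    return 100 * (promoters - detractors) / total

reviews = [5] * 88 + [4] * 2 + [2] * 10
print(net_review_score(reviews))  # 78.0 - the "+78" from the example above
```

Same arithmetic shape, completely different question, sample and scale – which is exactly why the two numbers shouldn’t share a name.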

If you’re using NPS, use it properly – but know that even then, it isn’t a single, comparable thing across brands. It is highly – and I mean highly – dependent on who you ask. This becomes obvious once you think about it:


  1. Ask people as they’re leaving a restaurant and you’ll hear from recent, local, often frequent customers with fresh impressions – NPS is typically +10 to +60.

  2. Ask a nationally representative group in an online survey who’ve ever visited your brand, and you’ll capture people who came once, maybe years ago, and barely remember it. NPS will typically be very low, ranging from -30 to +20.

  3. If you have a big enough budget, ask a nationally representative group in an online survey who’ve visited your brand in the last year, and you’ll get something in between – say -10 to +40.

  4. By contrast, forgo NPS altogether, mistakenly treat your net review score as NPS, and you will typically be looking at +60 to +90!


The first three options are all valid, but they yield very different results. They can correctly be called NPS, but they are not interchangeable. The first is often used for internal brand tracking – e.g. quarterly exit surveys to monitor shifts over time. The second and third are more common in diligence or benchmarking, where you want to compare brands side by side. Within the same survey and the same sample, scores are directly comparable. Even across surveys, if you use nationally representative samples and consistent customer definitions (ever visited, visited in the last 12 months, etc.), you can still draw meaningful comparisons.


This is where proper benchmarking comes in. Firms like Savanta conduct large-scale surveys where every brand is measured on the same question, same scale, same timing, same sampling method. And because the sample size is huge, they can cut it by recent visits too. That’s when you get true comparability. You don’t just see if your brand is improving – you see how it stacks up in the market.


So when one operator is worried that their NPS is only +5 and another boasts of +70, it’s easy to panic. But if the +70 comes from review data and the +5 comes from a proper benchmarking survey, the real story might be the opposite. Your competitor’s NPS might be -3 in the same study, while your net review score might be +80 – you are not comparing apples with apples.


When mislabelled or incorrect NPS scores start circulating – in the press, board meetings, investment decks, team updates – it undermines trust in the metric. Operators risk misleading themselves about brand strength, fixing the wrong things, or misjudging growth potential. 


So the next time someone quotes an “NPS,” ask: Was it based on the actual NPS question? Was it collected through a representative survey? What customer group was asked – and when? Let’s keep metrics meaningful. And let’s call things what they really are.
