
NPS that means something: tying feedback to what you actually shipped

Generic NPS is a gauge chart on a dashboard. Useful NPS tells you whether the release you cut last Tuesday made users happier or sadder — and at what confidence.

The Brily team
Founders

NPS as a practice has a reputation problem. "On a scale of 0 to 10, how likely are you to recommend..." is mocked by engineers, treated as a quarterly chore by product, and endlessly dissected by customer success teams who disagree about whether 42 is good or bad.

That reputation is fair when NPS is implemented as a detached number. It evaporates when NPS is tied to the releases and cohorts that produced it. Here's how to make that shift, and what you get when you do.

The problem with detached NPS

Most teams run NPS on a time-based schedule — an email to all active users every 60 or 90 days. The result is a score trend that moves for reasons you cannot disentangle:

  • A marketing campaign brings in a new cohort of trial users who rate you lower than long-time users. Score drops. Not your fault.
  • You ship a big feature. Existing power users love it, new users are confused. The aggregate score can move either way, and you can't tell which group drove it.
  • Seasonal effects — end-of-quarter stress in B2B, pre-holiday stress in B2C — move the score predictably but meaninglessly.

The score becomes a Rorschach. Bullish leaders read it bullishly; skeptical ones read it skeptically. Nobody updates their beliefs from it.

What "tied to releases" means

The core idea: every release you deploy should be a tagged event. Every NPS response that comes in after that event gets tagged with "user was exposed to release X". You can now compare the distribution of responses from users exposed to release X against a matched sample that wasn't yet exposed.

This is not hard statistics. It's a 2-by-2 comparison with enough sample size. The mechanics:

  1. POST a release marker from CI at deploy time: POST /releases { name, timestamp, rollout_percent }.
  2. For 14-30 days after the release, every NPS response comes in tagged with the release cohort the user is in.
  3. Your NPS dashboard shows a before/after comparison with a confidence indicator. If exposed users are clearly happier or unhappier than the baseline, you'll see it.
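For step 3, the comparison itself is small enough to sketch. Here's a minimal version in Python, treating responses as plain 0-10 integers and using a bootstrap interval as the confidence indicator; the function names, toy data, and 90% interval are our illustrative choices, not a prescribed API:

```python
import random

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def nps_shift(exposed, baseline, n_boot=10_000, seed=0):
    """NPS difference between the exposed cohort and a matched unexposed
    sample, with a bootstrap 90% interval as the confidence indicator."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        e = rng.choices(exposed, k=len(exposed))    # resample with replacement
        b = rng.choices(baseline, k=len(baseline))
        diffs.append(nps(e) - nps(b))
    diffs.sort()
    low, high = diffs[int(0.05 * n_boot)], diffs[int(0.95 * n_boot)]
    return nps(exposed) - nps(baseline), (low, high)

# Toy data: replace with responses tagged by release cohort.
exposed_scores  = [9, 10, 7, 9, 10, 6, 9, 8, 10, 9]
baseline_scores = [7, 8, 6, 9, 5, 8, 7, 10, 6, 8]
shift, (low, high) = nps_shift(exposed_scores, baseline_scores)
print(f"NPS shift: {shift:+.1f} points (90% interval {low:+.1f} to {high:+.1f})")
```

If the interval excludes zero, the release plausibly moved the score. With ten responses per cohort, as in the toy data, it usually won't; that's what "enough sample size" buys you.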

Picking when to survey

Post-release surveys are more signal-rich than calendar surveys. The useful moments:

  • 24 hours after first exposure — captures first impressions, discovery issues.
  • 7 days after first exposure — captures week-of usage experience for daily-use products.
  • 30 days after first exposure — captures settled-in experience. Comparable against the same user's pre-release state if you have one.

Don't survey the same user at all three checkpoints for the same release — you'll fatigue them and your response rate will collapse. Split the cohort randomly across the three windows.
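One way to do the split, sketched below: hash the user and release together so each user deterministically lands in exactly one window. The labels and function name are our illustration.

```python
import hashlib

WINDOWS = ("24h", "7d", "30d")

def survey_window(user_id: str, release: str) -> str:
    """Deterministically assign a user to one survey checkpoint per
    release, so nobody is surveyed at all three."""
    digest = hashlib.sha256(f"{user_id}:{release}".encode()).digest()
    return WINDOWS[int.from_bytes(digest[:8], "big") % len(WINDOWS)]

# Stable across calls: the same user always gets the same window.
assert survey_window("user_42", "2025-03-rollout") == survey_window("user_42", "2025-03-rollout")
```

Hashing instead of a random draw means any service can recompute the assignment without storing it; the survey sender, the dashboard, and a later backfill will all agree.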

What to ask beyond the number

The number itself is barely useful. The free-text follow-up is where the value is. Our recommended follow-ups:

  • For promoters (9-10) — "What's the one thing that made you rate us this high?" Captures positive signal you can amplify in product and marketing.
  • For passives (7-8) — "What's the one thing that would have made this a 9 or 10?" This is the most important cohort for product improvement.
  • For detractors (0-6) — "What's the biggest reason we're not working for you?" Plus a follow-up asking whether you can reach out.

Do not ask more than one follow-up question. Completion rates collapse after the second question.
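In code, the routing is a three-way branch on the score. A sketch using the wording above; the function name is ours:

```python
def follow_up_question(score: int) -> str:
    """Route an NPS score to its single free-text follow-up."""
    if score >= 9:   # promoters (9-10)
        return "What's the one thing that made you rate us this high?"
    if score >= 7:   # passives (7-8)
        return "What's the one thing that would have made this a 9 or 10?"
    return "What's the biggest reason we're not working for you?"  # detractors (0-6)
```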

The segmentation you absolutely need

NPS without segmentation is a single number. NPS with good segmentation is an actionable report. The segments that consistently reveal something:

  • Plan tier — detractors concentrated in the free tier are a different problem than detractors concentrated in enterprise.
  • Account age — new users rating you 5 is onboarding friction; 2-year users rating you 5 is a reliability or pricing problem.
  • Feature-flag exposure — are the users seeing experimental feature X happier or sadder than the control?
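A sketch of that breakdown, assuming responses arrive as records carrying a score plus segment fields; the field names are illustrative:

```python
from collections import defaultdict

def nps(scores):
    """% promoters (9-10) minus % detractors (0-6)."""
    return 100 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / len(scores)

def nps_by_segment(responses, key):
    """Group tagged responses by one segment field and score each group."""
    buckets = defaultdict(list)
    for r in responses:
        buckets[r[key]].append(r["score"])
    return {seg: round(nps(scores), 1) for seg, scores in sorted(buckets.items())}

responses = [
    {"plan": "free", "score": 6},
    {"plan": "free", "score": 9},
    {"plan": "enterprise", "score": 10},
]
print(nps_by_segment(responses, "plan"))  # {'enterprise': 100.0, 'free': 0.0}
```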

What not to do

Don't chase detractor comments for individual resolution and then ignore the pattern. Customer success reaching out to every detractor is a good tactic and a terrible strategy. The pattern across detractor comments is more valuable than the resolutions.

Don't publish the raw NPS score internally as a headline metric. Publish the trend, the segment breakdowns, and the verbatim quotes tied to release markers. Headline numbers invite comparison with public benchmarks, and those benchmarks come from companies whose NPS methodology is nothing like yours.

Don't gate features behind NPS score thresholds. Users figure it out and then every subsequent score is corrupted.

The actual payoff

When NPS is tied to releases, three new conversations become possible in your product team. "This release made things better by 4 points, with 85% confidence." "The redesign moved passives to promoters; it didn't move detractors." "The thing we thought would be a win was neutral, and we can stop defending it."

These are the conversations that move a product team from opinion-driven to evidence-driven. The NPS score itself is a means to that end, not the end.