The Pricing Power Test: You're Undercharging. Here's the Proof.
The reality is that most SaaS founders have never actually tested their pricing. They picked a number at launch — often based on what a competitor charges, what felt "reasonable," or what their first advisor suggested over coffee — and they've left it there. Maybe they ran a 10% bump once and held their breath. That's not pricing strategy. That's hope.
And here's what I've noticed: founders who haven't raised prices in 18 months almost always have the same tell. When I ask them why they haven't, they say some version of "we didn't want to risk losing customers." What they don't realize is that answer already contains the diagnosis. You don't fear losing customers over a price increase if you believe your product is the obvious, irreplaceable choice. That fear is a positioning signal, not a pricing signal.
So before you touch your pricing page, let's run the test.
The Three Questions That Expose Whether You Have a Pricing Problem
I call this the Pricing Power Diagnostic. Three questions. If you can't answer all three with actual data, you don't have a pricing strategy — you have a number.
Question one: What is your logo churn by tier?
Not total churn. Not revenue churn. Logo churn, segmented by the tier the customer was on when they churned. If you're churning logos faster on your entry tier than your mid tier, you're likely underpriced at entry — which means you're attracting price-sensitive customers who will leave the moment something cheaper appears. You've built a discount club.
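The tier-level cut is a few lines of work once you have the data. A minimal sketch in pandas, with an illustrative snapshot table — the column names (`tier`, `churned`) are assumptions, not a prescribed schema:

```python
import pandas as pd

# Hypothetical one-quarter customer snapshot; swap in your own data.
# "churned" = the account left during the period; "tier" = tier at churn/period start.
customers = pd.DataFrame({
    "account_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "tier":       ["entry", "entry", "entry", "entry", "mid", "mid", "mid", "top"],
    "churned":    [True, True, False, False, False, True, False, False],
})

# Logo churn by tier: churned accounts / accounts at period start, per tier.
logo_churn = (
    customers.groupby("tier")["churned"]
    .agg(churned_logos="sum", total_logos="count")
    .assign(logo_churn_rate=lambda d: d["churned_logos"] / d["total_logos"])
)
print(logo_churn)
```

In this toy data the entry tier churns at 50% while mid churns at a third of logos — exactly the "discount club" pattern to look for.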
Question two: What is your expansion MRR as a percentage of new MRR?
OpenView's Product Benchmarks report (2023, the most recent edition available) puts the top quartile at 30% or higher — meaning for every dollar of new MRR from new logos, they're generating $0.30 or more from expansion in existing accounts. If your expansion MRR is near zero, you either have a product depth problem or a packaging problem. You have no upsell architecture. Pricing power isn't just what you charge at acquisition — it's what your customers are willing to pay you as they grow.
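The ratio itself is simple arithmetic; the discipline is tracking it monthly. A sketch with illustrative numbers:

```python
# Illustrative monthly figures (dollars); plug in your own.
new_logo_mrr = 50_000   # MRR added this month from brand-new customers
expansion_mrr = 9_000   # MRR added from upgrades/seats in existing accounts

expansion_ratio = expansion_mrr / new_logo_mrr
print(f"Expansion MRR is {expansion_ratio:.0%} of new MRR")  # prints 18% here; top quartile is 30%+
```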
Question three: When did you last run a willingness-to-pay test?
Not a survey asking what people would pay. Those tend to be fiction — customers reliably underreport in practice, often by significant margins. A real WTP test involves cohorted pricing experiments, Van Westendorp methodology, or direct customer interviews asking not "what would you pay?" but "at what price would you start to doubt the value?" and "at what price would you think it was too cheap to take seriously?" If you've never done this, you're flying without instruments.
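To make the Van Westendorp idea concrete, here is a deliberately simplified sketch: take answers to the two extreme questions, build the cumulative curves, and find where they cross. The survey answers and price grid are invented, and a real analysis uses all four Van Westendorp questions, not just these two:

```python
import numpy as np

# Hypothetical answers ($/month) to the two extreme questions:
# "at what price is it too cheap to take seriously?" and
# "at what price is it too expensive to consider?"
too_cheap     = np.array([ 49,  79,  99,  99, 129, 149])
too_expensive = np.array([299, 349, 399, 449, 499, 599])

prices = np.arange(25, 625, 25)
pct_too_cheap     = np.array([(too_cheap >= p).mean() for p in prices])      # falls as price rises
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])  # rises as price rises

# Simplified crossover: the price where the two objections balance.
gap = np.abs(pct_too_cheap - pct_too_expensive)
crossover = prices[gap.argmin()]
print(f"Crossover near ${crossover}/month")
```

With six respondents the curves are lumpy; the point is the shape of the analysis, not the precision.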
CAC Elasticity: The Test That Exposes Who You're Actually Building For
Here's the one most founders skip entirely, and it's the one that changes how you see everything.
Pull your CAC — fully loaded, including sales salaries, marketing spend, and SDR time — and segment it by the price tier of the customer acquired. What you're looking for is this: if your cheapest tier costs you more to acquire than your mid tier, your pricing architecture is subsidizing the wrong customers.
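The fully loaded, tier-segmented CAC pull can be sketched like this — the cost categories and figures are illustrative, and cost attribution by tier is the hard part your finance data has to supply:

```python
import pandas as pd

# Hypothetical quarterly acquisition costs, attributed by tier of customer acquired.
spend = pd.DataFrame({
    "tier":           ["entry",  "mid",   "growth"],
    "sales_cost":     [120_000,  50_000,  30_000],  # salaries + commissions
    "marketing_cost": [ 80_000,  20_000,  12_000],
    "sdr_cost":       [ 40_000,  10_000,   3_000],
    "new_customers":  [100,      40,      15],
})

spend["fully_loaded_cac"] = (
    spend[["sales_cost", "marketing_cost", "sdr_cost"]].sum(axis=1)
    / spend["new_customers"]
)
print(spend[["tier", "fully_loaded_cac"]])
```

In this toy data the entry tier costs $2,400 per logo against $2,000 for mid — the inverted curve the section describes.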
This happens more often than you'd think. The founder sets entry pricing to reduce friction, which attracts high-volume, low-intent buyers who take eight discovery calls and still churn in month three. Meanwhile, the mid-tier customers — who are buying for a specific operational reason and have budget allocated — convert faster, require less hand-holding, and stay longer.
If your CAC elasticity analysis shows an inverted curve — more expensive to acquire cheaper customers — you are not running a growth strategy. You're running a leaky bucket with a marketing budget poured into it.
The fix is often not raising the entry price. It's restructuring who the entry tier is for — which is a packaging conversation, not a pricing conversation.
Why the 10% Test Is a Blunt Instrument
The "raise prices 10% and see who churns" approach gets talked about in SaaS forums like it's sophisticated. It's better than doing nothing. It is not sophisticated.
Here's the problem: a 10% test with no cohort control tells you almost nothing actionable. If you raise prices for all customers simultaneously, you can't isolate whether churn is driven by price sensitivity, product dissatisfaction that the price increase simply accelerated, or seasonal patterns. You've introduced one variable while ignoring six others.
Real pricing power testing requires three things:
A cohort-controlled experiment. Hold pricing stable for existing customers; test new pricing on new cohorts for 60 to 90 days. This is the only way to get signal without contaminating your existing book.
A defined willingness-to-pay ceiling. Before you run the experiment, interview your 10 best customers — highest LTV, lowest support burden, strongest advocacy — and ask them at what price they would have gone to a competitor instead. You will discover that your ceiling is almost always higher than you assumed.
Churn attribution tagging. If you do lose accounts, have a mechanism to capture whether the stated reason was price, feature gaps, a competitive alternative, or something else entirely. Price-attributed churn and satisfaction-attributed churn are different problems. Conflating them produces useless data.
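The cohort split in the first element has one mechanical requirement: assignment must be deterministic, so a returning prospect always sees the same price. One way to sketch that is hash-based bucketing — the function name, split fraction, and cohort labels here are all illustrative:

```python
import hashlib

def pricing_cohort(account_id: str, test_fraction: float = 0.5) -> str:
    """Deterministically assign a NEW signup to a pricing cohort.

    Existing customers never pass through this function, so the
    existing book stays untouched. Same ID in, same cohort out.
    """
    digest = hashlib.sha256(account_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return "test_price" if bucket < test_fraction * 100 else "control_price"

print(pricing_cohort("acme-001"))
```

Hashing beats random assignment here precisely because it is repeatable: no database lookup is needed to show a prospect a consistent price across sessions.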
Without these three elements, you're not testing pricing power. You're just watching what happens and calling it strategy.
When the Diagnostic Reveals a Positioning Problem
This is where it gets uncomfortable.
If you run the diagnostic and the real issue is that your customers don't fully understand what your product does or who it's for — you cannot solve that with a price change. A price increase on a confusingly positioned product doesn't communicate confidence. It communicates confusion at a higher price point.
I lived this with my logistics SaaS. We were charging $299 per month. The product was the same product we eventually charged $1,800 per month for. What changed was not the code. It was the packaging, the positioning, and who we explicitly said the tier was for.
At $299, we described it as "route optimization software for small fleets." At $1,800, we described it as "operations infrastructure for logistics companies managing 10+ vehicles that need real-time compliance tracking and driver performance data." Same product. Different customer. Different frame.
The $1,800 customer didn't blink at the price because they understood exactly what operational problem they were buying a solution to. The $299 customer was shopping for cheap software. Those are different buyers, and they respond to pricing differently.
The reframe didn't require a product change. It required us to stop trying to be affordable and start trying to be precise. That's my experience, and it aligns with what ProfitWell's published WTP research has shown repeatedly: perceived value — not feature count — is the primary driver of price sensitivity. When buyers understand the outcome they're purchasing, price resistance tends to drop. That's not a pricing insight. That's a positioning insight.
The Three-Move Sequence for This Week
If you're heading into a Q1 board meeting with flat NRR and you haven't touched pricing in 18 months, this is an emergency. Your board is already asking the question internally — the only variable is whether you show up with a diagnosis or they deliver one to you.
Move one: Pull your logo churn rate segmented by tier. If you don't have this in your analytics stack, build the query today. You need this number before you can have an honest conversation about pricing architecture. Logo churn by tier is the foundational data point. Everything else follows from it.
Move two: Pull CAC by cohort tier. Fully loaded — sales, marketing, onboarding cost, everything. If your entry tier is more expensive to acquire than your growth tier, you have a structural problem that a price increase won't fix. You have a targeting problem dressed up as a pricing problem.
Move three: Schedule willingness-to-pay interviews with your 10 best customers before you touch the pricing page. Not a survey. A 30-minute conversation. Ask them: at what price would they have paused before buying? At what price would they have questioned whether the product was serious? You will learn more in those 10 conversations than from six months of A/B testing.
The board isn't going to ask you why you raised prices. They're going to ask why it took you this long.
Pricing is not a number. It is a diagnostic signal — a real-time read on whether your market understands your value, whether your packaging serves your best customers, and whether your go-to-market motion is attracting buyers or browsers.
If you haven't raised prices and not a single customer has asked why you haven't, pay attention to that. It means they're not thinking about your pricing at all. They might not be thinking about your product at all. That is not a pricing emergency. That is a positioning emergency.
Audit your CAC by tier. Now.
