How to Write an A/B Test Hypothesis (With Examples)
A step-by-step guide to writing structured, testable A/B test hypotheses using the If/Then/Because format. Includes 10 real examples across industries.
A hypothesis isn't just a guess — it's a structured prediction that makes your A/B test meaningful whether it wins or loses. Without one, a winning test tells you what happened but not why, and a losing test teaches you nothing.
The If/Then/Because Format
The gold standard for A/B test hypotheses is:
If we [make this specific change] for [this audience], then [this metric] will [increase/decrease] by [estimated amount] because [behavioral/psychological reason].
Each component serves a purpose:
- If (change): What exactly are you modifying? Be specific enough that someone else could implement it.
- For (audience): Who sees this change? Everyone, or a specific segment?
- Then (metric + direction): What measurable outcome do you expect?
- Because (rationale): Why do you believe this will work? What behavioral principle supports it?
The "because" is the most important part. It's what turns a random guess into a testable theory and what generates learning regardless of the test result.
10 Real Hypothesis Examples
E-Commerce
1. Social proof on product pages
If we add "X people are viewing this item" to product pages for returning visitors, then add-to-cart rate will increase by 12% because scarcity and social validation (Cialdini's social proof principle) create urgency and reduce purchase anxiety.
2. Simplified checkout
If we reduce checkout from 3 steps to 1 step for mobile users, then checkout completion rate will increase by 18% because reducing cognitive load and friction (Fogg Behavior Model — increasing ability) directly increases completion probability.
SaaS
3. Free trial CTA copy
If we change the CTA from "Start Free Trial" to "Start Building — Free" for first-time visitors, then trial signup rate will increase by 10% because outcome-focused language (Jobs-to-be-Done framing) is more motivating than feature-focused language.
4. Onboarding checklist
If we add a 5-step onboarding checklist for new trial users, then Day-7 activation rate will increase by 25% because the Zeigarnik effect (people remember incomplete tasks) and endowed progress (starting at 1/5 complete) drive completion behavior.
Content / Media
5. Article headline format
If we add the year to article headlines (e.g., "Best CRM Tools (2026)") for organic search visitors, then click-through rate from SERPs will increase by 15% because time-stamped content signals freshness and relevance, which increases perceived value in search results.
6. Email subject lines
If we use question-based subject lines instead of statement-based ones for our weekly newsletter, then open rate will increase by 8% because open loops (unanswered questions) trigger curiosity, per information gap theory.
Pricing / Monetization
7. Anchoring with annual plan
If we show the annual plan first on the pricing page, with the equivalent monthly price crossed out, for all visitors, then annual plan selection rate will increase by 20% because price anchoring makes the annual discount feel larger when it is framed as savings from a higher reference point.
8. Money-back guarantee
If we add a 30-day money-back guarantee badge next to the purchase button for first-time buyers, then purchase conversion rate will increase by 15% because loss aversion is reduced when the perceived risk of the transaction approaches zero.
Lead Generation
9. Form field reduction
If we reduce the lead form from 7 fields to 3 (name, email, company) for all landing page visitors, then form submission rate will increase by 30% because each additional form field increases cognitive load and perceived effort (Fogg Behavior Model — ability).
10. Exit-intent offer
If we show a 10% discount popup on exit intent for visitors who've spent 30+ seconds on the pricing page, then lead capture rate will increase by 12% because sunk-cost reasoning (they've already invested time evaluating) combined with a loss-framed offer ("Don't leave without your discount") increases conversion at the point of abandonment.
What Makes a Bad Hypothesis
Too vague
"If we improve the landing page, conversions will go up."
This doesn't specify what you're changing, who it affects, how much you expect it to change, or why.
No "because"
"If we make the CTA button bigger, then click rate will increase by 10%."
Without a behavioral rationale, you won't learn anything if the test fails. Why would a bigger button help? Is it a visibility issue? A Fitts's Law problem? The "because" directs your next experiment.
Unmeasurable
"If we redesign the homepage, then users will feel more trust."
"Feel more trust" isn't a metric you can measure in an A/B test. Translate it: "then signup rate will increase" or "then bounce rate will decrease."
Multiple changes
"If we change the headline, add testimonials, and remove the navigation bar..."
This is three experiments in one. If it wins, you won't know which change drove the result. Test one variable at a time, or design a multivariate test.
How to Choose the Right Metric
Your primary metric should be:
- Directly affected by the change you're making
- Measurable within your test timeframe
- Sensitive enough to detect the expected change
| Change Type | Good Primary Metric | Bad Primary Metric |
|---|---|---|
| CTA copy change | Click-through rate | Revenue (too downstream) |
| Checkout redesign | Checkout completion rate | Pageviews (too upstream) |
| Pricing page layout | Plan selection rate | NPS (can't measure in A/B) |
| Onboarding flow | Day-7 activation | Annual retention (too slow) |
Also define 1-2 guardrail metrics — things that should NOT get worse. For example, if you're testing a more aggressive upsell modal, your guardrail might be "support ticket rate should not increase by more than 5%."
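In code, a guardrail is just a threshold comparison on the variant's metric. A minimal Python sketch of the upsell-modal example above (the function name and rates are illustrative, not from any specific tool):

```python
def guardrail_breached(control_rate: float, variant_rate: float,
                       max_relative_increase: float) -> bool:
    """Return True if the variant worsens the guardrail metric
    beyond the allowed relative increase."""
    return variant_rate > control_rate * (1 + max_relative_increase)

# Upsell-modal example: support ticket rate may rise at most 5%.
control_tickets = 0.020   # 2.0% of sessions open a ticket (control)
variant_tickets = 0.022   # 2.2% in the variant
print(guardrail_breached(control_tickets, variant_tickets, 0.05))  # True: 2.2% > 2.1%
```

A breached guardrail should stop a "winning" test from shipping, so check it with the same rigor as the primary metric.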
From Hypothesis to Test Plan
Once you have a structured hypothesis, you need:
- Sample size calculation — how many visitors per variation?
- Test duration — how many days based on your traffic?
- Success criteria — what significance level? Any guardrails?
- Implementation spec — exactly what changes in the variant?
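For conversion-rate metrics, the sample size and duration steps can be sketched with the standard two-proportion power formula. A stdlib-only Python sketch (function name and the 500 visitors/day figure are illustrative assumptions):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant for a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Example 7 above: 10% baseline selection rate, expecting a +20% relative lift.
n = sample_size_per_variant(0.10, 0.20)       # roughly 3,800 per variant
days = math.ceil(2 * n / 500)                  # duration at 500 eligible visitors/day
print(n, days)
```

Note how sensitive the result is to the expected lift: halving the lift roughly quadruples the required sample, which is why vague hypotheses ("conversions will go up") can't be planned at all.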
AB Test Plan automates this entire workflow. Describe your goals and it generates hypotheses, calculates sample sizes, and even previews the variant.
Template
Copy this template for your next experiment:
Experiment: [Name]
Hypothesis: If we [specific change] for [audience/segment],
then [metric] will [increase/decrease] by [X%]
because [behavioral principle/evidence].
Primary metric: [metric name]
Guardrail metrics: [metric 1], [metric 2]
Segment: [who sees this]
Risk level: [low/medium/high]
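If you track experiments in code rather than in a doc, the same template maps onto a small data structure. A minimal sketch (the class and field names are illustrative, not from any specific tool):

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentPlan:
    name: str
    change: str                # the "If we..." part
    audience: str              # the "for..." segment
    metric: str                # the "then..." metric
    expected_lift_pct: float
    rationale: str             # the "because..."
    guardrail_metrics: list[str] = field(default_factory=list)
    risk_level: str = "low"

# Example 2 above as a plan object:
plan = ExperimentPlan(
    name="Simplified mobile checkout",
    change="reduce checkout from 3 steps to 1",
    audience="mobile users",
    metric="checkout completion rate",
    expected_lift_pct=18.0,
    rationale="reducing friction increases completion (Fogg Behavior Model)",
    guardrail_metrics=["payment error rate"],
    risk_level="medium",
)
print(plan.metric)
```

Keeping plans structured like this makes it easy to require a rationale and guardrails before any test ships.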