
P-Values Are Lying to You (And Your Ads Budget)

Published: 2024-08-09

By: Michael Mares



P-values are misunderstood, abused, and not made for digital marketing.

Let’s say you’re running an A/B test on your landing page. Variant A is the original. Variant B is your new brainchild, crafted after a brainstorm fueled by cold brew and marketing memes.

You launch the test, collect data, and run a t-test.

The result?

A p-value of 0.04. You celebrate. Statistically significant! The team huddles to launch variant B across the board.

Three weeks later, conversions are down. What happened?

You got p-valued.

The P-Value Trap

A p-value tells you the probability of observing data at least as extreme as yours, assuming the null hypothesis (no difference between variants) is true.

That’s not the same as saying “there’s a 96% chance B is better.” It’s more like: “if there were truly no difference between A and B, a result this extreme would only happen 4% of the time.”

See the issue?

  • You don’t care how surprising the result is if B doesn’t work.
  • You care about the probability that B actually performs better.

Digital marketers often mistake p-values for this second thing, and it’s easy to see why. “Statistically significant” sounds like “actually better.” But it’s not.
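To make the distinction concrete, here’s a minimal sketch of what a frequentist A/B test actually computes — a two-sided, two-proportion z-test using only the standard library. The visitor and conversion counts are hypothetical, invented for illustration:

```python
# What the p-value machinery actually computes (normal approximation).
# Counts below are made up for illustration.
from math import sqrt, erfc

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # P(result at least this extreme | no real difference)
    # -- NOT P(B is better).
    return erfc(abs(z) / sqrt(2))

p = two_proportion_p_value(80, 1000, 112, 1000)
print(f"p = {p:.3f}")
```

Note what the function returns: the probability of the data given no effect. Nothing in it ever computes the probability that B is better.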

Also:

  • A p-value of 0.049 is significant.
  • A p-value of 0.051 is not.
  • These two outcomes are nearly identical in practice. Yet one gets you a promotion, the other gets ignored.

And that’s not even the worst part…

P-Hacking

P-Hacking: A Marketer’s Hidden Hobby

In the fast-paced world of paid search and landing pages, you’re constantly testing.

Which means:

  • You run dozens of A/B tests per month.
  • You check the results multiple times a day.
  • You stop the test when the p-value dips below 0.05.

That’s p-hacking.

And it inflates your false positive rate faster than a Black Friday CTR.

The more you peek, the more likely you’ll find “significance” by chance alone. You’re not discovering gold — you’re finding fool’s gold, statistically speaking.
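You can watch this happen in a small simulation (standard library only). Both variants below share the same true conversion rate, so every “significant” result is a false positive by construction; the rates, batch sizes, and trial counts are illustrative choices, not real campaign data:

```python
# Simulating why peeking inflates false positives: both variants have the
# SAME true rate, so any "win" is fool's gold. All numbers are illustrative.
import random
from math import sqrt, erfc

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test (normal approximation)."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return erfc(abs(z) / sqrt(2))

def run_test(rate=0.08, batch=100, batches=10, peek=False, rng=random):
    """Return True if the test (wrongly) declares significance."""
    conv_a = conv_b = n = 0
    for _ in range(batches):
        conv_a += sum(rng.random() < rate for _ in range(batch))
        conv_b += sum(rng.random() < rate for _ in range(batch))
        n += batch
        if peek and p_value(conv_a, n, conv_b, n) < 0.05:
            return True  # stopped early on a lucky dip below 0.05
    return p_value(conv_a, n, conv_b, n) < 0.05

random.seed(0)
trials = 500
peeked = sum(run_test(peek=True) for _ in range(trials)) / trials
honest = sum(run_test(peek=False) for _ in range(trials)) / trials
print(f"false positives with peeking: {peeked:.1%}, without: {honest:.1%}")
```

Checking once at the end keeps false positives near the promised 5%; checking after every batch and stopping at the first dip below 0.05 pushes the rate well above it.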

So what’s the alternative?

Bayesian Methods: Marketing’s Quiet Hero

Bayesian probability flips the question:

  • Instead of “how surprising is this result if there’s no effect?”
  • It asks, “given the data, how likely is it that variant B is better?”

This is exactly what you thought p-values told you.

A Bayesian A/B test will output something like:

There’s a 91% probability that variant B has a higher conversion rate than variant A.

That’s a number your marketing team, your boss, and your grandma can understand. No mental gymnastics required.

Bonus: Bayesian methods also handle:

  • Sequential testing (checking results as they come in),
  • Small sample sizes,
  • Prior knowledge (e.g., you already know social proof usually works).
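For conversion rates, the Bayesian version is surprisingly little code. Here’s a minimal sketch using Beta–Binomial conjugacy and the standard library, assuming a uniform Beta(1, 1) prior on each rate; the counts are hypothetical:

```python
# Minimal Bayesian A/B sketch: with a uniform Beta(1, 1) prior, each rate's
# posterior is Beta(1 + conversions, 1 + misses). Counts are hypothetical.
import random

def bayesian_ab(conv_a, n_a, conv_b, n_b, draws=50_000, rng=random):
    """Monte Carlo estimate of P(rate_B > rate_A) and a 95% credible
    interval for the lift, under uniform priors."""
    diffs = []
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        diffs.append(rate_b - rate_a)
    diffs.sort()
    prob = sum(d > 0 for d in diffs) / draws
    lo, hi = diffs[int(0.025 * draws)], diffs[int(0.975 * draws)]
    return prob, (lo, hi)

random.seed(42)
prob, (lo, hi) = bayesian_ab(80, 1000, 90, 1000)
print(f"P(B > A) ≈ {prob:.0%}, 95% credible interval for the lift: "
      f"[{lo:+.1%}, {hi:+.1%}]")
```

The output is exactly the plain-English statement above: a probability that B beats A, plus a range for how big the lift plausibly is.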

Real-World Impact

Imagine two variants:

  • Variant A: 8% conversion rate
  • Variant B: 9% conversion rate

You run a Bayesian test and get:

There’s an 87% probability B is better, with a 95% credible interval for the lift of 0.2% to 1.8%.

That’s enough to say:

  • B probably wins.
  • But it’s not a slam dunk.
  • Maybe wait for more data or weigh the gain vs. the cost of switching.

Compare that to a p-value test that says:

p = 0.07 — not significant.

So you do… nothing. And miss out on a possible uplift.

In Summary

  • P-values are misunderstood, abused, and not made for digital marketing.
  • Bayesian probabilities answer the question marketers actually care about: “Which variant is better, and how sure are we?”
  • If you’re running A/B tests with p-values, you’re playing statistics roulette.
  • If you’re using Bayesian methods, you’re making smarter bets with your budget.
