
The Power Law of Ad Creative: Why 5% of Ads Do 95% of the Work

Meta isn't a normal distribution. It's a power law. Stop spreading budget evenly and start treating outliers like the only thing that matters.

Jordan Glickman · January 2, 2026
Meta Ads

The Game You Think You're Playing

Most media buyers grade themselves on average. Average ROAS, average CTR, average CPM. The whole vocabulary of the job is built on a normal distribution that doesn't exist.

Meta is a power law. Always has been. The top 5% of ads in any large account drive somewhere between 70% and 95% of the volume. Everything else is noise dressed up as a portfolio.

If you're allocating creative time, budget, and attention as if every ad has roughly equal expected value, you're playing the wrong game. You're not building an average. You're hunting an outlier.

What a Power Law Account Actually Looks Like

When we audited a $2M/month Impremis account last quarter, the numbers were textbook. Out of 184 ads live in the last 90 days:

  • 2 ads accounted for 51% of spend
  • 7 ads accounted for 78% of spend
  • 18 ads accounted for 92% of spend
  • 166 ads split the remaining 8%

That is not a failure of media buying. That is the platform working correctly.

The failure mode would have been forcing budget into the bottom 166 in the name of "diversification." Or, worse, capping the top 2 because the team was nervous about "depending on too few winners."

A power law account scares people. People want diversification because it feels safer. The math doesn't agree.
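The concentration numbers from an audit like this can be reproduced from any ad-level spend export. A minimal sketch, using illustrative figures rather than the audited account's data (the function name and sample spends are assumptions):

```python
def spend_concentration(spends, top_ns=(2, 7, 18)):
    """Share of total spend captured by the top-N ads."""
    total = sum(spends)
    ranked = sorted(spends, reverse=True)
    return {n: sum(ranked[:n]) / total for n in top_ns}

# Illustrative library: 2 outliers, 16 mid-tier ads, 166 stragglers.
spends = [40_000, 25_000] + [1_500] * 16 + [50] * 166
shares = spend_concentration(spends)
print({n: f"{share:.0%}" for n, share in shares.items()})
```

Run this against your own export and compare the top-2 and top-18 shares to the distribution above; a flat curve is the warning sign, not the steep one.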

Two Inputs You Actually Control

The outcome — which ads scale — is shaped by two things you control. Everything else is the algorithm doing what algorithms do.

1. Hit rate (creative quality)

How often do new ads land in the top tier? This is mostly a function of how good your creative team is and how much risk they're willing to take.

Safe creative has a 2-3% hit rate. Bold, structurally varied creative has a 10-15% hit rate. Counterintuitively, hit rates above 20% usually mean the team is being too conservative: they're pumping out variations of a known winner instead of probing for the next category.

2. Capture rate (media buying)

When a winner appears, how aggressively do you scale it? This is where most accounts leave the most money on the table. A great ad capped at $500/day is a great ad starved.

Media buyers who treat budget as a bell curve will refuse to let one ad have 40% of the daily spend. Media buyers who understand power laws will let it have 60%, 70%, 80% — whatever the algorithm wants to give it — and still keep a probe budget testing for the next one.

The 50/50 Production Mix

For accounts spending $200K+/month, here's the production allocation we've landed on at Impremis after testing it across hundreds of accounts:

| Bucket | % of new production | Hit rate target | Goal |
|---|---|---|---|
| Variations on proven winners | 50% | 18-22% | Extend the life of working concepts |
| Bold new concepts | 40% | 8-12% | Find the next category |
| Experimental / pattern-breaking | 10% | 3-5% | Hunt for the once-in-a-year breakout |

The blended hit rate target is 10-15%. If you're consistently above 20%, you're not stretching. If you're consistently below 5%, your creative team is throwing darts in the dark and needs a tighter brief.
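The blended target follows directly from the mix. Taking the midpoint of each bucket's hit rate range (an assumption; your actual rates will differ):

```python
# Production share and assumed midpoint hit rate per bucket.
mix = {
    "variations":   (0.50, 0.20),  # 50% of production at ~20% hit rate
    "bold":         (0.40, 0.10),  # 40% at ~10%
    "experimental": (0.10, 0.04),  # 10% at ~4%
}

# Weighted average across buckets.
blended = sum(share * hit_rate for share, hit_rate in mix.values())
print(f"{blended:.1%}")  # → 14.4%, inside the 10-15% band
```

Shifting production toward safe variations pulls the blended number up while shrinking the surface area where the next breakout can appear, which is exactly the trade the framework warns against.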

What "Variation" Actually Means

"Variations on a winner" is the most misunderstood category in this whole framework.

A new color of the same hook is not a variation. It's a copy. It will not produce a meaningful new winner because Meta will treat it as the same ad with a different SKU.

A real variation:

  • Keeps the underlying insight of the winning ad
  • Changes the format, the talent, the location, or the structure
  • Tests a new hook for the same problem
  • Reframes the same identity in a new context

We had a TOFU ad scale to $9K a day for &you. The team produced 14 "variations" — same script, different actresses. None of them broke $400/day. The 15th was the same insight rebuilt as a podcast-style two-person conversation. It scaled to $11K a day and outlived the original by four months.

The insight is the asset. The execution is the experiment. Most teams have it backwards.

Why Capping Winners Is the Most Expensive Mistake

Founders cap winners because the rate of scaling makes them nervous. "What happens if it dies tomorrow?" The fear is real. The math is wrong.

If an ad is profitable at $5K/day and the algorithm wants to feed it to $15K/day, the question is not "is it sustainable?" The question is "what's the cost of not taking the volume while it's there?"

Winners always die. That's the contract. The half-life of a top performer is anywhere from 30 to 180 days. The job is to extract maximum profitable volume during the window, then have the next winner ready to take its place.

A capped winner gives you a small win and an unprepared transition. An uncapped winner gives you a giant win and the cash to fund the search for the next one.

The Real-World Result

One of the brands we took over at Impremis last year was running $150K/month in spend with a CAC 30% over target. The account had 240+ ads live. Top 3 ads were getting roughly 8% of spend each. Budget was being "distributed."

The rebuild took six weeks. The changes:

  1. Cut the library from 240 to 80, removing every middle-funnel ad with no path to scale
  2. Allowed the top 1-2% of ads to capture 50%+ of total spend
  3. Shifted production from 90% safe variations to a 50/40/10 split
  4. Removed every campaign-level budget cap

Six months later: spend at $480K/month, CAC down 41%, business sold for nine figures the following year.

The creative team didn't get more talented. The agency didn't invent a new strategy. We just stopped fighting the power law and started feeding it.

What This Looks Like Inside the Account

A few operational rules we follow:

  • No CBO at the prospecting level. Campaign budget optimization smooths spend across ad sets in ways that work against power laws. Use ABO so winners can take real share.
  • Kill ads at the bottom of the distribution fast. If an ad isn't in the top 25% of its cohort by day 5, it's not getting there.
  • Scale winners in 30-50% daily increments, not doubles. Power law winners are fragile to violent changes. Step them up, don't yank them.
  • Refresh the production calendar weekly. A power law account dies the week new creative stops shipping.
  • Do not chase smooth ROAS lines. Power law revenue is lumpy by nature. Smooth lines are a sign of capping.
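The 30-50% step rule compounds faster than it sounds. A sketch of the timeline (the dollar figures and function are illustrative, not account data):

```python
def days_to_target(start, target, step):
    """Days of daily step-ups at the given rate to grow a budget
    from start to target (e.g. step=0.4 means +40%/day)."""
    budget, days = start, 0
    while budget < target:
        budget *= 1 + step
        days += 1
    return days

print(days_to_target(500, 15_000, step=0.4))  # 11 days at +40%/day
print(days_to_target(500, 15_000, step=0.3))  # 13 days at +30%/day
```

A 30x budget increase in under two weeks without ever doubling in a single day is the point: the winner gets fed, and the algorithm never sees a violent jump.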

Where Most Brands Get This Wrong

The most common failure mode is not creative quality. It's the operating model around the creative.

Brands brief 8 ads per week, treat them all as equally valuable, set a $500 daily cap on each, kill them at day 7 if ROAS isn't 2.5x, and then wonder why nothing scales. They've engineered a system that prevents power law winners from forming.

A winner needs three things: time, budget, and the absence of a cap. Most accounts deny it all three out of risk aversion.

For more on the metrics that actually predict creative performance, see the metrics that matter more than ROAS. For the structural reasons cold creative scales while warm creative doesn't, see why top-of-funnel ads look different.

FAQ

How many concepts should I be testing per week?

For a $250K+/month account, 6-12 net-new concepts per week, plus another 6-12 real variations on existing winners. Anything less and your hit rate doesn't have enough surface area to find the outliers.

What if I'm a smaller brand and can't produce that volume?

Produce fewer concepts but commit to bigger swings. A $30K/month account testing 2 concepts per week is fine if both are structurally different from anything you've run before. The hit rate math still works at smaller scale, just with longer cycle times.

How long do I let a new ad run before deciding?

We use a 5-day, $50-100 daily probe budget. If by day 5 the ad isn't beating the account median CPA, it's not a winner. Killing fast is how you preserve budget for the next probe.
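The probe rule reduces to a one-line decision once you track each ad's CPA against the account median. A sketch, with hypothetical names and thresholds:

```python
def keep_after_probe(ad_cpa, median_cpa, day, probe_days=5):
    """Day-5 rule: let the ad spend through its probe window,
    then require it to beat the account median CPA to survive."""
    if day < probe_days:
        return True  # still inside the probe window
    return ad_cpa < median_cpa

# Day 3: too early to judge, keep spending.
print(keep_after_probe(ad_cpa=40.0, median_cpa=25.0, day=3))  # True
# Day 5, CPA above the median: kill it.
print(keep_after_probe(ad_cpa=40.0, median_cpa=25.0, day=5))  # False
```

The discipline is in applying it mechanically; every "let's give it a few more days" exception is probe budget the next candidate never gets.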

Should I expect 85% of my ads to fail?

Yes, and that's healthy. If 50% of your ads are "working," your team isn't taking enough risk. The point of testing is not to maximize average ad performance. It's to find the outlier.

Won't allowing one ad to take 60% of spend feel risky?

It feels risky. The actual risk lives elsewhere — in the absence of new winners coming up behind it. Manage the risk by maintaining the production pipeline, not by capping the current winner.

Does the power law apply equally to TikTok and Meta?

It applies to both. TikTok's distribution is even more extreme because the algorithm is more aggressive about funneling spend to outliers. The discipline transfers; the timelines are shorter on TikTok.

What's the single biggest leading indicator that the power law is broken in my account?

Look at top-1-ad spend share. If your single best ad is taking less than 15% of total spend in a $100K+/month account, you're either capping winners or you don't have any. Both are problems.
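That diagnostic is a one-liner against the same ad-level spend export. A sketch (function names and the example figures are illustrative):

```python
def top_ad_share(spends):
    """Share of total spend taken by the single largest ad."""
    return max(spends) / sum(spends)

def power_law_broken(spends, threshold=0.15):
    """Flag accounts where the best ad holds under 15% of spend."""
    return top_ad_share(spends) < threshold

# Ten ads splitting spend evenly: no ad above 10%, flag raised.
print(power_law_broken([10_000] * 10))  # True
# One ad at 60% of spend: the power law is working.
print(power_law_broken([60_000, 10_000, 10_000, 10_000, 10_000]))  # False
```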

The Bottom Line

The ad that scales doesn't look like the rest of your library. That's the whole point. A power law account is supposed to look unbalanced — that's what working looks like.

Produce widely. Probe ruthlessly. Capitalize the outlier. Refresh the pipeline.

Nothing else moves the number.
