Your Creatives Think the Buyer Is Killing Their Best Work
The fight between creative teams and media buyers is the most expensive unresolved conflict in paid media. Here is how to settle it and reclaim lost margin.
The argument every Friday
The creative team sends over fourteen new ads on Monday. By Thursday, eleven of them are paused. Friday afternoon, the creative director slacks the head of media: "You didn't even give them a chance." The buyer responds with screenshots of $300 CPAs. Both think the other one is the reason the account isn't growing.
I have watched this exact fight play out at probably forty companies. It is the single most expensive unresolved conflict inside paid media. And almost every time, both sides are partly right and partly wrong, and neither one knows where the line actually is.
Why this fight is structural, not personal
The instinct is to chalk it up to personality. Creatives are precious about their work. Buyers are mercenary about their numbers. Hire nicer people, settle the fight.
That's wrong.
The conflict is structural. Creative teams are graded on whether their ads work. Buyers are graded on whether the account performs this week. The first metric needs patience. The second metric needs decisiveness. Those two job descriptions are pointed in opposite directions, and no amount of team-building solves it.
What solves it is a shared framework for what counts as a fair test, what counts as a real loser, and what the realistic hit rate actually is.
This post is that framework.
Step 1: Make sure the buyer is actually giving ads a chance
Before I defend the buyer, I want to defend the creative. About 60% of the time I run this audit, the buyer is killing ads too early. Not maliciously. Just by reflex.
There are five tests I run.
Test 1 — Spend before kill
How much did the ad actually spend before it got paused? Pull the data. Look at the median.
With CPAs in the $40-100 range you need somewhere around 5 to 10 conversions before you can say anything statistically meaningful. That's roughly $200-1,000 of spend. If the median ad in your account got killed at $150 of spend, your buyer is making decisions on noise. They have not measured the ad. They have measured a coin flip.
At Impremis we instrument this directly — every kill writes the ad's lifetime spend, conversions, and time-on-platform to a log. Our floor is $300 of spend or 7 days, whichever comes first, before any non-emergency kill. That floor alone has lifted hit rates 3-5 points on multiple accounts.
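The audit itself is a few lines of code once the kill log exists. Here is a minimal sketch — the log rows, field names, and the $300/7-day floors are illustrative examples matching the numbers above, not the output of any specific ad platform's API:

```python
from statistics import median

# Hypothetical kill-log rows: one dict per paused ad.
# Field names are illustrative, not from any real ad platform export.
kill_log = [
    {"ad_id": "a1", "spend": 120.0, "conversions": 1, "days_live": 2},
    {"ad_id": "a2", "spend": 310.0, "conversions": 6, "days_live": 5},
    {"ad_id": "a3", "spend": 90.0,  "conversions": 0, "days_live": 1},
    {"ad_id": "a4", "spend": 450.0, "conversions": 9, "days_live": 7},
]

SPEND_FLOOR = 300.0  # dollars, per the floor described above
DAYS_FLOOR = 7       # calendar days

median_spend = median(row["spend"] for row in kill_log)

# A kill is "premature" if it happened below BOTH floors
# (the rule is spend floor OR day floor, whichever comes first).
premature = [
    row["ad_id"]
    for row in kill_log
    if row["spend"] < SPEND_FLOOR and row["days_live"] < DAYS_FLOOR
]

print(f"median spend at kill: ${median_spend:.0f}")
print(f"premature kills: {premature}")
```

If the median lands well under your spend floor, you have your answer before anyone argues about individual ads.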
Test 2 — Calendar time
Most ads need 3 to 7 days to find their audience, especially if the angle is genuinely new. Brand-new concepts can have ugly day-1 numbers and clean up dramatically by day 5 as the algorithm finds the right delivery.
If your buyer is killing on day 2, they are killing concepts before the algorithm has even decided who to show them to.
Test 3 — The retest
This one settles arguments fast. Take ten ads the buyer killed in the last 30 days that had decent leading indicators — low CPC, healthy add-to-cart, reasonable thumbstop — and relaunch them at low budget into a fresh ad set. Just $50/day each.
If 1 or 2 of them suddenly produce, your buyer's kill rules are too aggressive and you can prove it with data, not vibes.
Test 4 — Confidence sequencing
Buyers trust certain creators, formats, and angles because those have produced winners before. New stuff feels like throwing money in a hole.
The fix is to interleave. Every batch of new tests should mix high-confidence launches (proven format, proven angle, fresh execution) with low-confidence launches (genuinely new concepts). Proven angles win at maybe 20%. Net-new concepts win at maybe 8-10%. If you batch them separately the buyer will look at the new-concept batch in isolation and panic at the low hit rate. Mix them and the math behaves.
Test 5 — Pre-filtered creative
If an ad already has organic traction — a UGC clip that performed on TikTok organically, a customer review with high engagement, an angle that worked on email — it deserves more rope. The buyer should know which assets carry that pre-validation and weight kill decisions accordingly.
Most teams don't tag this. They should.
Step 2: Defend the buyer (because they're often right too)
Now I'll flip it. The other 40% of the time, the creative team is the problem and they don't see it.
Here are the three patterns I see most.
Pattern 1 — The bar is rising and nobody told the creatives
Six months ago a $60 CPA on a new ad was a winner. Today the same ad would lose. Not because the ad got worse. Because the winners in the account got better, and now the bar that any new ad has to clear is set by the best creative in the account, not by some absolute standard.
Creatives often interpret this as the buyer changing the rules. The buyer didn't change the rules. The account got better and the standard rose with it. That's success. It needs to be communicated as success.
I tell creative teams: if you're producing ads that would have been winners 12 months ago and are losers now, your craft has not regressed. The ceiling has lifted. Different problem, different conversation.
Pattern 2 — Power-law math doesn't match craftsperson math
A designer takes pride in 9 out of 10 of their executions being usable. A media buyer running paid social knows that 8 out of 10 of any batch of ads will lose money. This is not a defect. This is how the medium works.
Meta is a power-law system. Roughly 1-2% of your ads will produce 40-60% of your spend. The other 98-99% subsidize discovery of those winners.
| Reality of paid social creative | What people expect | What's actually true |
|---|---|---|
| Hit rate on net-new concepts | 50%+ | 8-12% |
| Hit rate on iterations of winners | 70%+ | 20-30% |
| % of spend going to top concept | 10-20% | 30-50% |
| % of total ads that drive 50% of spend | 25-30% | 1-2% |
| Ads that lose money | The bad ones | Most of them, by design |
If you walk into a creative review and 8 out of 10 ads got killed, that is not a creative team failure. That is the medium operating normally. A creative team failing is when zero out of forty new ads break through, or when iterations of a winner stop producing iterations.
Pattern 3 — The similarity tax
The sneaky one. A creative team produces 40 ads in a month. Of those, 35 are minor variations on the same hook, the same creator, and the same angle.
Meta's algorithm sees them as essentially one ad with 35 entries. The first one to get traction becomes the placeholder for the entire concept and the others get suppressed regardless of execution quality. The buyer kills 33 of them. The creative team is furious.
The rule I use: if two ads can be described in the same sentence, they are the same ad. A b-roll swap is not a new ad. A different intro line is not a new ad. A new creator delivering the same script in a different tone is borderline. A genuinely different angle, hook, or format is a new ad.
This one piece of vocabulary saves a lot of relationships.
The shared framework: what counts as a fair test
At Impremis we settle this fight up front, in writing, before any creative gets briefed. Every account has a one-page document that the creative lead and the media lead both sign.
It covers six things.
- Minimum spend before kill. Specific dollar amount. No exceptions outside of policy violations or off-brand content.
- Minimum days before kill for new-angle creatives. Usually 5-7.
- What constitutes a "new concept" vs. a variation. Specific examples from this account.
- Realistic hit rates by category — net-new concepts, iterations, format changes — so nobody's surprised.
- The retest protocol. What gets resurrected, on what budget, and how often.
- The kill log. Every kill writes a row. Once a quarter we audit the log against retest results.
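The one-pager works best when the thresholds are machine-readable, so the kill-log tooling and the humans enforce the same numbers. A sketch of what that might look like — every threshold here is an example value consistent with the post, and `kill_allowed` is a hypothetical helper, not part of any platform:

```python
# The signed "fair test" contract, expressed as config.
# All thresholds are illustrative examples, not universal constants.
FAIR_TEST_CONTRACT = {
    "min_spend_before_kill": 300,   # dollars; policy violations exempt
    "min_days_new_angle": 6,        # usually 5-7 for new-angle creatives
    "expected_hit_rates": {         # so nobody's surprised
        "net_new_concept": (0.08, 0.12),
        "iteration": (0.20, 0.30),
    },
    "retest": {"budget_per_ad_per_day": 50, "lookback_days": 30},
}

def kill_allowed(spend, days_live, is_new_angle, contract=FAIR_TEST_CONTRACT):
    """True only if a non-emergency kill would respect the signed floors."""
    if spend < contract["min_spend_before_kill"]:
        return False
    if is_new_angle and days_live < contract["min_days_new_angle"]:
        return False
    return True

print(kill_allowed(spend=150, days_live=2, is_new_angle=True))  # False
print(kill_allowed(spend=420, days_live=6, is_new_angle=True))  # True
```

Once the contract lives in config, the quarterly audit is a query instead of an argument.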
This sounds like overhead. It is the cheapest insurance you will ever buy. The cost of getting this wrong is months of internal politics, two or three creative-team resignations, and a buyer who has stopped trusting your briefs.
Diagnose your account in 20 minutes
Run this on any account where the creative-buyer relationship is tense.
- Pull every paused ad from the last 30 days. Calculate median spend at kill. If it's under $250, your buyer is killing on noise.
- Calculate the percentage of total spend going to the top 3 ads. If it's under 25%, your account is too flat — you don't have real winners. If it's over 70%, you have a fragility problem.
- Tag the last 50 launched ads as "variation" or "new concept." What's the ratio? If new-concept launches are below 20% of volume, your creative team is iterating too narrowly.
- Pick 10 killed ads with strong leading indicators and relaunch them. What percent come back? More than 10% means kill rules are too tight.
- Compare hit rate by creative category. Are certain creators, formats, or angles producing at much higher rates? Reallocate the brief mix.
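The second check in that list — spend concentration in the top 3 ads — is the easiest to automate. A minimal sketch with made-up per-ad spend numbers:

```python
def spend_concentration(spends, top_n=3):
    """Fraction of total spend captured by the top_n biggest ads."""
    total = sum(spends)
    top = sum(sorted(spends, reverse=True)[:top_n])
    return top / total

# Hypothetical per-ad spend for a 10-ad account (dollars).
spends = [3000, 1500, 800, 700, 600, 500, 400, 300, 200, 100]
share = spend_concentration(spends)
print(f"{share:.0%} of spend in the top 3 ads")

# Under 25%: too flat, no real winners. Over 70%: fragility problem.
print("flat" if share < 0.25 else "fragile" if share > 0.70 else "healthy")
```

Run it monthly and the "is the account too flat" conversation becomes a number instead of a feeling.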
If you can't answer these five questions, you don't have a creative-vs-buying problem. You have a measurement problem masquerading as a personality problem.
What I tell founders in the room
Stop trying to make your creative team and your media buyer like each other. Make them agree on a framework. Liking comes after, sometimes. Agreement is what you actually need.
The healthiest creative-buying relationships I've seen aren't friendships. They're contracts. Both sides know the rules of engagement. Both sides know what counts as a fair fight. Both sides have a shared vocabulary for what "a real loser" means.
When the framework is in place, the Friday argument disappears. Not because everyone agrees — because everyone is now arguing about the interesting stuff (which angle to test next, how to structure a hook batch) instead of the boring procedural stuff (was that ad killed too early, was that variation actually a new concept).
FAQ
How do I get my buyer to stop killing ads on day 2?
Write the kill rule down. Both of you sign it. Then put a Slack alert on every kill that fires below the floor — public to the channel. Public visibility solves about 80% of premature-kill problems within two weeks because no buyer wants to be the one publicly violating the rule they signed.
What's a realistic hit rate I should expect?
For net-new concepts, plan around 8-12%. For iterations of an existing winner, 20-30%. For pure variations (b-roll swap, hook rewrite on a proven format), 40-50%. If your buyers and creatives both internalize these numbers, half the conflict evaporates.
My creative team is upset that I have a daily kill rule. Is that wrong?
The rule isn't the problem. The threshold is the problem. Daily review is fine. Killing at $80 of spend with no other indicators is not. Move the floor and the daily cadence stops feeling violent.
What if my account is genuinely too small for this math to work?
Under roughly $30K/month of spend, the numbers don't have enough volume to behave like a power law and your hit rates will swing wildly. The framework still applies, you just need wider tolerance and longer windows. The temptation at small spend is to declare every ad a winner or loser within 24 hours. Resist it.
How do I handle a creative team that won't produce truly new concepts?
This is usually a brief problem, not a creative problem. If briefs ask for "more like the winner," you'll get variations. If briefs ask for "a completely different angle solving a different objection," you'll get new concepts. Audit your last 20 briefs. Most of them are probably variation briefs in disguise.
Should the buyer be allowed to brief creative directly?
Yes, with a creative lead in the room. Buyers see signal the creative team can't — what's converting cold traffic, what's getting cheap CPMs, what angles competitors are pushing. Pretending they shouldn't have input is how you end up with a beautiful account that doesn't perform.
How does this scale to a $1M+/month account?
The principles are identical, the volume just makes everything more legible. At very high spend, you can split-test the kill rules themselves, running tighter rules on one ad set and looser rules on another. But that's a luxury. Most accounts under $500K/month should focus on getting the basic framework written and signed.
Creative teams don't lose because their work was bad. Buyers don't lose because they're impatient. Both teams lose because nobody wrote down the rules of the game and nobody agreed on what a real test looks like.
Write it down. Sign it. Audit it quarterly.
For more on the metrics underneath these fights, read why ROAS isn't the goal. For the workflow that supports this kind of weekly account review, see building your attribution stack.
Keep reading
Pieces I've written on related topics that pair well with this one:
- I Don't Analyze Losing Ads. Here's the System I Use Instead — Most creative analysis is just noise wearing a lab coat. The system I run on $250M+ in spend only mines winners, not individual losers.
- Angle Mapping: The Pre-Production Framework That Cuts Creative Waste — Most creative waste happens before production. Here's the angle mapping process that identifies which territories are worth testing before any brief i…
- The Creative Testing System That Produces Real Winners — A test is an experiment. A system is the infrastructure that surfaces winners and compounds the learning.
- How to Build a High-Output Creative Team Without 15 People — A systems-first approach to building scalable creative teams for agencies using lean hiring, contractor networks, and structured production workflows.
- The Scroll-Stop Audit: Diagnosing Why Creative Doesn't Convert — Learn how to diagnose creative performance using the Scroll-Stop Audit framework to identify where ads fail and systematically improve hooks and conve…