
What Scaling Past $1M/Mo on Meta Taught Me About the Algorithm

Lessons from scaling Meta ad spend past $1M/month — creative structure, algorithm behavior, attribution at scale, and what actually drives performance.

Jordan Glickman · May 10, 2026
Strategy

Everyone has opinions about the Meta algorithm. Most of them are wrong.

You'll find threads from media buyers convinced they cracked the code. Agency blog posts claiming proprietary frameworks. Courses built around rules that stopped working six months before the course launched. The discourse around Meta is louder than almost any other channel, and the signal-to-noise ratio is terrible.

What I can offer is not a theory. It's what I actually observed running accounts past $1M in monthly Meta spend across multiple DTC brands at Impremis. Some of what I learned confirmed conventional wisdom. Most of it did not.

Here's what the algorithm actually taught me when I had no choice but to listen.

[Image: vertical three-layer stack — Volume Concepts (60%), Angle Expansion (30%), Format Experiments (10%), each with a function annotation. Caption: "60/30/10 — the ratio that keeps the algorithm fed without chaos."]

The algorithm is a mirror, not a machine

The single most important mindset shift at scale: the Meta algorithm doesn't have preferences. It has feedback loops.

At low spend, the algorithm is doing exploratory work — testing combinations of your creative, your audience signals, your landing page, and your offer against a broad set of people to find the subset most likely to convert. You're paying for its education.

At high spend, that exploration narrows. The algorithm has enough data to pattern-match aggressively. It finds the cohort that looks like your best customers and delivers your ads to them. It's efficient in a way that feels almost magical until you hit the ceiling.

That ceiling is not a platform limitation. It's a reflection of your offer and your creative. If you've been scaling into the same audience signals with the same creative concepts, the algorithm has found everyone it can find who fits that pattern. Growth stalls not because Meta is broken, but because you've exhausted the mirror.

The implication is significant. To scale past $1M/mo sustainably, you cannot just increase budgets. You have to expand the pattern the algorithm is learning from. New creative angles. New audience entry points. New offers that open up different cohorts.

What actually breaks at scale

Before getting into what works, it's worth being precise about what fails.

Creative fatigue is faster than you think

At $50K/mo, you might run a winning creative for 90 days before frequency catches up. At $500K/mo, that same creative might saturate in three weeks. The math is simple: you're reaching more people faster, so the audience that has seen your ad multiple times grows much more quickly.
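The math above can be sketched directly. The inputs here — a fixed 2M-person reachable audience and a $20 CPM — are illustrative numbers chosen for the example, not figures from the accounts discussed:

```python
# Back-of-envelope frequency math (illustrative inputs, not account data).

def days_to_frequency(monthly_spend, cpm, audience_size, target_freq=3.0):
    """Days until a fixed audience's average ad frequency hits target_freq."""
    daily_impressions = (monthly_spend / 30) / cpm * 1000
    return (audience_size * target_freq) / daily_impressions

# Same 2M-person audience and $20 CPM at two spend levels:
for spend in (50_000, 500_000):
    days = days_to_frequency(spend, cpm=20, audience_size=2_000_000)
    print(f"${spend:,}/mo -> frequency 3 in ~{days:.0f} days")
```

With these inputs the same audience hits a frequency of 3 in roughly 72 days at $50K/mo but only about 7 days at $500K/mo — a 10x spend increase compresses the fatigue timeline by 10x.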

Most brands aren't producing creative at the velocity required to sustain high-spend accounts. They build a winning ad, scale it, and scramble when performance drops. By the time a new creative is approved and in-market, there's a performance gap almost impossible to recover cleanly.

The brands that scale past 7 figures in monthly spend have creative pipelines, not creative projects. New concepts always being tested. Winning angles being iterated, not just replicated. The machine never stops producing.

Broad targeting becomes non-negotiable

One of the counterintuitive things that happens at scale is that narrow targeting becomes a liability.

At lower budgets, interest stacks and lookalike audiences feel like they provide control. You're reaching a specific segment, and results feel predictable. But as you scale, those narrow audiences exhaust faster, frequency rises more sharply, and the algorithm has less room to find efficiency.

At high spend, broad targeting — minimal audience restrictions, letting the algorithm do the work — consistently outperforms manually constructed audience sets. At scale, Meta's algorithm has enough purchase signal data from your account to self-optimize more effectively than any media buyer's targeting logic.

The job of the media buyer at scale is not audience construction. It's creative input, budget pacing, and anomaly detection.

Attribution becomes ambiguous

At $1M/mo, you're reaching a substantial portion of your addressable market. Organic, direct, and paid traffic start to overlap in ways that make platform attribution increasingly unreliable.

Meta will claim credit for conversions that were influenced by it but not caused by it. View-through attribution windows in particular inflate reported ROAS at scale because you're reaching so many people who would have converted anyway through branded search or direct traffic.

This is not a bug. It's how the platform is designed. If you're managing a $1M/mo account on platform ROAS alone, you're flying with a distorted instrument.

The answer is blended MER tracked at the business level, held alongside platform metrics as a sanity check rather than a replacement.
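Blended MER is simple to compute: business-level revenue over total paid spend, read next to platform ROAS rather than instead of it. A minimal sketch, with hypothetical revenue and spend figures:

```python
def blended_mer(total_revenue: float, total_ad_spend: float) -> float:
    """Marketing Efficiency Ratio: all revenue divided by all ad spend."""
    return total_revenue / total_ad_spend

# Hypothetical month: Meta's dashboard claims 4.1x ROAS on $1M of spend,
# while the business as a whole did $2.8M in total revenue.
platform_roas = 4.1
mer = blended_mer(total_revenue=2_800_000, total_ad_spend=1_000_000)
print(f"platform ROAS {platform_roas:.1f}x vs blended MER {mer:.1f}x")
```

In this hypothetical, the 1.3x gap between what the platform reports and what the business actually sees is exactly the view-through inflation described above.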

The creative framework that sustained scale

After running multiple accounts through the $500K → $1M monthly spend threshold, I developed a creative architecture that produces consistent testing throughput without chaos.

The three-layer creative stack

| Creative layer | Budget allocation | Goal | Success metric |
|---|:---:|---|---|
| Volume Concepts | 60% | Sustain performance baseline | CPA at or below target |
| Angle Expansion | 30% | Open new audience cohorts | CPM relative to conversion rate |
| Format Experiments | 10% | Find next-wave creative | Statistical significance of any positive signal |

  • Layer 1: Volume Concepts. Direct-response formats built on proven structures — problem-agitation-solution, before-and-after, testimonial-forward UGC, product demonstration. Not innovative, but reliable. They keep the algorithm fed with signals.
  • Layer 2: Angle Expansion. New entry points for the same product. Different customer problems addressed. Different emotional triggers. Different hooks that reframe the value proposition for a different cohort. These are the ads that expand the algorithm's pattern when scale plateaus.
  • Layer 3: Format Experiments. Higher-risk, higher-reward tests — new formats, native-style content that doesn't look like an ad, long-form video, creator-led content that breaks category conventions. Most fail. The ones that work often become the next wave of Layer 1 creative.

The ratio matters. Teams that spend too much on experimentation don't generate enough reliable signals. Teams that focus only on volume concepts eventually run out of road.

How account structure changes at scale

Below $100K/mo, campaign structure is largely a matter of preference. Above that, structure has real consequences for algorithm learning and budget efficiency.

Consolidate campaigns, not ad sets

One of the most impactful structural changes I made on high-spend accounts was reducing campaign count. Instead of running 8–12 campaigns with fragmented budgets, I moved to 3–5 campaigns with clear funnel purposes and concentrated spend.

The Meta algorithm needs data to optimize. A campaign spending $5,000/day generates optimization data much faster than ten campaigns each spending $500/day. Consolidation accelerates the learning phase and produces more stable performance faster.
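The consolidation arithmetic can be made concrete. Meta's documented rule of thumb is roughly 50 optimization events within a 7-day window to exit the learning phase; the $100 CPA below is a hypothetical input, not a benchmark:

```python
# Why budget concentration matters for the learning phase. Meta's rough
# threshold is ~50 optimization events within 7 days; the $100 CPA here
# is a hypothetical input chosen for illustration.

def days_to_exit_learning(daily_budget, cpa, events_needed=50):
    """Days to accumulate enough conversions to exit the learning phase."""
    return events_needed / (daily_budget / cpa)

consolidated = days_to_exit_learning(daily_budget=5_000, cpa=100)
fragmented = days_to_exit_learning(daily_budget=500, cpa=100)
print(consolidated, fragmented)
```

Under these assumptions, the consolidated campaign clears 50 conversions in a day, while each fragmented campaign would need 10 days — past the 7-day window, so it may never exit learning at all.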

Use Campaign Budget Optimization at scale

CBO is more powerful at high spend than most practitioners give it credit for. When the algorithm has enough budget to allocate dynamically across ad sets, it consistently finds efficiency that manual budget allocation misses.

The resistance to CBO usually comes from a desire for control. Control at scale is an illusion. The algorithm has more information about real-time auction dynamics than any media buyer does. Give it the budget authority and focus your energy on what you can actually control: creative quality and offer integrity.

Protect learning phase windows

At $1M/mo, a bad week is not an inconvenience. It's a significant revenue event. The instinct when performance dips is to make rapid changes — new budgets, new audiences, new creative swaps.

That instinct is almost always wrong.

Every significant change to a campaign resets the learning phase. If you're making changes during a performance dip, you're interrupting the algorithm's ability to find its way back to efficiency. The discipline to hold structure through short-term volatility is one of the hardest skills in high-scale media buying and one of the most valuable.

The rule I operate by: if performance has been off for fewer than five days and there's no clear creative or technical explanation, do not touch the structure. Let the algorithm work.
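That rule is simple enough to write down as an explicit guardrail. This is a sketch of the decision logic described above, not a platform feature:

```python
def may_touch_structure(days_off_target: int, cause_identified: bool) -> bool:
    """Hold-or-act rule for a performance dip at scale."""
    if cause_identified:
        # A known creative or technical issue: fix the issue itself,
        # don't reset the campaign structure.
        return False
    # No identifiable cause: hold structure for at least five days.
    return days_off_target >= 5

# Day three of an unexplained dip: hold and let the algorithm work.
print(may_touch_structure(3, cause_identified=False))
```

The point of encoding it is less automation than accountability: a written rule makes it harder to panic-edit a campaign on day two.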

What this means for agency operations

Managing accounts at this spend level requires organizational infrastructure most agencies aren't built for.

The media buyer role changes entirely. At scale, you need someone who thinks in systems, interprets data without overreacting, and collaborates tightly with creative. The execution-focused buyer who's great at building campaigns from scratch is not necessarily the right person to manage a $1M/mo account.

The creative team needs to operate like a production studio. Briefing cycles need to be short. Feedback loops between performance data and creative direction need to be tight. The media buyer and the creative lead need to be in the same conversation, not working from a handoff document.

And leadership needs to be comfortable reporting on MER and blended performance, not just platform dashboards — because at this scale, the gap between what Meta reports and what is actually happening in the business is wide enough to create real strategic confusion.

FAQ

At what spend does broad targeting start outperforming interest-based? Around $50K/mo for most categories. Earlier if your creative is doing the targeting work. Later if you're in a tightly defined niche.

Do I really need to consolidate to 3–5 campaigns? At $1M/mo, almost always. Below $200K/mo, the consolidation pressure is weaker — sometimes 6–8 campaigns is fine.

How long should I leave a struggling campaign alone before intervening? Five days, assuming no creative or technical issue is identified. Holding back is hard, but it's cheaper than resetting the learning phase.

Is CBO always the right move? At scale, almost always. Below $50K/mo total budget, ABO can still produce slightly tighter outcomes for sophisticated operators. Above that, CBO wins on aggregate efficiency.

Closing

Individual skill matters less at this scale than organizational infrastructure.

A talented media buyer cannot compensate for a slow creative pipeline. A great creative team cannot compensate for a poor budget structure. Strong account architecture cannot compensate for a weak offer.

At $1M/mo, every gap in your system is expensive. And the Meta algorithm, for all its opacity, is an exceptionally clear diagnostic tool. It tells you exactly where the constraints are. The job is to listen carefully and fix the right thing.

Build the creative infrastructure first. Then let the algorithm do its job.
