
How to Audit a Media Buyer's Work Without Micromanaging Their Account

Most agency audits check ROAS and miss everything that matters. Here's the four-layer framework for auditing media buyer performance without micromanaging.

Jordan Glickman · May 10, 2026 · 10 min read
Operations

The most expensive mistake agency leaders make is confusing activity with performance.

A media buyer who is constantly in the account, launching tests, adjusting budgets, and reshuffling creative looks busy. Busy and effective are not the same thing. The audit tells you which one you have.

The second most expensive mistake is auditing in a way that destroys judgment and autonomy. If every review session becomes a line-by-line interrogation of why a budget moved ten percent on a Tuesday, the best people will start waiting for permission instead of making decisions — which is exactly the bottleneck good hiring is supposed to eliminate.

A well-designed media buyer performance audit gives leadership full visibility into what is driving outcomes without reducing media buyers to button-pushers. This is the framework that makes that possible.

Image brief: Three-row audit cadence table — Review Frequency, What It Covers, Who Runs It, Time Required. Monthly row highlighted. alt: "Media buyer performance audit cadence: weekly, monthly, quarterly review structure." caption: "The cadence gives leadership full visibility while keeping buyers autonomous. They know what they'll be evaluated on and when — it's a structured accountability system, not a surprise inspection."

Why Most Audits Fail Before They Start

Most agency leaders audit the wrong things. They look at platform dashboards, compare ROAS to a target, and call it a review.

Platform-reported ROAS is not a performance metric. It is a platform's version of performance, shaped by attribution windows, algorithmic modeling, and competitive dynamics that have nothing to do with whether a media buyer is making good decisions.

A real media buyer performance audit separates three distinct questions:

  1. Is the account being managed with sound decision-making processes?
  2. Is the measurement framework capturing true performance versus platform-optimistic reporting?
  3. Is the creative and audience strategy positioned to scale or to plateau?

If the audit cannot answer all three, it is evaluating optics — not performance.

Solve the Attribution Problem First

Before a media buyer can be evaluated fairly, the measurement standard they are held to needs to be defined and unified. This is non-negotiable.

If leadership holds a buyer accountable to Meta-reported CPA while the finance team measures success from GA4 last-click, the buyer will optimize for the wrong signal and both parties will be confused about why results feel disconnected from reality.

The Meta-to-GA4 gap is the most common source of this confusion. Meta's default seven-day click, one-day view attribution captures conversions that happened within a week of an ad click or a day of an ad view. Google Analytics last-click gives Meta zero credit for many of those same conversions because the final session before purchase came from organic search or direct traffic. Neither model is wrong. Both are incomplete. A buyer making decisions inside this reality needs a defined standard to navigate it — and leadership needs the same standard to evaluate them.

The first output of any audit process should be a measurement alignment document: the attribution standard the team operates from, the blended CAC calculation methodology, and the hierarchy of data sources when platform numbers conflict.
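The blended CAC calculation in that document can be as simple as a shared formula everyone computes the same way. A minimal Python sketch, with illustrative numbers chosen to match the reconciliation table later in this piece (function names and inputs are assumptions, not a prescribed methodology):

```python
def blended_cac(total_ad_spend: float, new_customers: int) -> float:
    """Blended CAC: all paid spend divided by all new customers,
    regardless of which platform claims the conversion."""
    return total_ad_spend / new_customers

def cpa_gap(platform_cpa: float, blended: float) -> float:
    """Dollar gap between platform-reported CPA and blended CAC."""
    return blended - platform_cpa

spend = 6_720.0   # total paid spend across platforms (illustrative)
customers = 160   # deduplicated new customers from business records
meta_cpa = 31.0   # Meta's self-reported CPA

cac = blended_cac(spend, customers)   # 42.0
gap = cpa_gap(meta_cpa, cac)          # 11.0
print(f"Blended CAC ${cac:.0f}, gap vs platform CPA ${gap:.0f}")
```

The point is not the arithmetic; it is that the formula is written down once, so the buyer and leadership can never be computing CAC two different ways.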

TikTok Shops adds a separate layer of complexity. When purchases happen inside TikTok's native checkout, Shopify integration completeness varies by setup configuration, and TikTok's view-through attribution is broader than most buyers realize. For dual-platform accounts, Meta and TikTok may both be claiming credit for overlapping customer journeys. Before evaluating a buyer's TikTok strategy, the measurement infrastructure needs to distinguish TikTok-driven revenue from revenue that would have been captured by Meta or email regardless. See why the three-signal attribution framework — platform ROAS, GA4, and MER — is the only approach that gives a defensible picture of true performance across platforms.

The Four-Layer Audit Framework

Layer 1: Decision Quality Review

This is the most important layer and the one most leaders skip entirely.

Paid media outcomes are partially influenced by decisions and partially by factors outside the buyer's control: auction dynamics, seasonality, creative performance, offer strength, and competitive shifts. Auditing only outcomes punishes buyers for market conditions and rewards them for tailwinds they did not create.

Decision quality review evaluates whether the buyer made the right call given the information available at the time — regardless of how the outcome turned out.

Questions to answer in this layer:

  • When CPAs started rising, what hypothesis did the buyer form and what action did they take?
  • When a creative unit was declining, did they catch it before significant performance impact or after?
  • When budget was increased, did they have a clear rationale for where incremental spend would go?
  • When a test failed, did they extract a learning or just move on?

These answers require a decision log — a brief weekly record of what changed, why it changed, and what the expected outcome was. Every media buyer should maintain one. The decision log is auditable retroactively without requiring leadership to reconstruct thinking weeks later from a campaign history that lacks context.
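A decision log entry needs only a handful of fields. A minimal sketch of one entry as a structured record; the field names and the example values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionLogEntry:
    logged_on: date
    what_changed: str         # the action taken in the account
    rationale: str            # the hypothesis at the time of the decision
    expected_outcome: str     # what the buyer predicted would happen
    outcome_review: str = ""  # filled in later, once results are known

# Hypothetical entry for one week's budget move
entry = DecisionLogEntry(
    logged_on=date(2026, 5, 5),
    what_changed="Shifted 20% of prospecting budget to the UGC hook variant",
    rationale="CTR on the static set has decayed three weeks running",
    expected_outcome="Prospecting CPA stabilizes within 7 days",
)
```

Whether this lives in a spreadsheet, a doc, or a tool matters far less than the discipline of filling in the rationale and expected outcome at decision time, before the result is known.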

Layer 2: Measurement Integrity Check

The second layer confirms that what appears in reporting reflects business reality rather than platform-optimistic attribution.

| Metric | Platform View | Adjusted View | Gap |
|---|---|---|---|
| Purchases (Meta) | 220 | 160 (post-deduplication) | 27% inflation |
| CPA (Meta) | $31 | $42 (blended CAC) | $11 misalignment |
| ROAS | 3.8x | 2.6x (contribution margin ROAS) | Significant |
| TikTok Purchases | 140 | 90 (after overlap audit) | 36% inflation |

Running this reconciliation monthly — as a standing part of the performance review rather than a crisis response — gives both leadership and the buyer a factual baseline that is not in dispute. It removes the emotional charge from performance conversations because the numbers are defined before the conversation starts.
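The inflation figures in the table above are simple to reproduce, which is part of why they defuse arguments: either party can recompute them. A quick sketch of the calculation (numbers from the table above):

```python
def inflation_pct(platform_reported: int, adjusted: int) -> int:
    """Share of platform-reported conversions not confirmed
    after deduplication or overlap adjustment, as a whole percent."""
    return round(100 * (platform_reported - adjusted) / platform_reported)

meta = inflation_pct(220, 160)    # 27 → "27% inflation"
tiktok = inflation_pct(140, 90)   # 36 → "36% inflation"
print(meta, tiktok)
```

When the calculation method is this transparent, the monthly reconciliation becomes a shared fact-finding exercise rather than a negotiation.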

Buyers who understand their measurement stack make better decisions. The audit process should develop that capability over time, not create a dynamic where buyers avoid discussing discrepancies they cannot explain.

Layer 3: Creative and Audience Strategy Review

A media buyer who is not thinking about creative performance is a traffic manager, not a strategist. This layer assesses whether the buyer has a point of view on creative and whether they use audience data to drive creative decisions rather than passively running whatever the creative team delivers.

Creative velocity and hypothesis quality. How many tests did the buyer initiate in the review period? Were the tests structured around specific hypotheses ("this hook should outperform because behavioral data shows this benefit resonates with the highest-LTV cohort"), or were they testing without a stated reason? See why each test should be tied to a documented hypothesis and why the learning output is what separates compounding creative intelligence from isolated results.

Hook and funnel alignment. Are creative formats and angles matched to the funnel stage? Running awareness-level UGC against a retargeting audience wastes spend. Running testimonial-heavy retargeting creative against a cold audience wastes a high-value asset. A buyer with genuine creative judgment catches these misalignments before they run for two weeks.

Creative decline detection. What is the buyer's process for identifying creative fatigue before it affects account performance? Frequency, CPM trends, and CTR decay are all leading indicators that appear before CPA impact. A buyer who reacts to CPA increases rather than catching the earlier signal has a process gap that the audit should surface. See the specific leading indicators of creative fatigue and the timelines at which they appear based on spend level.
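A fatigue check using those three leading indicators can be reduced to a simple early-warning rule. A hypothetical sketch; the thresholds here are illustrative assumptions, not benchmarks, and should be calibrated to the account's own history:

```python
def fatigue_flags(frequency: float, cpm_change_pct: float,
                  ctr_change_pct: float) -> list[str]:
    """Return early-warning flags for a creative unit based on
    frequency, CPM trend, and CTR trend (period-over-period %)."""
    flags = []
    if frequency > 4.0:           # illustrative frequency ceiling
        flags.append("frequency above cap")
    if cpm_change_pct > 15.0:     # CPMs rising faster than the account norm
        flags.append("CPM rising")
    if ctr_change_pct < -20.0:    # CTR decaying sharply
        flags.append("CTR decaying")
    return flags

# A unit showing two of three warning signs before CPA has moved:
print(fatigue_flags(frequency=4.6, cpm_change_pct=8.0, ctr_change_pct=-25.0))
# → ['frequency above cap', 'CTR decaying']
```

A buyer running any version of this check weekly is acting on leading indicators; a buyer waiting for the CPA line to move is reacting to the lagging one.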

Layer 4: Scaling Readiness Assessment

The final layer is forward-looking. If the buyer received 50 percent more budget tomorrow, what would happen?

This determines whether an account is positioned to scale or is currently at its ceiling.

Indicators of scaling readiness:

  • Proven creative concepts with room to expand spend before frequency becomes a constraint
  • Audience segments differentiated enough to absorb incremental budget without heavy overlap
  • Measurement infrastructure reliable enough to catch a scaling problem quickly if one emerges
  • A documented plan for where incremental budget would go and why — not "I'd increase budgets"

A buyer who can articulate the scaling plan and connect it to current performance data is an asset. A buyer who cannot answer this question clearly does not yet understand the account deeply enough to scale it safely.

The Audit Cadence

| Review Frequency | What It Covers | Who Runs It | Time Required |
|---|---|---|---|
| Weekly | Buyer async update: what changed, what was learned, what is planned next | Buyer → Leadership | 10 minutes |
| Monthly | Layer 1 and Layer 2 audit: decision log review, measurement reconciliation | Leadership | 60 minutes |
| Quarterly | Full four-layer audit including scaling readiness assessment | Leadership + Account Lead | 90 minutes |

This cadence provides full visibility without requiring constant account access. Media buyers know what they will be evaluated on and when — the audit is a structured accountability system, not a surprise inspection.

The weekly async update replaces the account review meeting that consumes everyone's time with minimal insight. The monthly structured review uses documentation the buyer already produces rather than reconstructing history from dashboards. The quarterly full audit is when decisions about account ownership, budget authority, and strategic direction are made with complete information.

What the Audit Reveals About Hiring

Running this framework consistently surfaces the distinction that ROAS numbers alone obscure: which buyers are genuinely performing versus which ones were benefiting from favorable conditions.

The buyers who maintain strong decision quality through difficult periods — who proactively reconcile their measurement, who hold a creative opinion backed by data — are the future senior strategists and account leads. The buyers who perform well only when creative is strong and seasonality is cooperative are execution-level operators. Valuable in a structured role, but not appropriate for independent ownership of important accounts.

The audit makes this visible over time. It also has a talent retention benefit: the best media buyers want to work in an environment where performance is assessed fairly, where their reasoning is visible, and where good decisions in difficult periods are recognized as distinct from bad decisions in favorable periods. An audit system that separates those things attracts and keeps that caliber of buyer.

The System Failure Distinction

Before making personnel decisions based on audit results, check whether the performance gap reflects a buyer problem or a system problem.

If the measurement infrastructure is unreliable, even a strong buyer will appear inconsistent. If the creative pipeline is slow, even a strong buyer will run assets past their effective lifespan. If the attribution standard is undefined, even a strong buyer will optimize for the wrong signal.

The four-layer audit surfaces both types of failure. Fix system failures before making personnel decisions — otherwise good buyers get evaluated against the consequences of structural problems they did not create and cannot fix. See how the agency playbook and operating cadence that enable consistent execution are the prerequisite for fair performance evaluation.

FAQ

How should this framework change for junior versus senior media buyers? Layer 1 decision quality expectations are calibrated differently. A junior buyer is evaluated on whether they followed the decision stack correctly and escalated Tier 2 decisions appropriately. A senior buyer is evaluated on the quality of their Tier 2 and Tier 3 judgment — the depth of their hypothesis formation and the rigor of their post-test analysis. The measurement and scaling readiness layers apply equally at both levels, because every buyer who touches an account should understand the measurement environment they are operating in.

What if a media buyer resists maintaining a decision log? A buyer who resists documenting decisions is telling you they are not confident in the quality of those decisions — or they have not internalized the difference between explaining a decision and defending it. The log is not punitive. It is the primary tool that allows a buyer to demonstrate their thinking over time rather than being evaluated solely on outcomes they only partially control. Frame it that way. If the resistance persists, it is a signal worth taking seriously.

How does this framework scale when one account lead is managing multiple media buyers? The weekly async update aggregates easily across buyers. The monthly review becomes a 30-minute meeting per buyer using the decision log and measurement reconciliation as the agenda. The quarterly full audit is where account lead time investment is concentrated. Three buyers at this cadence requires roughly four to five hours per month of structured review time — comparable to the time currently spent in unstructured check-ins that produce less useful output.
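That time estimate is easy to sanity-check. A back-of-envelope sketch, assuming 10 minutes per buyer per weekly update and amortizing the quarterly audit across three months (the cadence values come from the table earlier in the piece; the weeks-per-month factor is an approximation):

```python
WEEKS_PER_MONTH = 13 / 3   # ~4.33
buyers = 3

weekly = 10 * WEEKS_PER_MONTH * buyers   # async updates, minutes per month
monthly = 30 * buyers                    # structured review per buyer
quarterly = 90 * buyers / 3              # full audit, amortized monthly

total_hours = (weekly + monthly + quarterly) / 60
print(round(total_hours, 1))   # → 5.2
```

Reading each weekly update in full lands slightly above five hours a month; skimming the aggregated updates brings it into the four-to-five-hour range cited above. Either way, it is in the same budget as the unstructured check-ins it replaces.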

Closing

The media buyer performance audit works because it is built before problems emerge, not in response to them.

The decision log exists before you need it. The measurement reconciliation runs before a discrepancy becomes a client conversation. The scaling readiness assessment happens before the client asks to double their budget in a competitive quarter.

Audit the system the buyer operates within as much as the buyer themselves. Define the measurement standard. Build the documentation cadence. Run the four-layer review consistently.

The quality of the media buying operation will compound over time — visible in client retention, visible in account performance through difficult periods, and visible in the caliber of decisions being made by a team that understands they are being evaluated on judgment, not just outcomes.

Build the framework now. Run it every time.
