Why Your Agency's Reporting Is Making Clients Nervous (And How to Fix It)

Most agency-client relationships end over reporting, not performance. Build reports around blended ROAS and MER that create confidence instead of anxiety.

Jordan Glickman · May 10, 2026 · Operations

Clients rarely leave agencies because performance was bad.

They leave because they could not tell whether performance was good or bad — and nobody on the agency side helped them figure it out.

That distinction matters more than most agencies acknowledge. Strong media buying does not save a relationship where the reporting is creating weekly anxiety. The account can be profitable, the creative iteration system can be working, and the trajectory can be genuinely positive — and none of it matters if the client spends every Monday second-guessing whether the numbers they received mean anything.

Agency performance reporting is a trust infrastructure problem. When it breaks down, it erodes relationships that the underlying work would otherwise preserve. The fix is structural, not cosmetic.

[Image: four-row reporting metrics trust table — Metric, Source, What It Measures, Client Trust Level — with the MER row highlighted: "No attribution model required — total revenue divided by total marketing spend."]
Caption: Platform-reported ROAS creates anxiety because clients can't reconcile it. Blended ROAS and MER create confidence because they don't require an attribution explanation to make sense.

Why Platform Numbers Undermine Client Trust

The specific mechanism that destroys client confidence happens predictably, and it is almost entirely preventable.

A client receives a Meta Ads Manager screenshot showing 180 purchases last month. They log into their Shopify backend and see 140 orders. They open Google Analytics and see 95 conversions attributed to paid social. Three different numbers representing the same period. No explanation in the report for why they diverge.

What the client thinks: the agency is showing them the most favorable number. What is actually happening: three measurement systems with different attribution models are counting the same purchases differently.

Meta's default attribution credits conversions to any user who clicked an ad within seven days, or viewed one within one day, of purchasing. Google Analytics, under the last-click model most reports use, credits only the final session touchpoint. Shopify's backend counts actual order events — no attribution model, no view-through logic, no multi-touch weighting. A customer who saw a Meta ad on Tuesday, received an email Friday, clicked it Saturday, and purchased — that single purchase shows up differently in each system.
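
To make that concrete, here is a minimal Python sketch of one purchase being counted three different ways. The journey, the timestamps, and the windowing rules are simplified illustrations, not the platforms' actual internal logic:

```python
from datetime import datetime, timedelta

# One hypothetical purchase, three counting systems. These rules are
# simplified stand-ins for each platform's logic, not real implementations.
purchase_time = datetime(2026, 4, 11, 20, 0)
touchpoints = [
    ("meta",  "view",  datetime(2026, 4, 11, 9, 0)),   # ad viewed the same morning
    ("email", "click", datetime(2026, 4, 11, 19, 0)),  # email clicked just before buying
]

# Meta-style: claim the purchase if any Meta click fell within 7 days,
# or any Meta view fell within 1 day, of the purchase.
meta_claims = any(
    ch == "meta" and (
        (kind == "click" and purchase_time - ts <= timedelta(days=7)) or
        (kind == "view"  and purchase_time - ts <= timedelta(days=1))
    )
    for ch, kind, ts in touchpoints
)

# Last-click style: only the final click before the purchase gets credit.
last_click_channel = max(
    (tp for tp in touchpoints if tp[1] == "click"), key=lambda tp: tp[2]
)[0]

# Backend style (Shopify): the order simply exists; no attribution logic at all.
backend_orders = 1

print(f"Meta claims the purchase: {meta_claims}")         # True (view within 1 day)
print(f"Last click belongs to: {last_click_channel}")     # email, not paid social
print(f"Backend order count: {backend_orders}")           # 1, regardless of source
```

Three systems, three answers, one purchase. Scale that across a month of orders and the 180 / 140 / 95 spread stops looking like manipulation and starts looking like what it is: different counting rules.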

None of these systems is wrong. They measure different things. But when clients receive platform-reported ROAS without any contextualizing layer, the divergence they discover on their own reads as evidence they are being managed rather than informed.

See why this divergence between Meta and GA4 is structural and predictable rather than a sign that attribution is broken — understanding the normal gap between the two systems is what allows an agency to explain it clearly instead of hoping clients don't notice.

The Four-Metric Reporting Foundation

Agency performance reporting that retains clients leads with metrics the client can independently verify before presenting metrics that require attribution context to interpret.

| Metric | Source | What It Measures | Client Trust Level |
|---|---|---|---|
| Meta Reported ROAS | Ads Manager | Platform-attributed conversions (7-day click / 1-day view) | Low without context |
| GA4 Paid Social Revenue | Google Analytics 4 | Last-click attributed revenue from paid social sessions | Medium |
| Blended ROAS | Backend (Shopify / revenue source) | Total revenue divided by total ad spend | High |
| MER | Backend + total spend | Total revenue divided by all marketing spend | Very high |

Blended ROAS and MER are the headline metrics. Everything else is supporting context for optimization decisions. MER — total revenue divided by total marketing spend across every channel — is the single highest-trust metric because it requires no attribution model and cannot be distorted by platform reporting window changes. Clients who understand their MER trend develop confidence in the relationship even during periods when platform-reported numbers are fluctuating.
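
As a quick sketch of the arithmetic, with purely illustrative numbers, the two headline metrics differ only in what goes into the denominator:

```python
# Illustrative numbers only; the point is the denominator, not the figures.
backend_revenue = 182_000   # total revenue from the store backend
meta_spend      = 38_000
google_spend    = 14_000
other_marketing = 9_500     # email platform, agency fee, influencer budget, etc.

ad_spend     = meta_spend + google_spend
blended_roas = backend_revenue / ad_spend                      # revenue / total ad spend
mer          = backend_revenue / (ad_spend + other_marketing)  # revenue / ALL marketing spend

print(f"Blended ROAS: {blended_roas:.2f}")  # 3.50
print(f"MER:          {mer:.2f}")           # 2.96
```

Neither number asks the client to trust an attribution model. Both reconcile against the bank account.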

See how the post-iOS 14 environment made MER the primary account health metric, and why platform-reported ROAS as a standalone number has been structurally unreliable since 2021. The case for leading with blended metrics is not just philosophical; it reflects the actual state of cross-channel attribution.

The Three Reporting Failures That Accelerate Churn

Failure One: Reporting Platform Numbers as the Source of Truth

Sending a client a report built around Meta ROAS without a blended layer is establishing a foundation that will eventually collapse. At some point the client will look at their bank account, their Shopify revenue, or their GA4 dashboard, and the numbers will not reconcile with what the agency presented. That moment is not the beginning of a conversation — it is the beginning of a case building quietly in the client's head.

Every subsequent report becomes evidence under re-examination. The question shifts from "how did we perform this month?" to "what have they been telling me all along?"

The fix is straightforward: blended ROAS and MER in the headline every month, with platform-reported numbers presented as directional inputs that inform optimization rather than the primary performance verdict.

Failure Two: Data Without Narrative

A report that shows last month's numbers without explaining what they mean is not agency performance reporting. It is a spreadsheet formatted to look like one.

If CPM increased 22 percent month over month, the report should say why — competitive auction pressure, audience saturation approaching, seasonal cost floor rising — and what the response is: creative refresh in production, audience expansion test launching this week, bid cap adjustment implemented on Friday.

The narrative layer is where the agency's expertise becomes visible. Clients who understand the cause-and-effect relationship between what happened in the account and what the team is doing about it are clients who renew. Clients who receive numbers without interpretation start asking around.

Failure Three: Lighter Reports During Bad Periods

This failure accelerates relationship deterioration faster than any other. Every account has down months. Creative fatigues. Seasonality creates CPM spikes. iOS signal loss compresses targeting efficiency. These are real, explainable phenomena that clients understand when explained clearly.

What they do not understand — and what destroys trust permanently — is receiving a report that arrives thinner than usual, with the unflattering metrics buried in appendix tables or removed from the summary entirely.

The correct approach is the inverse: when performance is down, the report should be more detailed, not less. State clearly what declined, identify the most likely root cause based on available data, specify the actions being taken, and provide a timeline for expected impact. See the five-stage funnel audit that isolates whether a performance decline is attribution, creative, landing page, or auction-related — the diagnosis framework in that post is exactly what the narrative layer of a down-period report should reflect.

Clients are substantially more forgiving of bad performance than they are of the feeling that the agency is managing their perception rather than informing their understanding.

The Structure of a Trust-Building Report

Strong agency performance reporting has consistent architecture across every period — regardless of how performance looks.

Executive Summary. One page, no jargon. Blended ROAS, MER, total revenue, total ad spend, and the single most important thing that happened this period. This is the section the founder or CMO of the client business needs to read. Two minutes. No charts. Just the number that matters and the sentence that explains it.

Platform Performance Overview. Each active channel shows spend, reported conversions, platform ROAS, and a rolling trend line. This section includes one paragraph explaining any material divergence between platform-reported numbers and blended performance. It is not an apology section — it is a measurement transparency section. The explanation should appear even in periods when the divergence is expected and normal.

Creative Performance Breakdown. Top-performing assets ranked by format-appropriate KPIs. Video assets show hook rate, hold rate, CTR, and conversion rate. Static assets show CTR and conversion rate. This section makes the creative process visible and builds client trust in the creative iteration system — not just the media buying decisions.
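
For reference, a sketch of how those KPIs are typically assembled. The definitions below (hook rate as 3-second plays over impressions, hold rate as ThruPlays over 3-second plays) are common working conventions rather than official platform metrics, and teams vary in exactly how they define them:

```python
# Common working definitions for video creative KPIs; conventions vary by team.
def video_kpis(impressions: int, plays_3s: int, thruplays: int,
               clicks: int, conversions: int) -> dict:
    return {
        "hook_rate": plays_3s / impressions,  # did the first 3 seconds stop the scroll?
        "hold_rate": thruplays / plays_3s,    # of those hooked, who watched through?
        "ctr":       clicks / impressions,
        "cvr":       conversions / clicks if clicks else 0.0,
    }

# Hypothetical asset-level numbers:
print(video_kpis(impressions=120_000, plays_3s=31_000,
                 thruplays=9_600, clicks=1_450, conversions=62))
```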

Testing Log. Every test that ran this period: the hypothesis, the result, and the conclusion. This section accumulates into one of the clearest arguments for retaining an agency over time. You cannot replicate two years of documented test outcomes by switching agencies. The testing log is evidence of compound creative intelligence.

Forward Plan. Tests scheduled for next period, creative in production, audience decisions in progress. Agencies that communicate strategic intent — not just historical results — create the impression of being ahead of the work rather than behind it.

Tracking and Attribution Notes. A standing section that flags pixel issues, attribution window changes, platform reporting anomalies, or tracking events that behaved unexpectedly this period. This section prevents the "our numbers don't match your numbers" conversation from becoming a trust crisis.

Ownership Structure Behind Consistent Reporting

Strong reporting does not happen because one person is motivated enough to do it well at 9 pm the night before the client call. It happens because the reporting function has clear ownership and the work is distributed.

The account lead owns the narrative layer: executive summary, test conclusions, forward plan, and the framing of any difficult results. They own what the report says and why it says it.

The media buyer owns the platform data layer: pulling numbers from each channel, flagging anomalies, confirming attribution settings match what was active during the reporting period, and providing the raw performance data the account lead translates into narrative.

An analytics function — dedicated or shared — owns blended ROAS calculation, MER construction, pixel and Conversions API integrity checks, and the reconciliation between platform-reported data and backend revenue. This function is what makes the four-metric foundation above possible. Without it, the headline metrics revert to platform self-reporting by default.
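
A minimal version of that reconciliation check might look like the sketch below. Every figure and the tolerance band are assumptions; in practice the band is calibrated from the account's own history:

```python
# Hypothetical monthly inputs for a platform-vs-backend reconciliation check.
platform_reported = {"meta": 96_000, "google": 41_000}  # platform-attributed revenue
backend_revenue   = 182_000                              # order-system source of truth

attributed_total = sum(platform_reported.values())
coverage = attributed_total / backend_revenue  # above 1.0 means platforms over-claim

# A standing tolerance band keeps "normal" divergence out of the trust conversation.
EXPECTED_RANGE = (0.60, 1.10)  # assumed band; calibrate per account from history
if EXPECTED_RANGE[0] <= coverage <= EXPECTED_RANGE[1]:
    print(f"Coverage {coverage:.0%} is within this account's normal band")
else:
    print(f"Flag for tracking notes: coverage {coverage:.0%} outside normal band")
```

When the check runs every month, an out-of-band result becomes a line in the Tracking and Attribution Notes section instead of a surprise the client discovers first.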

When these three layers are owned separately and assembled before the client call, the report is coherent and defensible. When one person is responsible for all three, something gets cut — and it is usually the layer the client trusts most.

Reporting as a New Business Tool

The reporting structure described here is also one of the more effective conversion assets an agency can deploy during a pitch.

Prospective clients are making one primary evaluation in the sales process: will I be able to trust what this agency tells me? They cannot audit the media buying until they hire you. They cannot evaluate the creative strategy until they see it in production. But they can look at a sample report during the proposal stage — right now.

Showing a prospective client a redacted sample report with blended ROAS methodology explained, attribution complexity acknowledged, and a visible testing log closes more deals than any deck about team credentials or proprietary process. It shows rather than tells. In performance marketing, showing is always more persuasive than telling.

FAQ

What if a client pushes back on MER because organic traffic inflates it? Acknowledge the limitation explicitly and address it in the tracking notes section. When a PR mention or organic spike materially improved MER without paid media contributing, document it in the report and calculate an adjusted MER that isolates the spike. Clients who see the agency identifying and correcting for favorable anomalies trust the numbers more, not less.
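
A sketch of that adjustment with hypothetical numbers: the spike revenue comes out of the numerator, and the client sees both the headline figure and the isolated one.

```python
# Adjusted MER isolating an organic spike; all figures are hypothetical.
total_revenue   = 182_000
organic_spike   = 24_000   # revenue traced to a PR mention, no paid contribution
marketing_spend = 61_500

mer          = total_revenue / marketing_spend                    # 2.96
adjusted_mer = (total_revenue - organic_spike) / marketing_spend  # 2.57

print(f"Headline MER: {mer:.2f} | Adjusted (ex-spike): {adjusted_mer:.2f}")
```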

How often should blended ROAS and MER be reviewed with clients? Monthly is the minimum. For accounts spending above $30,000 per month across channels, weekly MER tracking should be available in a live dashboard the client can access between reports. The report is the monthly synthesis — the dashboard is the operational visibility layer.

Should the report structure change based on client sophistication? The data does not change. The narrative depth does. A founder running their first DTC brand needs more explanation of what blended ROAS means and why it differs from what Meta reports. A seasoned CMO needs less. The executive summary section is where this calibration matters most — lead with what is most legible for the specific reader, not with what is most comprehensive.

Closing

Fix the report before you fix the account.

Most agency relationships that are fraying over performance are actually fraying over reporting. The account may be executing well on a blended basis — and the client has no way to know because the reporting is not giving them the tools to see it.

Build blended ROAS and MER into every headline. Explain platform attribution divergence every time, not just when a client asks. Make the testing system visible through documentation. Assign clear ownership to each layer of the reporting function.

The report is the only product the client holds in their hands every month. The media buying happens inside platforms they do not log into. The creative strategy runs in workflows they are not part of. The attribution analysis happens in tools they do not access.

Treat the report like the product it is. When it is done well, clients stop being nervous and start being advocates — and that shift is worth more to retention than any campaign optimization you will ever run inside the account.
