
How to Build a Paid Media Playbook That a New Hire Can Execute Without You

Build a paid media playbook your team can run without you — KPI systems, attribution across platforms, and creative workflows that actually scale.

Jordan Glickman · May 10, 2026
Operations

The moment you realize your agency cannot run without you is usually the moment you are too busy to fix it.

Every decision flows through you. Every budget call needs sign-off. Every client escalation lands in your inbox. That is not an agency — it is a one-person practice with a larger payroll attached.

The fix is not hiring more people. It is building a paid media playbook tight enough that a capable new hire can execute at a high level by week three. Not week twelve. Not after sitting in every call. Week three.

This is what that actually requires.

[Image: agency paid media decision stack framework — three tiers from new hire to CEO. Caption: "The decision stack is what turns a document into a system. When scope is clear, new hires stop waiting on small decisions and stop overreaching on large ones."]

Why Most Agency Playbooks Fail Before They Start

Most agency playbooks are either 40-page Google Docs nobody reads or knowledge that lives entirely in the founder's head. Neither scales.

A real paid media playbook for agency operations has to do three things: define what good looks like, clarify who makes which decisions, and give a new hire enough signal to move without asking. If it does not accomplish all three, you remain the bottleneck.

Step 1: Build the Decision Stack Before Writing Anything Else

Before documenting a single tactic, establish ownership structure.

| Decision Tier | Who Decides | Examples |
|---|---|---|
| Tier 1 — New hire decides alone | New hire | Creative swaps, bid adjustments under 20%, pausing underperformers against preset thresholds |
| Tier 2 — New hire flags, senior approves | Senior lead | Budget shifts over 20%, launching new campaigns, changing campaign objectives, account structure changes |
| Tier 3 — Account lead or CEO only | Account lead / CEO | Offer changes, strategic pivots, client-facing budget recommendations, adding a new channel |

Document this decision stack before anything else. When a new hire knows exactly where their authority ends, they stop waiting for permission on small decisions and stop overreaching on decisions that require pattern recognition they do not yet have.

The temptation is to keep Tier 1 narrow to protect against mistakes. The cost of that choice is a team that cannot move without constant supervision. The playbook works when Tier 1 is generous, Tier 2 is explicit about what triggers escalation, and Tier 3 is genuinely reserved for judgment calls that require account-level context.
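The decision stack above can be sketched as a simple routing rule. This is a minimal illustration under stated assumptions, not a prescribed implementation — the `Action` type, the kind names, and the exact threshold handling are hypotheticals drawn from the table:

```python
# Hypothetical sketch of the decision stack as a routing rule.
# Tier names and the 20% threshold follow the table above; the
# Action type and field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str                # e.g. "bid_adjustment", "budget_shift", "offer_change"
    pct_change: float = 0.0  # magnitude for bid/budget changes

TIER_3_KINDS = {"offer_change", "strategic_pivot", "new_channel",
                "client_budget_recommendation"}
TIER_2_KINDS = {"new_campaign", "objective_change", "structure_change"}

def decision_tier(action: Action) -> int:
    """Return 1 (new hire decides), 2 (senior approves), or 3 (account lead/CEO)."""
    if action.kind in TIER_3_KINDS:
        return 3
    if action.kind in TIER_2_KINDS:
        return 2
    if action.kind in {"budget_shift", "bid_adjustment"} and action.pct_change > 20:
        return 2  # over the 20% threshold, escalate to a senior lead
    return 1      # creative swaps, small bid changes, preset-threshold pauses

print(decision_tier(Action("bid_adjustment", pct_change=15)))  # 1
print(decision_tier(Action("budget_shift", pct_change=35)))    # 2
print(decision_tier(Action("offer_change")))                   # 3
```

The point of expressing it this way is that tier boundaries become checkable rules rather than judgment calls a new hire has to guess at.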

Step 2: Build the KPI Framework Into the Playbook, Not the Reporting Deck

One of the most common operational errors in agency scaling is keeping the KPI framework inside the client reporting document rather than the internal playbook.

A new hire should not need to open a client deck to know what they are optimizing for. It should be in the playbook with decision triggers attached.

| Metric | Frequency | Owner | Decision Trigger |
|---|---|---|---|
| Blended MER | Weekly | Account Lead | If below target, flag creative and offer before touching structure |
| Platform ROAS (Meta, Google) | Daily | Media Buyer | Reference only — not standalone decision input |
| CAC by channel | Weekly | Media Buyer | If 20% above target, reduce budget on that channel |
| Creative CTR and hook rate | Weekly | Creative Strategist | If CTR below threshold, pull and replace |
| 30-day new customer payback | Monthly | Account Lead | Primary health metric for scaling decisions |

The key column is "Decision Trigger." That is what converts a reporting metric into a system. Without it, the metric is a number. With it, the metric is an instruction.
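To make "the metric is an instruction" concrete, here is a minimal sketch of the table's triggers expressed as code. The metric names, target values, and flag wording are illustrative assumptions, not a specific tool's API:

```python
# Sketch of KPI rows as executable triggers rather than report
# numbers. Thresholds mirror the table above; targets and flag
# messages are illustrative assumptions.
def kpi_flags(metrics: dict, targets: dict) -> list[str]:
    """Return the list of decision triggers that fired this week."""
    flags = []
    if metrics["blended_mer"] < targets["blended_mer"]:
        flags.append("MER below target: review creative and offer before structure")
    if metrics["cac"] > targets["cac"] * 1.20:  # 20% above target
        flags.append("CAC 20%+ above target: reduce budget on this channel")
    if metrics["creative_ctr"] < targets["creative_ctr"]:
        flags.append("CTR below threshold: pull and replace creative")
    return flags

flags = kpi_flags(
    metrics={"blended_mer": 1.7, "cac": 62.0, "creative_ctr": 0.012},
    targets={"blended_mer": 2.0, "cac": 50.0, "creative_ctr": 0.010},
)
print(flags)  # MER and CAC triggers fire; CTR is healthy
```

A new hire running a check like this on Monday knows exactly which triggers fired and which role owns each response.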

Platform ROAS is listed as reference only, not a decision input. This is intentional. Meta and GA4 will almost never agree on conversion numbers. A new hire making budget decisions based on platform-reported ROAS without triangulation against blended MER is making decisions on a number that systematically overstates performance. Document the expected discrepancy between Meta ROAS and blended MER for each client account, and write it in the playbook. See why the divergence between Meta Ads Manager and GA4 is structural rather than a tracking error — and how to set client expectations around it proactively.

Step 3: Solve Attribution Documentation Before the Handoff

Attribution is where most paid media playbooks break down. A new hire who does not understand the measurement gap between Meta and GA4 will make consistently wrong decisions — and will not know they are wrong.

The playbook needs to document three things about attribution for every client account:

The expected Meta-to-MER gap. If a client's Meta ROAS typically reads 2.8x but blended MER sits at 1.9x, document that explicitly. A new hire who sees a 1.9x MER and panics is as problematic as one who sees a 2.8x Meta ROAS and scales prematurely.
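Documenting the expected gap can be as simple as a lookup the new hire checks before reacting. A minimal sketch, assuming a hypothetical client entry and tolerance value — the real figures live in each account brief:

```python
# Sketch: documenting each client's expected Meta-ROAS-to-MER gap
# so a new hire can tell normal divergence from a real problem.
# The client entry and tolerance are illustrative assumptions.
EXPECTED_GAPS = {
    # client: (typical Meta ROAS, typical blended MER)
    "client_a": (2.8, 1.9),
}

def gap_is_normal(client: str, meta_roas: float, mer: float,
                  tolerance: float = 0.3) -> bool:
    """True if the observed ROAS-to-MER gap is within the documented range."""
    exp_roas, exp_mer = EXPECTED_GAPS[client]
    expected_gap = exp_roas - exp_mer          # 0.9x for client_a
    observed_gap = meta_roas - mer
    return abs(observed_gap - expected_gap) <= tolerance

print(gap_is_normal("client_a", 2.7, 1.85))  # True — normal divergence
print(gap_is_normal("client_a", 3.2, 1.5))   # False — investigate before acting
```

Either failure mode from the paragraph above — panicking at a normal 1.9x MER or scaling on an inflated 2.8x ROAS — is prevented by the same documented number.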

TikTok's attribution behavior. TikTok's default attribution window matches Meta's on paper, but buyer behavior differs significantly. TikTok drives view-through conversions at rates that inflate platform-reported ROAS, and in-app TikTok Shops purchases use a separate attribution model from off-platform conversions. The playbook should explicitly state that TikTok performance is evaluated on incremental contribution and blended MER, not on platform-reported ROAS. See why holdout testing is the methodology that validly measures incremental contribution for TikTok and Meta campaigns — and when it becomes necessary.

The three-signal measurement framework. Every account should be evaluated on: platform-reported ROAS (directional reference), blended MER (primary health signal), and new customer payback period (scaling decision input). Tell the new hire: if all three are trending in the same direction, trust the signal. If they are diverging, escalate before changing anything.
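The "trust aligned trends, escalate divergence" rule can be written down as a check. A sketch under stated assumptions — the trend inputs are week-over-week deltas, and the function name and sign convention are hypotheticals:

```python
# Sketch of the three-signal check: trust aligned trends, escalate
# divergence. Inputs are week-over-week deltas; the sign convention
# and verdict strings are assumptions for illustration.
def signal_verdict(platform_roas_trend: float,
                   blended_mer_trend: float,
                   payback_trend: float) -> str:
    # For payback period, a shorter period is better, so flip its sign
    trends = [platform_roas_trend, blended_mer_trend, -payback_trend]
    if all(t > 0 for t in trends):
        return "improving: trust the signal"
    if all(t < 0 for t in trends):
        return "declining: trust the signal"
    return "diverging: escalate before changing anything"

print(signal_verdict(0.3, 0.1, -2.0))  # all three improving
print(signal_verdict(0.4, -0.2, 1.0))  # platform up, MER down: escalate
```

The escalation branch is the whole point: divergence is exactly the situation where a new hire's pattern recognition is not yet trustworthy.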

Step 4: Document the Creative System

Creative strategy is the most under-documented component of the typical paid media playbook. It is also the component most directly responsible for performance variance.

The three-layer testing hierarchy:

Layer 1 — Hook testing. Same creative body, same offer, different opening three to five seconds. Run four to six hook variations per concept. The hook is the highest-leverage creative variable at any spend level. Do not test other elements until the hook layer has a clear winner.

Layer 2 — Format testing. Once a hook performs, test the same hook in multiple formats: UGC talking head, static image, lifestyle B-roll with voiceover. Do not assume the format that won in one account transfers to another.

Layer 3 — Offer testing. When a format wins, test the offer framing — the same product positioned as a discount versus a bundle versus a guarantee produces materially different results for different audience segments. This is where the largest performance leverage is found, and it requires the prior two layers to be resolved first.

See how the creative brief is where this hierarchy is operationalized — and why brief quality is the upstream constraint on what any testing system can produce.

UGC brief standards. Document the brief format explicitly in the playbook: exact hook language for the first three seconds, one product feature per video, specific CTA wording, visual direction, and what to exclude. A new hire with a good brief template can manage the full UGC operation without creative review at every stage. Without brief standards, UGC output varies with whoever is writing the brief that week.

Step 5: Channel-Specific Execution Rules

Different channels have different optimization logic. The playbook needs to document channel-specific rules so a new hire is not applying Meta logic to Google or TikTok logic to Meta.

Meta Ads:

  • Advantage Plus Shopping Campaigns as the primary scaling vehicle for eCommerce above the learning phase threshold
  • Budget changes capped at 20% per day — let the learning phase stabilize before adding spend
  • Creative refresh trigger: when frequency on winning ads reaches 2.5 on cold audiences, add variants before performance declines
  • Attribution window for decisions: seven-day click only; remove view-through conversions from the primary evaluation to reduce algorithm credit inflation
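Two of the Meta rules above are mechanical enough to sketch as guardrails. A minimal illustration — the function names and the clamping approach are assumptions, not a platform API:

```python
# Sketch of two Meta rules from the list above as guardrails:
# the 20%-per-day budget cap and the frequency-2.5 refresh trigger.
# Function names and structure are assumptions for illustration.
def capped_daily_budget(current: float, requested: float) -> float:
    """Clamp any single-day budget change to +/-20% of current spend."""
    ceiling = current * 1.20
    floor = current * 0.80
    return max(floor, min(requested, ceiling))

def needs_creative_refresh(frequency: float, audience: str) -> bool:
    """Flag winning ads for variants once cold-audience frequency hits 2.5."""
    return audience == "cold" and frequency >= 2.5

print(capped_daily_budget(1000, 1500))      # 1200.0 — capped at +20%
print(needs_creative_refresh(2.6, "cold"))  # True — brief variants now
```

Guardrails like these are what let Tier 1 stay generous: the new hire can move daily without a senior lead checking every change.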

Google Ads:

  • Performance Max as the default for eCommerce; avoid over-segmentation that fragments conversion signal
  • Brand campaigns run separately, always — never allow branded keywords to compete with non-brand campaigns for budget
  • Negative keyword maintenance is a weekly task, not a monthly one; falling behind produces irreversible spend waste
  • Cross-reference data-driven attribution in Google against blended MER; last-click produces systematically misleading conclusions for multi-touch journeys

TikTok:

  • Evaluate on incremental contribution and blended MER, not platform ROAS — TikTok self-attribution is more inflated than Meta at comparable spend levels
  • Optimize for purchase conversion events at the campaign level; add-to-cart and initiate-checkout objectives waste budget on non-buyers
  • Creative refresh cadence is roughly twice as frequent as Meta — TikTok creative fatigues faster due to higher content consumption rates and rapidly shifting aesthetic norms
  • TikTok Shops in-app purchase attribution is claimed by the platform even when discovery happened elsewhere; document the expected overclaim rate and do not make allocation decisions from TikTok dashboard numbers alone
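The documented overclaim rate from the last bullet can be applied as a simple deflator before any cross-channel comparison. A sketch — the 30% rate is a placeholder, not a benchmark; the playbook documents the real per-account figure:

```python
# Sketch: discounting TikTok dashboard revenue by a documented
# overclaim rate before comparing channels. The 30% default is a
# placeholder assumption — use the per-account documented figure.
def adjusted_tiktok_revenue(reported: float, overclaim_rate: float = 0.30) -> float:
    """Deflate platform-claimed revenue by the documented overclaim share."""
    return round(reported * (1 - overclaim_rate), 2)

print(adjusted_tiktok_revenue(10_000))  # 7000.0 — the number to use for allocation
```

The adjusted figure, not the dashboard figure, is what feeds blended MER and allocation decisions.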

Step 6: Define the Hiring Structure That Makes the Playbook Executable

A playbook is only as useful as the team it is written for. The organizational structure needs to define clear lanes that do not collapse into each other.

Media buyer owns platform execution. Accountable to CAC targets and spend pacing. Does not make creative decisions.

Creative strategist owns the brief, the testing hierarchy, and creative performance analysis. Does not touch campaign structure or bidding strategy. See how the creative velocity benchmark at each spend tier determines how many briefs the creative strategist needs to maintain in the production queue at any given time.

Data analyst (dedicated or fractional) owns the measurement layer: MER reporting, payback period tracking, attribution reconciliation across platforms. Does not make media allocation decisions, but flags when signals diverge and escalates to the account lead before anyone changes anything.

Account lead sits across all three functions. Escalates Tier 3 decisions only. Their primary job is synthesizing the signal from all three roles into clear recommendations and handling client-facing communication.

This structure means a new media buyer is accountable in a defined lane. They are not also evaluating creative or reconciling attribution discrepancies. Both of those are documented functions belonging to different roles.

Step 7: Build the Weekly Operating Rhythm

Systems without rhythm decay. The operating cadence needs to be in the playbook, not in someone's calendar.

Monday: Pull seven-day performance data. Flag any metrics outside threshold. No budget changes today — Monday is diagnosis, not action.

Tuesday: Media buyer review. Make budget and creative swap decisions based on Monday's data with the decision stack as the decision boundary.

Wednesday: Creative strategist review. Brief new assets based on winning hooks. Kill underperformers. Update the production queue.

Friday: Account lead reviews MER and payback period. Prepares the client-facing summary. Escalates any Tier 3 items before the week closes.

When the operating rhythm is documented in the playbook, a new hire knows the structure of every week without asking. The rhythm creates accountability without requiring the founder to enforce it through presence. See how the reporting structure that emerges from this rhythm becomes the client retention mechanism — and why the weekly cadence determines whether clients trust what they receive.

FAQ

How long should the playbook actually be? Long enough to answer the questions a new hire will ask in their first four weeks, and no longer. The test is whether someone who has never worked at the agency could execute core functions correctly after reading it and asking two follow-up questions. If it requires more than two follow-up questions, the playbook has a gap. If it is longer than 30 pages, it probably has padding that is working against usability.

Should client-specific information be in the playbook or in a separate account brief? Separate. The playbook contains universal operating standards — decision stack, KPI framework, attribution documentation structure, creative testing hierarchy, channel rules, and operating rhythm. Client-specific information (expected MER gap, historical performance ranges, offer constraints, brand guidelines) lives in a client brief linked from the account in the project management system. The playbook tells the new hire how to think. The client brief tells them what to think about for a specific account.

When should the playbook be updated? After any Tier 3 decision that produces a learnable outcome. After any quarter where a performance pattern emerged that the playbook did not anticipate. After any hiring cycle where a new team member surfaced a gap that required repeated explanation rather than a playbook read. The playbook is a living document with a quarterly review cadence, not a static reference artifact.

Closing

Expertise is what earns the first clients. Process is what allows the agency to serve twenty of them.

A well-built paid media playbook does not make the founder irrelevant. It makes judgment available at the right moments rather than at all moments. The new hire executes. The senior lead reviews. The account lead and CEO show up for decisions that require pattern recognition that cannot be documented — because those decisions genuinely require experience, not just instructions.

Build the decision stack. Embed the KPI triggers. Document the attribution framework for every account. Write the creative testing hierarchy into the playbook itself. Define the channel rules. Draw the org chart. Set the weekly rhythm.

Then step back and let the system run. That is when the agency stops being a bottleneck and starts being a scalable operation.

