AI Raises the Floor. Depth Is How You Win.
The narrative that AI replaces entry-level work misunderstands what entry-level people actually do. A junior in activation isn't doing busywork — they're configuring DV360 targeting, QA-ing tracking pixels, managing bid strategies. The real question is: when AI raises the floor for everyone, where does the advantage come from? Depth.
I've been building an AI-Native Media Operations course over the past few months. Seven modules. Dozens of slides. And the one slide I keep rewriting — the one I still don't think I've gotten right — is about what happens to the people entering the industry when the operating model shifts to 75-80% AI.
I keep writing versions that sound confident. Then I delete them, because I'm not confident. I have directions, not answers. And I think the honest version of this conversation is more useful than the polished one.
So here's what I've been sitting with.
The Narrative Is Wrong
You've heard it: "AI replaces entry-level work." It's a clean story. It's also wrong — or at least, it misunderstands what entry-level people actually do.
Walk through the disciplines in a modern media agency:
- Strategy: Pulling competitive analysis, synthesizing research briefs, spotting patterns in data that seniors miss because they're in meetings all day
- Planning: Building media plans, running budget scenarios, constructing audience segments — often closer to the actual data than the senior planner reviewing their work
- Activation: Configuring DV360 targeting, QA-ing tracking pixels, managing bid strategies across platforms — genuinely technical, high-stakes work where a misconfigured audience can burn through budget in hours
- Ad Ops: Trafficking ads, debugging tracking discrepancies, maintaining measurement integrity across dozens of platforms
- Research: Evaluating survey methodology, catching sample bias, coding qualitative responses — the kind of careful analytical work that requires genuine skepticism
- Reporting: Building dashboards, identifying anomalies, knowing when the data doesn't add up even though the charts look fine
These aren't "repetitive tasks." They're substantive contributions that require judgment, platform knowledge, and client context. The person configuring a DV360 campaign isn't doing busywork — they're making dozens of technical decisions that directly affect whether the media plan actually delivers.
The Senior Validation Gap Nobody Talks About
Here's something I don't hear discussed enough: your VP hasn't been inside DV360 daily in years. Your planning director doesn't build audience segments by hand anymore. The people making strategic decisions have often delegated the platform-level execution for so long that they couldn't validate AI output at that layer even if they wanted to.
When AI generates a campaign setup, who validates it's correct? When it builds an audience segment, who checks whether the data sources are right? When it produces a measurement framework, who knows whether the tracking architecture actually supports it?
Often it's the people closest to the platforms. The same people being told their work is "routine."
I think this is the gap that makes the "AI replaces junior work" narrative dangerous. The 75-80% that AI handles still needs validation. That validation requires depth — platform expertise, tracking architecture knowledge, data source familiarity. And in many organizations, that depth lives with the people we're casually suggesting will be displaced.
The Parade Problem
I keep coming back to this analogy. When everyone has AI, broad capability becomes a parade — impressive from a distance, identical up close. Every agency can generate media plans, audience insights, competitive reports, creative briefs at scale. The tools are the same. The prompts converge. The output normalizes.
So where does the advantage come from?
Depth. Going deeper than AI + competitors in specific disciplines. Not broader — deeper.
This is counterintuitive if you grew up in an industry that valued "T-shaped" generalists. But I think the shape is changing. When AI provides the horizontal bar of the T for free, the only differentiator is how far down the vertical bar goes.
Depth-First Career Development
The old model was: start broad, specialize later. You'd rotate through departments, get exposure to planning and buying and reporting, then eventually find your lane.
I think the better model now is the reverse: go deep first, then broaden.
AI already provides breadth. Any junior can use AI to draft a media plan, build a competitive analysis, or generate a research summary. That's the floor — it's been raised for everyone. What's scarce is the person who knows activation or measurement or creative evaluation better than the AI does. The person who looks at AI output and immediately sees what's wrong.
That evaluation skill — the ability to assess AI work with genuine expertise — requires depth. And depth requires focused time in a discipline, not a rotation through five departments in your first two years.
What "Going Deeper" Actually Looks Like
This is where I want to be specific, because generic career advice is useless.
Activation: Become the platform-AI bridge. Know the platform capabilities and limitations well enough to spot when AI's configurations won't work in reality — the audience that's too narrow to deliver, the bid strategy that doesn't suit the objective, the placement list that includes inventory the client explicitly excluded.
Ad Ops: Shift from tag implementation to tracking architecture. Don't just place the pixels — design the measurement infrastructure that AI depends on. Understand consent frameworks, server-side tagging, data clean rooms. The person who can architect measurement systems is not being displaced by AI. They're becoming more important.
Planning: Learn to stress-test, not just build. Anyone can build a plan now. The value is knowing when the math works but the strategy doesn't — when the reach curve looks efficient but the frequency will annoy the audience, when the channel mix is optimized on paper but ignores how the brand actually shows up in each environment.
Research: Develop skepticism as a core skill. AI can synthesize research faster than any human. But it can also confidently present findings from a poorly designed survey, conflate correlation with causation, and miss sample bias. The researcher who spots methodology flaws is more valuable than ever.
Creative: Build the aesthetic judgment that AI lacks. AI can generate variants. It cannot tell you why this particular variant works for this particular brand in this particular context. That judgment — informed by taste, brand knowledge, and cultural awareness — is developable but not automatable.
Reporting: Be the data integrity layer. AI builds beautiful dashboards. But dashboards can be beautiful and wrong. The person who knows when the attribution model is misleading, when the data source changed quietly, when the numbers look right but the story they're telling is backwards — that person is essential.
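To make "the data integrity layer" concrete, here's a minimal sketch of the kind of automated sanity check a reporting specialist might encode. Everything here is hypothetical — the metric names, the 5% threshold, and the figures are illustrative, not a real agency standard:

```python
# Hypothetical sanity check: flag metrics where platform-reported and
# ad-server numbers disagree beyond an agreed discrepancy threshold.

def discrepancy(platform_value: float, adserver_value: float) -> float:
    """Relative discrepancy between two reported figures."""
    if adserver_value == 0:
        return float("inf") if platform_value else 0.0
    return abs(platform_value - adserver_value) / adserver_value

def flag_metrics(platform: dict, adserver: dict, threshold: float = 0.05) -> list:
    """Return the metric names whose discrepancy exceeds the threshold."""
    return [
        m for m in platform
        if m in adserver and discrepancy(platform[m], adserver[m]) > threshold
    ]

# Example: impressions roughly agree, but clicks are off by ~14% —
# the kind of quiet divergence a dashboard can hide.
platform = {"impressions": 1_000_000, "clicks": 8_000}
adserver = {"impressions": 998_500, "clicks": 7_000}
print(flag_metrics(platform, adserver))  # ['clicks']
```

The check itself is trivial; the depth is in choosing the threshold and knowing which metric pairs are worth comparing. That judgment is exactly what can't be delegated to the tool.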
The Eval Layer Nobody's Talking About
There's a concept from AI development that I think maps directly onto this: evals. In AI, an eval is the ground truth — the criteria that define what "correct" looks like. Without evals, you can't tell whether AI output is good or bad. You're just trusting the machine.
In media operations, evals already exist. They're just not called that.
- Your pre-launch checklist is an eval. It defines what a correct campaign setup looks like.
- Your KPI ladder is an eval. It defines what good performance means.
- Your brand guide is an eval. It defines what compliant creative looks like.
- Your tracking accuracy standard is an eval. It defines what reliable measurement means.
The people who build and maintain these — who encode expert judgment into operational criteria — are doing something AI fundamentally cannot do for itself. AI can generate a campaign setup. It cannot define what a correct campaign setup looks like for this client in this market with these constraints. That requires depth.
And here's what I think is underappreciated: building evals is one of the most powerful learning exercises available. When you ask someone to define what "correct" looks like for their discipline — to write the pre-launch checklist, to specify the acceptable discrepancy threshold, to build the creative compliance rubric — they have to understand the work deeply enough to encode judgment. That's not administrative work. That's accelerated depth development.
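As a sketch of what "encoding judgment" can look like in practice, here is a toy pre-launch checklist expressed as an eval. Every check, field name, and limit below is a made-up illustration — not a real DV360 rule or client standard:

```python
# Toy eval: a pre-launch checklist encoded as named pass/fail checks.
# All field names and limits are illustrative, not platform rules.

CHECKS = {
    "frequency cap set": lambda c: c.get("frequency_cap") is not None,
    "audience not too narrow": lambda c: c.get("audience_size", 0) >= 50_000,
    "only approved inventory": lambda c: not (
        set(c.get("placements", [])) & set(c.get("client_exclusions", []))
    ),
    "budget within plan": lambda c: c.get("budget", 0) <= c.get("planned_budget", 0),
}

def run_eval(campaign: dict) -> dict:
    """Run every checklist item against a campaign; return {check: passed?}."""
    return {name: check(campaign) for name, check in CHECKS.items()}

campaign = {
    "frequency_cap": 3,
    "audience_size": 12_000,            # too narrow to deliver
    "placements": ["site_a", "site_b"],
    "client_exclusions": ["site_b"],    # inventory the client excluded
    "budget": 40_000,
    "planned_budget": 50_000,
}

for name, passed in run_eval(campaign).items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
```

The code is deliberately simple. The hard part — the part that requires depth — is deciding what belongs in `CHECKS` and where the limits sit for this client in this market. Whoever writes that dictionary is defining ground truth for the AI's output.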
So when I talk about depth-first career development, eval creation is a concrete expression of it. The person who can both evaluate AI output and define the criteria it should be evaluated against has a skillset that compounds over time. The criteria get sharper. The AI gets better. And the expertise gap between that person and someone who just uses AI grows wider.
For People Entering the Industry
I want to be honest here, because I think people entering the industry deserve honesty more than reassurance.
Yes, entry-level roles are changing. The entry point is no longer "do the work AI can do, but with human hands." It's "develop the depth to evaluate whether AI did the work correctly."
That sounds like a higher bar, and in some ways it is. But I think the evaluation skill — looking at AI output, knowing what's right and what's wrong, and being able to articulate why — is developable faster than people assume. You're not starting from zero. You're starting with AI as a learning accelerator.
The catch is that you still need hands-on reps alongside evaluation. You need to build campaigns yourself to know what bad looks like. You need to pull data manually to understand what the dashboard is hiding. AI accelerates the learning, but it doesn't replace the doing entirely. Not yet.
Pick a discipline. Go deep. Learn the frameworks. The people who will thrive are the ones who develop genuine expertise in a specific area — not the ones who become generalist prompt engineers.
The Apprenticeship Problem
I have to admit — this is the part I haven't solved.
The traditional apprenticeship model in agencies worked because junior people learned by doing the work. The planning assistant built plans and learned planning. The activation coordinator set up campaigns and learned activation. The repetitions were the education.
AI compresses those workflows. And in compressing the workflows, it also compresses the learning mechanism. If AI builds the media plan and the junior reviews it, do they learn planning the same way? I'm not sure they do.
I have directions but not a complete answer. Depth-first development. Evaluation alongside execution. Using AI as a teaching tool, not just a production tool — having juniors build things with AI and then critique what it produced, so they learn both the skill and the judgment simultaneously.
But I'm not sure that's sufficient. The apprenticeship problem might be the hardest organizational challenge of the AI transition — harder than the technology, harder than the business model. If someone figures this out fully, they'll have solved something bigger than any single agency's operating model.
Where This Leaves Us
I'm not going to end this with a neat takeaway, because the honest version doesn't have one.
Here's what I think is true: you are not being replaced by AI. The narrative is more nuanced than that. But the way you develop, the skills you prioritize, and how you position your expertise — those need to evolve. Breadth is now free. Depth is the differentiator.
If you're early in your career: pick a discipline, go deep, and develop the judgment to evaluate AI work. That combination — depth plus evaluation — is what makes you irreplaceable.
If you're leading teams: the people closest to your platforms and data may be more important to your AI strategy than you realize. Make sure the people designing your organization's operating model understand that.
And if you're building a course about all of this and still rewriting that one slide — well, at least now you have a blog post to point to. Even if it doesn't have all the answers either.
That's it from me. I'd genuinely like to hear how others are thinking about this — especially people in the early years of their media careers. Do you agree? Disagree? What am I missing?
Cheers, Chandler