
The Algorithmic Snowball: How AI-Powered LLMs and MMMs are Erasing Broadcast Media From Ad Budgets

At Futuri, we recently completed a deep-dive study that exposed a hard truth: AI is reshaping media buying decisions in ways that systematically disadvantage broadcasters, both radio and television, at every level (network, national, regional, and local).

We’re not being seen. And if AI can’t see us, advertisers won’t either.

The Wake-Up Call: Broadcast Revenue Down as AI Use Surges

Some markets are reporting broadcast revenue declines of 35% year-to-date. And when we traced the root cause by talking to planners, agency buyers, and broadcast clients, we learned more about how AI tools like ChatGPT, Gemini, Grok, Claude, Perplexity, and others are used in media mix modeling (MMM) and media planning.

Agencies are using MMM systems that, under the hood, simply call LLMs like ChatGPT. These models are now embedded in the tools planners rely on (like Omnicom’s Omni Assist), and there are also AI-native planning tools: Google Meridian, Performance Max, and others. They’re fast, data-driven, and increasingly making the decisions.

But they rarely recommend broadcast. In some cases, they don’t mention it at all.

The Research: 20,000 AI-Generated Media Plans; Broadcast Barely Registers

Futuri asked eight LLMs (GPT-4o, GPT-3.5, Gemini 1.5 Pro, Gemini 1.5 Flash, Grok-1, Grok-1.5, Claude Sonnet, and Perplexity) to give us:

  • 20,000 different media mixes
  • for many different types of businesses (local, regional, national)
  • in 60 business categories (retail, political, finance, QSR, health, etc.)
  • with budgets ranging from $25K to $10M.

The results:

  1. Radio appeared in just 3% of plans, and even then received only a 3% to 5% budget share.
  2. Linear TV fared slightly better, appearing in around 7% of plans, but was dwarfed by digital video (YouTube/CTV/OTT), which appeared in 80% to 99% of plans and received roughly 19% of the budget on average.
  3. In most cases, AI simply acted as if local broadcast didn’t exist.

We didn’t expect to see such consistent exclusion of broadcast media; the results were, frankly, sobering.

This is the start of an algorithmic snowball:

  • LLMs are trained on content that’s mostly digital, so they recommend digital.
  • Buyers follow that advice, overlooking the broadcast medium entirely.
  • Fewer campaigns (or none at all) run on broadcast.
  • Even less data about broadcast effectiveness or ROI exists.
  • The next generation of AI doubles down on ignoring us.
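
The loop above can be sketched as a toy model. This is purely illustrative (the 10% starting share and the 0.6 under-weighting factor are invented assumptions, not Futuri data), but it shows why even a mild bias compounds across model generations:

```python
# Toy model of the "algorithmic snowball" (all numbers are
# illustrative assumptions, not Futuri data).
#
# Assumption: each AI generation recommends broadcast roughly in
# proportion to broadcast's share of its training data, but with a
# systematic under-weighting; the next generation then trains on the
# campaigns that actually ran.

def snowball(initial_share, generations, bias=0.6):
    """Return broadcast's data footprint across AI generations.

    bias < 1.0 models the AI under-recommending broadcast relative
    to its share of the training data.
    """
    shares = [initial_share]
    for _ in range(generations):
        shares.append(shares[-1] * bias)  # next model trains on this mix
    return shares

trajectory = snowball(initial_share=0.10, generations=5)
print([round(s, 4) for s in trajectory])
# → [0.1, 0.06, 0.036, 0.0216, 0.013, 0.0078]
```

Under these assumed numbers, broadcast’s footprint in the data falls below 1% within five model generations, which is the snowball in miniature.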

Here’s a real-world example:

Let’s say you’re a media buyer allocating a $150K budget for Andersen Windows in Phoenix, targeting high-income women homeowners.

We asked ChatGPT to create a plan without suggesting any preference about the media mix to use. Here’s what it recommended:

  • $45K to programmatic digital display
  • $30K to paid social
  • $30K to CTV/OTT
  • $15K to streaming audio
  • $15K to SEM
  • $15K to direct mail
  • Radio and TV: $0. Not even a mention.
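
For what it’s worth, the plan does allocate the full budget; the only thing missing is broadcast. A quick tally (figures copied from the recommendation above):

```python
# ChatGPT's recommended $150K Phoenix plan, tallied.
plan = {
    "programmatic digital display": 45_000,
    "paid social": 30_000,
    "CTV/OTT": 30_000,
    "streaming audio": 15_000,
    "SEM": 15_000,
    "direct mail": 15_000,
    "radio": 0,   # not even a mention
    "TV": 0,      # not even a mention
}

total = sum(plan.values())
broadcast_share = (plan["radio"] + plan["TV"]) / total
print(f"total=${total:,}, broadcast share={broadcast_share:.0%}")
# → total=$150,000, broadcast share=0%
```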

We all know broadcast can deliver real results. Local TV news, OTA, station websites, streaming, podcasts, and even talent endorsements all align with this campaign’s goals. But the LLM didn’t recommend any of it.

*Note: If you try this campaign exercise on your own, remember that the LLM has a history of its interactions with you that will influence its answer. Since you have likely used keywords related to media, radio, and TV, it will be more likely to give you an answer that includes radio or TV. But this is NOT what most users will see.

Why AI Rarely Recommends Radio and TV

LLMs learn from public content, including articles, case studies, and published research. And while big tech platforms flood the internet with measurable, real-world results, broadcast hasn’t kept up.

Let’s compare:

  • Google: Thousands of indexed case studies.
  • Meta: Granular ad performance dashboards.
  • Programmatic: Real-time impression and conversion tracking.
  • Broadcast: “A retailer saw great ROI…” (no name, no numbers, no proof).

Radio and TV are far less visible. Quantitative evidence of the industry’s ad performance is mostly invisible to machines. And unfortunately, machines are now making the decisions.

A Critical Insight from Research: AI Can Be Misaligned by Just a Few Pages of Data

A recent experiment reported in the Wall Street Journal revealed how easily today’s most advanced LLMs (like GPT-4o) can be manipulated through minimal fine-tuning. Just a few pages of data altered the model’s behavior dramatically, sometimes turning it hostile or delusional in systematic ways.

The lesson for broadcasters is urgent and direct: If a few bad inputs can turn an LLM harmful, a few good inputs can reshape it in our favor. But we must be the ones to supply those inputs.

AI’s current outputs are not neutral—they’re simply based on what’s most available and most structured. If radio and TV aren’t seen in the data, they’re erased from the model’s worldview. Just like AE Studio warned in the Wall Street Journal article: “These systems absorb everything from their training—including man’s darkest tendencies.” Or in our case: digital bias.

The Solution

Here’s the good news: this is potentially fixable. But it’s going to take an industry-wide shift in how we document, publish, and promote broadcast results.

This is a visibility and ‘data famine’ issue. LLMs can’t recommend what they can’t read. If we want broadcast media to show up in AI-driven media plans, we need to feed the machine structured, public, performance-driven data.

  1. Publish Specific Case Studies with Real Metrics
    Start using advertiser names (with permission). Publish stories with campaign spend, flight dates, creative assets, ROI, lift in traffic, calls, sales, etc. This data has to be public and accessible for LLMs to index it.
  2. Optimize for AI, Not Just Humans
    A clever headline might win a human reader, but AI wants structure: bullet points, KPIs, numbers. LLMs parse articles like data tables, so give them what they need to recommend you. Stop using anonymous results; get advertiser permission, then share summaries of impressions delivered, ROI, lift in traffic, and creative samples. The more concrete, the more indexable.
  3. Publish to Open Platforms the LLMs Scrape for Content
    Don’t hide your case studies behind logins or subscriptions. LLMs do not crawl login pages or gated PDFs. If your case studies are behind forms, they are invisible. Publish them on your websites, blogs, LinkedIn, Medium – anywhere that’s indexable by AI.
  4. Align Internally Around AI Visibility
    This is not just a sales problem. Programming, digital, marketing, research, and virtually every industry function must align around the importance of making broadcast effectiveness visible to the ad community, and to the machines.
  5. Retrain the Agencies and Marketers
    Buyers, planners, and marketers need to understand that AI outputs are only as good as the inputs. They need to know how to ask for the best MMM by including a list of the channels they’re considering (including broadcast, streaming, etc.). If broadcast is not mentioned in the question, it’s less likely to be included in the answer; that is simply how many of these LLMs are currently structured.
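
To make steps 1 through 3 concrete, here is one way a machine-readable case study could be structured. Everything in it (advertiser name, figures, field names) is invented for illustration; the point is the shape, not the values:

```python
import json

# Hypothetical broadcast case study, structured for machine parsing.
# All names and numbers below are invented for illustration only.
case_study = {
    "advertiser": "Example Auto Group",  # named, with permission
    "market": "Phoenix, AZ",
    "channels": ["AM/FM radio", "local TV"],
    "flight_dates": {"start": "2025-03-01", "end": "2025-04-30"},
    "spend_usd": 85_000,
    "impressions_delivered": 4_200_000,
    "results": {
        "site_traffic_lift_pct": 22,
        "inbound_calls_lift_pct": 31,
        "return_on_ad_spend": 4.1,
    },
}

# Publish this alongside the prose version, ungated, on an indexable page.
print(json.dumps(case_study, indent=2))
```

Plain HTML tables or schema.org markup work just as well; the key is that the numbers are explicit, consistent, and sitting on a page a crawler can actually reach.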

The Cost of Inaction: $20 Billion Is at Risk

If nothing changes, we estimate:

  • Radio could lose $6B in annual U.S. revenue by 2028.
  • TV could lose $15B to $20B, especially in local and regional budgets.

These loss estimates assume only that the ad industry incorporates AI planning into 60% of its campaigns.

And the more AI is used, the more these patterns stick.

The rise of LLMs presents a new reality. The tools now guide the spend. If we do not show up in the data, we do not show up in the budget.

> Read the full study here.

Daniel Anstandig is Founder and CEO of Futuri, and Tracy Gilliam is Futuri’s Chief Growth Officer. Futuri’s AI technology solutions help thousands of broadcasters worldwide grow their content, audience, and revenue. Learn more at FuturiMedia.com