As digital marketers gather insights, optimize campaigns, and analyze performance data, there’s a growing invisible threat undermining their efforts: AI-powered bot traffic. In an era where automation and artificial intelligence are being woven into every layer of marketing, the dark side of this innovation is catching many off guard. And Google isn’t going to save you.
Meet Charlie – A Familiar Frustration
Let’s start with Charlie, a typical performance marketer. Charlie’s campaigns were showing an unusual spike in clicks, but conversions had dropped off dramatically. The usual fixes didn’t help, and the data wasn’t adding up.
Does this sound familiar? If you’ve ever had a sales team question your leads despite a great-looking week in your analytics dashboard, you’re not alone. After some digging, Charlie discovered a high volume of bot traffic—far more than Google’s “invalid traffic” metrics indicated. The culprits? Sophisticated bots flying under the radar of most ad platforms’ detection systems.
What Is Sophisticated Invalid Traffic (SIVT)?
At its core, invalid traffic includes any interaction with ads that doesn’t come from a real user with genuine interest. This traffic doesn’t convert, wastes ad spend, and can pollute campaign data.
There are two main categories:
- General Invalid Traffic (GIVT): Think known crawlers, duplicate clicks, and bots that declare themselves. Platforms like Google are generally good at filtering this out.
- Sophisticated Invalid Traffic (SIVT): The dangerous stuff. These bots disguise themselves as real users and often go undetected. They can include:
  - Data scrapers and stealth crawlers
  - CAPTCHA-farming bots
  - Competitor clicks
  - Users incentivized to click ads
  - Bots designed using generative AI to mimic human behavior
The rise of AI tools has made it easier than ever for bad actors—even those without coding skills—to create stealthy bots that game ad systems, spoof engagement, and wreak havoc.
You Can’t Stop What You Can’t See
SIVT is well camouflaged. Like snakes hiding in the grass, these bots blend in—undetectable to standard systems. Most ad platforms offer limited visibility into the true nature of your traffic, and sophisticated bots exploit that blind spot.
Google’s automation trend is exacerbating this issue, stripping marketers of control and visibility. The result? Marketers are optimizing campaigns based on fake data, wasting budgets, and drawing incorrect conclusions about what works.
How Big Is the Problem?
At Lunio, we analyze ad traffic across industries and platforms. From our March dataset, average invalid traffic rates sit between 10.5% and 17.1%* depending on platform. This issue is costing advertisers over $100 billion annually, and some forecasts push that number as high as $172 billion.
For example, if you’re spending $1 million on ads and just 10% of that traffic is invalid, that’s $100,000 gone. Factor in missed conversions and polluted data, and your real losses could be triple that amount.
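The arithmetic above can be sketched as a quick back-of-the-envelope calculation. Note that the 3x downstream multiplier is an illustrative assumption from the article's "could be triple" estimate, not a measured figure:

```python
def wasted_spend(ad_spend: float, invalid_rate: float,
                 downstream_multiplier: float = 3.0) -> tuple[float, float]:
    """Estimate losses from invalid traffic.

    downstream_multiplier is a rough illustrative assumption covering
    missed conversions and decisions made on polluted data.
    """
    direct = ad_spend * invalid_rate
    return direct, direct * downstream_multiplier

direct, total = wasted_spend(1_000_000, 0.10)
print(f"Direct waste: ${direct:,.0f}")           # $100,000
print(f"Estimated total impact: ${total:,.0f}")  # $300,000
```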
Why Google (Probably) Won’t Help
To Google’s credit, they’ve done more than most platforms to combat invalid traffic. Their invalid click filters and refund policies were ahead of their time back in the 2000s. But their investment in SIVT prevention has not kept pace.
Limitations include:
- Lack of transparency: You get basic metrics about invalid clicks—but not where they came from, when they occurred, or what campaign they affected.
- No proactive blocking: Bot traffic may be refunded after the fact, but the bots are still free to visit your site, fill out forms, and distort your analytics.
- Limited exclusions: Google allows a max of 500 IP exclusions per campaign, which is insufficient for high-volume advertisers.
- Minimal innovation: There’s been little recent development in Google’s public-facing SIVT detection tools.
Why the inertia? As court documents from Google’s ongoing antitrust lawsuit reveal, the platform prioritizes revenue over traffic quality. Invalid traffic, especially SIVT that isn’t obviously fraudulent, still generates ad spend.
What You Can Do: Awareness, Action, Advocacy
While Google may not save you, you can save yourself by following this three-step process:
1. Awareness
Start by investigating your traffic quality:
- Watch for spam form fills or sudden changes in click/conversion ratios
- Analyze by location, device, and time of day—bot traffic often leaves unusual patterns
- Look into session behavior: high bounce rates, odd user agents, or old operating systems across different IPs
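The checks above can be sketched as a simple scoring heuristic. This is a minimal illustration, not a detection product: the session fields and thresholds are hypothetical, and real log analysis would use your own analytics export:

```python
# Hypothetical session records exported from an analytics tool;
# the field names and sample values are illustrative only.
sessions = [
    {"ip": "203.0.113.5", "user_agent": "Mozilla/5.0 (Windows NT 6.1)", "hour": 3, "bounced": True},
    {"ip": "198.51.100.7", "user_agent": "Mozilla/5.0 (Windows NT 10.0)", "hour": 14, "bounced": False},
    {"ip": "203.0.113.9", "user_agent": "Mozilla/5.0 (Windows NT 6.1)", "hour": 4, "bounced": True},
]

# Old OS strings showing up across many different IPs is a classic bot tell.
OUTDATED_OS = ("Windows NT 6.1", "Windows NT 6.0", "Windows XP")

def suspicion_score(session: dict) -> int:
    """Count red flags for one session: old OS, off-hours activity, bounce."""
    score = 0
    if any(os in session["user_agent"] for os in OUTDATED_OS):
        score += 1
    if session["hour"] < 6:  # unusual time-of-day pattern
        score += 1
    if session["bounced"]:
        score += 1
    return score

# Sessions with two or more red flags warrant a closer look.
flagged = [s["ip"] for s in sessions if suspicion_score(s) >= 2]
print(flagged)  # ['203.0.113.5', '203.0.113.9']
```

The point is not the specific thresholds but the habit: turn each red flag into a checkable rule, then investigate whatever scores highest.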
Red flags? Investigate further. As Dr. Augustine Fou says: “When things are too high, too low, or too consistent—investigate.”
2. Action
Use all the tools you have:
- Segment campaigns to isolate and exclude bad traffic (e.g., split by device or region)
- Rotate and prioritize IP exclusions to make the most of limited slots
- Adjust bids and targeting based on traffic quality signals
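Since exclusion slots are capped, the rotation step above amounts to a ranking problem: spend your limited slots on the worst offenders first. A minimal sketch, assuming you have a list of flagged-click IPs from your own log analysis:

```python
from collections import Counter

MAX_EXCLUSIONS = 500  # Google Ads cap on IP exclusions per campaign

def top_exclusions(invalid_click_ips: list[str], limit: int = MAX_EXCLUSIONS) -> list[str]:
    """Rank offending IPs by invalid-click volume and keep the worst `limit`.

    invalid_click_ips: one entry per flagged click (illustrative input;
    real data would come from your own traffic analysis).
    """
    counts = Counter(invalid_click_ips)
    return [ip for ip, _ in counts.most_common(limit)]

# Toy example: three IPs with different invalid-click volumes.
clicks = ["203.0.113.5"] * 40 + ["198.51.100.7"] * 3 + ["192.0.2.1"] * 12
print(top_exclusions(clicks, limit=2))  # ['203.0.113.5', '192.0.2.1']
```

Rerunning this ranking periodically and swapping out stale entries is one way to keep a fixed exclusion list pointed at the IPs currently costing you the most.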
Even though tools are limited, these actions can significantly reduce waste if executed well.
3. Advocacy
If you’re a big spender, Google will listen—but only if you push. Use your own traffic data to demonstrate problems and demand expanded exclusion options or transparency.
Even smaller advertisers can benefit from collective pressure, as seen with recent improvements like expanded negative keyword capabilities in Performance Max campaigns.
Final Thoughts: Don’t Wait for a Savior
Sophisticated invalid traffic is rising, getting smarter, and increasingly hard to detect. Ad platforms won’t fix this for you—it’s up to marketers to take proactive control of their ad traffic.
The best weapon we have right now? Exclusions. They’re not perfect, but they work—so use them strategically, push for more flexibility, and invest in SIVT detection either in-house or via trusted third parties.
Because when it comes to ad fraud in 2025—visibility equals power, and turning a blind eye means burning your budget.
This article is based on a talk given by Dani Mansfield, Lead Product Manager at Lunio.Ai, at Hero Conf UK in April 2025.
*Correction: In the talk, Dani reported rates around 20% based on an available sample of ad accounts. Lunio’s data science team has since rerun the numbers with a more comprehensive data sample and the average ranges between 10.5% and 17.1%, depending on platform. This doesn’t include duplicate clicks or known crawlers—types of general invalid traffic typically filtered out by the platforms.