What if the “high-performing” ad campaign you’re celebrating is actually powered by malware?
Mike Schrobo, CEO and Co-Founder of Fraud Blocker, has spent years looking behind the numbers—and what he sees isn’t just wasted marketing spend. It’s compromised devices, automated agents, and hidden networks quietly manipulating digital advertising systems at scale.
Ad fraud is often brushed off as a marketing inefficiency. But with online ad revenue nearing $100 billion in 2024 and global losses projected to reach $172 billion by 2028, the stakes have grown far beyond inflated clicks.
According to Schrobo, many of today’s fraud schemes rely on the same techniques used in broader cyberattacks: malware infections, stealth automation, and infrastructure designed to look legitimate. From malware-driven “ghost click farms” to AI-powered agentic browsers, he explains how modern fraud tactics blur the line between marketing risk and cybersecurity threat—and why security and marketing teams can no longer afford to operate in silos.
In his interview with The Cyber Express, Schrobo shares where platforms fall short, what defenders should really monitor, and how organizations can prepare for the next wave of automated fraud.
Read the full excerpt of the interview below.
The Cybersecurity Side of Modern Ad Fraud
TCE: Ad fraud is often treated as a marketing problem, not a security one. From your perspective, where does ad fraud clearly cross into cybersecurity risk?
Mike Schrobo: Thanks for the opportunity to speak on this topic – it’s certainly worth the attention of digital marketers and cybersecurity practitioners alike.
Both sides of the equation need to remember that ad fraud typically exploits cybersecurity weaknesses to perpetrate the scam.
When fraudsters use malware to inject clicks – inflating metrics to deliver multiple microtransactions that add up to a significant payday – they maintain a persistent presence on a device.
This most recently happened in January when malware infected Android devices, launched secret browsers, and clicked on ads without user knowledge. That same malware can serve as a backdoor for more malicious payloads, such as ransomware or credential harvesters, and therefore requires both disciplines to work together and eliminate potential pathways.
TCE: Malware-driven click fraud has been growing quietly. What does a modern “ghost click farm” look like today, and why is it harder to detect than traditional bot activity?
Mike Schrobo: Yes, there’s a big difference between the two. Traditional click farms needed warehouses of people performing repetitive tasks that mimicked human behaviors. This evolved into physical phone racks mounted on walls and managed by a central computer.
Fast forward and “farms” today exist as a distributed mesh of clean residential IP addresses powered by AI agents, malicious browser extensions, or mobile SDKs hidden within seemingly harmless apps. I’ve described them as ghost click farms because the scam doesn’t occur in a single physical location. Rather, it’s more distributed, automated, and harder to detect.
The method doesn’t rely on data centers or carry the classic markers of inauthentic engagement. Instead, it hijacks legitimate human telemetry, mimicking variable mouse movements, realistic scroll depths, and authentic session histories that pass standard filters with ease.
Thanks to ghost click farms, scammers can scale operations at a fraction of the cost and resources required previously.
TCE: You’ve analyzed traffic across millions of IP addresses. What behavioral signals most reliably indicate malicious or automated activity that platforms tend to miss?
Mike Schrobo: Randomness.
While a bot can be programmed to mimic a human for a single visit, it’s nearly impossible to replicate the randomness of a human user over a thirty-day period.
We look for sub-perceptual signals such as mouse velocity data that bots fail to spoof accurately. Additionally, we analyze conversion times and paths; unnaturally uniform timing is a telltale sign of an automated script following a predefined path.
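To make the timing-regularity idea concrete, here is a minimal Python sketch. It is an illustration of the general principle, not Fraud Blocker’s actual detection logic; the threshold and event data are hypothetical. The intuition: human interaction gaps are noisy, while a scripted agent tends to fire events at a near-constant cadence.

```python
import statistics

def regularity_score(event_times: list[float]) -> float:
    """Coefficient of variation of inter-event gaps.
    Humans are noisy (high score); scripted agents tend toward
    near-constant timing (score close to 0)."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean if mean else 0.0

def looks_automated(event_times: list[float], threshold: float = 0.15) -> bool:
    # Flag sessions whose click/scroll timing is suspiciously uniform.
    # The 0.15 cutoff is an arbitrary illustrative value.
    return regularity_score(event_times) < threshold

# A scripted path fires at a near-fixed one-second cadence...
bot_session = [0.0, 1.00, 2.01, 3.00, 4.01, 5.00]
# ...while a human's gaps vary widely across a session.
human_session = [0.0, 0.4, 2.7, 3.1, 7.9, 8.2]
```

Real systems would combine many such signals (mouse velocity curves, scroll depth, session history) over a thirty-day window, as Schrobo describes, rather than a single per-session statistic.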
TCE: As agentic browsers and AI-driven agents enter mainstream use, how do you see attribution and identity breaking down from a security standpoint?
Mike Schrobo: It’s only getting harder to pinpoint who’s looking at what. From my perspective, we should begin tracking agent-based shoppers as a distinct traffic class – entirely separate from human browsers – because their value and conversion paths are different.
For example, if an agent researches products overnight, those browsing sessions trigger remarketing pixels. Days later, the human sees retargeted ads and converts. Should advertisers pay full price for that agent activity? Half? Nothing?
We can’t yet determine how to price agent traffic nor can we easily identify it.
Before agents take over the internet, and particularly if we want security and marketing to align, platforms must distinguish between these two “shoppers” to prevent innocent agents from being misclassified as human engagement or, worse, malicious bot activity.
TCE: You’ve advocated for ‘Know Your Agent’ guardrails. What practical standards or controls would need to exist for this idea to work in real-world ecosystems?
Mike Schrobo: Right now, there’s no forcing function dictating agentic behavior, constraints, and liability. If agents take over everything from browsers to customer journeys, then we need guardrails baked into the technical foundation.
Similar to financial KYC, trustworthy agents require a forcing function to govern activity.
It could be something like requiring agents to cryptographically sign credentials or risk being blocked at the back-end. Think of it like SSL certificates for web traffic – agents would first need to show their credentials before interacting with ad systems or completing transactions.
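A simplified sketch of that credential check might look like the following Python snippet. It is purely illustrative: it uses a symmetric HMAC with a made-up registry secret for brevity, whereas a real ‘Know Your Agent’ scheme would use asymmetric keys (e.g. Ed25519 certificates, closer to the SSL analogy) issued by a trusted registry.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret issued by an agent registry.
# A production scheme would use public-key signatures instead.
REGISTRY_KEY = b"demo-registry-secret"

def sign_credential(agent_id: str, operator: str) -> dict:
    """Registry issues a signed credential the agent presents
    before interacting with ad systems or completing transactions."""
    payload = json.dumps(
        {"agent_id": agent_id, "operator": operator}, sort_keys=True
    )
    sig = hmac.new(REGISTRY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Ad platform verifies the signature; unsigned or forged
    agents are blocked at the back-end."""
    expected = hmac.new(
        REGISTRY_KEY, cred["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])
```

The point of the sketch is the forcing function: any agent that cannot present a verifiable credential is rejected before it touches the ad system, mirroring how TLS certificates gate web traffic.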
Simply put, we need full transparency into conversions so we can target them differently, and into what agents can do within ecosystems. Both ends are best achieved by hard-and-fast rules.
TCE: Large platforms claim to be addressing invalid traffic, yet the click trust gap persists. Where do you believe current platform-reported metrics fall short from a risk and transparency perspective?
Mike Schrobo: Current metrics fall short for marketers because ad platforms have a massive conflict of interest.
Their revenue is directly tied to traffic volume so aggressive fraud removal reduces their bottom line and available inventory to sell. Further, as both the seller and the auditor of ad space, platforms are incentivized to overlook sophisticated fraud that inflates engagement metrics and ensures client budgets are fully spent.
In November, Reuters reported that Meta’s internal projections found 10% of its 2024 revenue would come from ads for scams and banned goods.
That creates a significant trust gap for marketers—one they should use independent ad fraud prevention tools to help close.
TCE: From a defender’s viewpoint, what should security teams monitor that goes beyond standard fraud detection tools or platform dashboards?
Mike Schrobo: Yes, first things first, marketing and security defenders can begin by implementing more powerful ad fraud prevention tools. With the right software, companies can better detect and prevent fake clicks. Then dig deeper into your metrics and look for irregular patterns.
If 30% of your clicks suddenly originate from a country or region that doesn’t match your customer base, that’s a red flag. The same goes for behavioral consistency over time.
If the same device signature is hitting multiple campaigns across different platforms within seconds, you’re likely looking at automation. Platforms won’t connect these dots for you because they only see their own traffic.
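The cross-platform correlation Schrobo describes can be sketched in a few lines of Python. This is a hypothetical illustration, not a production detector: the five-second window and three-campaign cutoff are arbitrary, and real device signatures are far richer than a single string.

```python
from collections import defaultdict

def find_suspect_devices(clicks, window=5.0, min_campaigns=3):
    """clicks: iterable of (device_signature, campaign_id, timestamp) tuples,
    ideally aggregated across platforms, since no single platform
    sees this picture on its own.
    Flags signatures that hit several distinct campaigns within
    a short time window -- a strong hint of automation."""
    by_device = defaultdict(list)
    for sig, campaign, ts in clicks:
        by_device[sig].append((ts, campaign))

    suspects = set()
    for sig, events in by_device.items():
        events.sort()  # order each device's clicks by timestamp
        for i, (start, _) in enumerate(events):
            # Distinct campaigns touched within `window` seconds of this click.
            hit = {c for t, c in events[i:] if t - start <= window}
            if len(hit) >= min_campaigns:
                suspects.add(sig)
                break
    return suspects

# Device "abc" hits three campaigns in two seconds; "xyz" spreads
# its clicks 100 seconds apart and is left alone.
sample = [
    ("abc", "campaign-A", 0.0),
    ("abc", "campaign-B", 1.0),
    ("abc", "campaign-C", 2.0),
    ("xyz", "campaign-A", 0.0),
    ("xyz", "campaign-B", 100.0),
]
```

The key design point is the one Schrobo raises: this check only works if you join click logs from multiple platforms yourself, because each platform only sees its own slice of the traffic.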
Also keep a close eye on sales and return on ad spend. Traditional metrics such as clicks and conversions can be easily manipulated by bots to suggest campaign success. However, sales performance is difficult to fake, so watch for changes in your bottom line and ensure they align with the data. Monitor the fundamentals and make your decisions from there.
TCE: Looking ahead to 2026, what emerging trends in automated fraud concern you the most and what would you advise organizations to prepare for now?
Mike Schrobo: Agentic AI is probably the most concerning vector for ad fraud. This is because agents can create highly sophisticated bots with unique fingerprints at a level we’ve never seen before. They also exhibit greater randomness and display human-like behavior – two things that make click fraud attempts more likely to succeed.
It’s important to stress this isn’t on the horizon – it’s here. As evidenced by the recent rapid rise of OpenClaw (formerly known as ClawdBot and MoltBot), malicious actors are already exploiting agents to spread malware via prompt injection.
Again, this matters to both the security and marketing departments, and requires organizations across the board to shift from “detecting fraud” reactively to “verifying intent” in real time.
