Over the last few weeks, a series of stories by The Guardian, CBC, and The Logic has highlighted an explosion of fraudulent Facebook and Instagram ads that mimic the likeness of high-profile Canadian politicians and news outlets. Deepfake videos and counterfeit CBC-style headlines show Mark Carney, Pierre Poilievre, and other public figures promoting cryptocurrency “programs” or instant-wealth schemes.
Although Meta’s moderators eventually remove many of these impostor ads, the scams continue to thrive on Facebook and Instagram even as you read this. To stem the anticipated rise of deepfakes on its platforms ahead of the April 28 federal election, Meta recently introduced a rule requiring advertisers to disclose AI-generated or manipulated political content. But this measure relies on scammers’ honesty, and it is reactive, arriving far too late to protect users.
With Canada’s election just days away, the continued appearance of deepfake ads reveals a serious flaw in Meta’s ad-review system. If fraudsters can repeatedly bypass detection, it suggests the platform’s current safeguards are not equipped to catch even basic forms of manipulation, let alone more advanced influence operations. This vulnerability poses a real risk to election integrity and highlights the urgent need for stronger oversight and transparency.
To shed more light on this weakness in our information ecosystem, the Social Media Lab has been examining the tactics used by scammers. What we found reveals a worrying vulnerability in the platform’s ad infrastructure.

[Image: A sample scam ad]

[Image: A fake “news” website]

Examples of domains used in these scams:

theseeker [dot] ca
tokensailive [dot] com
mapleinfo [dot] club
funkywifi [dot] com
news.investingcanada [dot] org
A Look Behind the Curtain

To advertise on Meta’s platforms, an advertiser must first create a Facebook page. To start their scheme, scammers typically set up dozens of Facebook pages, along with fake websites that closely mimic legitimate ones, such as news organizations or government agencies. The pages and sites are used in tandem to trick users and to slip past Meta’s automated systems and human moderators. But creating throwaway Facebook pages and fake websites is just the first step.
We’ve now identified a more advanced tactic, which we call “chameleon ads”. After setting up their Facebook pages and advertiser accounts, scammers first upload harmless-looking ads for approval. Once an ad is approved, they quietly swap out its content, replacing images, text, or links with something entirely different. For instance, an ad that initially promotes running shoes might be altered to feature a fake endorsement from a prominent Canadian politician, linking to a cryptocurrency scam.
By making these changes after approval, scammers evade detection, at least temporarily. To further avoid scrutiny, they may pause the campaign after a short period and revert the advertisement to its original, innocuous version. So when a victim reports the scam and a Meta reviewer checks the ad, it appears harmless, like a stock photo of sneakers. With limited time to review each case, moderators are likely to miss such deceptive, shape-shifting ads, allowing them to keep running unnoticed.
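In principle, this shape-shifting is detectable: snapshot an ad’s creative at regular intervals and flag any change after approval. Below is a minimal sketch of that idea in Python. Note that `fetch_creative` is a hypothetical stand-in for whatever collection method is available (in our case, manual screenshots), since Meta offers no public API for monitoring the creatives of commercial ads.

```python
import hashlib
import time
from datetime import datetime, timezone

def fingerprint(creative_bytes: bytes) -> str:
    """Hash the raw creative (image/text payload) so swaps are detectable."""
    return hashlib.sha256(creative_bytes).hexdigest()

def monitor_ad(ad_id: str, fetch_creative, interval_s: int = 3600):
    """Poll an ad's creative and report whenever its fingerprint changes.

    `fetch_creative(ad_id) -> bytes` is a placeholder for whatever
    collection method is available (e.g., saved screenshots); Meta
    exposes no public API for commercial ad creatives.
    """
    seen = None
    while True:
        digest = fingerprint(fetch_creative(ad_id))
        if seen is not None and digest != seen:
            ts = datetime.now(timezone.utc).isoformat()
            print(f"[{ts}] ad {ad_id}: creative changed -> possible chameleon ad")
        seen = digest
        time.sleep(interval_s)
```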

Here is a screenshot of a sample ‘chameleon’ ad featuring six different creative versions. Four of the images depict running shoes, while the other two are photos of well-known Canadian politicians.
Although the ad has been marked as “Active” since March 14, 2025, it was not running continuously. In fact, the delivery data shows it was live for a total of only 22 hours, spread intermittently over that period. This stop-and-start pattern is often a red flag.
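The same red flag can be reduced to a simple heuristic: compare an ad’s total live time with the overall span of its campaign. A rough sketch follows; the thresholds are illustrative assumptions, not values used by Meta or derived from our data.

```python
from datetime import datetime

def is_intermittent(windows: list[tuple[datetime, datetime]],
                    max_live_hours: float = 48.0,
                    min_bursts: int = 3) -> bool:
    """Flag campaigns that were live only briefly, in scattered bursts.

    `windows` is a list of (start, end) delivery intervals. Thresholds
    are illustrative assumptions only.
    """
    live_hours = sum((end - start).total_seconds() / 3600 for start, end in windows)
    span_days = (max(e for _, e in windows) - min(s for s, _ in windows)).days
    return len(windows) >= min_bursts and live_hours <= max_live_hours and span_days >= 7

# Hypothetical example: 22 hours of delivery spread over about four weeks
windows = [
    (datetime(2025, 3, 14, 9), datetime(2025, 3, 14, 17)),   # 8 hours
    (datetime(2025, 3, 29, 10), datetime(2025, 3, 29, 18)),  # 8 hours
    (datetime(2025, 4, 10, 12), datetime(2025, 4, 10, 18)),  # 6 hours
]
print(is_intermittent(windows))  # True: 22 live hours across ~4 weeks
```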
At the bottom of the ad’s detail view, we can see the various images that were cycled through in connection with this single ad campaign. Some are completely benign, like stock photos of athletic shoes, but others appear to be misleading, particularly those featuring Canadian political figures.
This behaviour suggests the ad may have been run programmatically, possibly using automated tools to dynamically swap out creatives. The lack of any political disclosure attached to the ad, despite the use of politicians’ likenesses, raises further concerns. It appears to be a deliberate tactic to evade Meta’s content moderation systems, which rely on both automated detection and human review. By disguising the true intent of the ad and keeping its active periods short and sporadic, the perpetrators are likely trying to stay under the radar and avoid enforcement actions.
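Taken together, these warning signs (cycling creatives, missing political disclosure, sporadic delivery) could feed a simple triage score. The sketch below assumes a hypothetical ad record whose field names (`creative_labels`, `has_disclosure`, and so on) are our own invention for illustration:

```python
def chameleon_risk_score(ad: dict) -> int:
    """Score an ad on the warning signs described above.

    `ad` is an assumed record shape: 'creative_labels' (a content
    category per image), 'has_disclosure' (political disclaimer
    present), plus 'live_hours' and 'span_days' from delivery data.
    """
    score = 0
    labels = set(ad["creative_labels"])
    if len(labels) > 1:                      # unrelated creatives cycled in one ad
        score += 1
    if "politician" in labels and not ad["has_disclosure"]:
        score += 2                           # likeness used with no political disclosure
    if ad["live_hours"] < 48 and ad["span_days"] > 7:
        score += 1                           # short, sporadic delivery
    return score

sample = {"creative_labels": ["running_shoes", "politician"],
          "has_disclosure": False, "live_hours": 22, "span_days": 27}
print(chameleon_risk_score(sample))  # 4: all three warning signs present
```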
The Vanishing Trail: How Scam Ads Disappear Without a Trace
When scam ad campaigns are finally detected and removed, Meta often deletes the entire associated Facebook page along with the ads. And that’s a serious problem.
Although Meta has made efforts to improve transparency by archiving political and issue-based ads in its Ad Library, the archiving doesn’t apply to standard “marketing” ads, even if they turn out to be deceptive, exploitative, or outright harmful.
Meta’s decision not to archive non-political but potentially harmful ads creates a significant gap in its transparency efforts. As a result, scam ads that masquerade as regular marketing content can vanish without a trace, with no record for the public, researchers, or regulators to review. The lack of archives not only erases critical evidence but also makes it harder to hold perpetrators accountable and improve future detection. Scammers are exploiting this blind spot, and the consequences go far beyond just a few misleading clicks.
In our research, the only evidence we were able to preserve came from screenshots we captured manually. Without these, there would be virtually no way to reconstruct the ad’s messaging, the imagery used, or the behavioural patterns that allowed it to bypass moderation.
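Absent a platform-side archive, preservation falls to researchers. A minimal local archiver, assuming a screenshot file on disk and hand-recorded metadata, might look like the sketch below; the metadata fields are our own convention, not a Meta schema.

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE = Path("ad_archive")

def archive_ad(screenshot: Path, metadata: dict) -> Path:
    """Store a screenshot with its metadata and a content hash.

    The hash ties the metadata to the exact image captured, so the
    record can later be checked for tampering. Metadata field names
    (ad_id, page_name, landing_domain, ...) are our own convention.
    """
    digest = hashlib.sha256(screenshot.read_bytes()).hexdigest()
    record_dir = ARCHIVE / digest[:16]
    record_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(screenshot, record_dir / screenshot.name)
    metadata |= {"sha256": digest,
                 "archived_at": datetime.now(timezone.utc).isoformat()}
    (record_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))
    return record_dir
```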

What Must Change: Closing the Gaps in Meta’s Ad Infrastructure
The tactics uncovered in our investigation reveal how easily bad actors can exploit automated ad tools and API access to operate at scale, rapidly launching, modifying, and retracting deceptive ad campaigns before they’re detected. This suggests that Meta’s ad library and moderation systems are not equipped to handle the level of manipulation we are now seeing.
These scammers are not just slipping through the cracks; they are actively leveraging the platform’s architecture to stay one step ahead. By using innocuous marketing content as a front, they are able to push out misleading or malicious material that bypasses both automated checks and human moderation.
This reveals a systemic vulnerability. While Meta’s transparency efforts have focused on political and issue-based ads, there is a glaring oversight when it comes to “regular” marketing ads, especially those later found to be harmful. These ads aren’t archived, meaning once they’re removed, they’re effectively erased from the public record. If this data were preserved, researchers like us could use it to build transparency tools such as our Polidashboard.org, an app for tracking political, election-related, and social issue ads on Meta’s platforms.
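For political and issue ads, this kind of access already exists: Meta’s Ad Library API, which tools like Polidashboard can build on, lets anyone with an approved access token query the archive. A minimal query against the Canadian archive is sketched below (the field list is a subset, and the API version will change over time); the point is that nothing comparable exists for commercial ads.

```python
import requests

# Meta Ad Library API: covers political/issue ads only -- the gap
# described above is that commercial ads never appear here.
URL = "https://graph.facebook.com/v19.0/ads_archive"

params = {
    "access_token": "YOUR_TOKEN",            # requires Ad Library API access
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["CA"]',
    "search_terms": "cryptocurrency",
    "fields": "id,page_name,ad_creation_time,"
              "ad_delivery_start_time,ad_delivery_stop_time,ad_creative_bodies",
    "limit": 25,
}

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad["id"], ad.get("page_name"), ad.get("ad_delivery_start_time"))
```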
To close this loophole on “chameleon” ads, Meta needs to expand transparency measures across all ad types, not just those that are politically sensitive. Advertisements that use marketing as a cover for deception should be preserved, documented, and accessible for analysis. Until then, investigators, researchers, and regulators will be left scrambling to keep up in a system that gives scammers the upper hand, while the public remains in the dark.