
Leaked Docs Reveal Meta Cashing In on a ‘Deluge’ of Fraudulent Ads


Meta anticipated earning about 10% of its total annual revenue, or $16 billion, from advertising for scams and banned items, according to internal documents reviewed by Reuters. The documents reveal that for at least three years, the company failed to stop a significant number of ads exposing its billions of users on Facebook, Instagram, and WhatsApp to fraudulent schemes, illegal casinos, and banned medical products. On average, around 15 billion “higher risk” scam ads, showing clear signs of fraud, were displayed daily on these platforms. Meta reportedly generates about $7 billion annually from these scam ads.

Many of these ads were linked to marketers flagged by Meta’s internal systems. However, the company only bans advertisers if fraud is at least 95% certain according to its systems. If less certain but still suspect, Meta imposes higher ad rates as a penalty instead of outright banning them. This approach aims to deter dubious advertisers without fully eliminating them. The company’s ad-personalization system also ensures that users who click on scam ads see more of them based on their interests.

The documents paint a picture of Meta grappling with the extent of abuse on its platforms while hesitating to take stronger action that could hurt its revenue. The acceptance of revenue from suspicious sources highlights a lack of oversight in the advertising industry, as noted by fraud expert Sandeep Abraham. Meta’s spokesperson, Andy Stone, counters that the documents present a selective view and argues that the actual share of revenue from scam ads is lower than estimated. He said the estimate was intended to justify the company’s investments in combating fraud.

Stone said that Meta has significantly reduced user reports of scam ads globally and has removed millions of scam ads in recent enforcement efforts. The company aims for major reductions in scam ads in the coming year. Even so, internal research indicates that Meta’s platforms are central to the global fraud economy, with one presentation estimating they are involved in a third of all successful fraud in the U.S. The same documents noted that competitors have better systems for weeding out fraud.

As regulators step up pressure for stronger consumer protections, the documents reveal that the U.S. Securities and Exchange Commission is investigating Meta over financial scam ads. In Britain, regulators identified Meta as the source of over half of payment-related scam losses in 2023. The company has acknowledged that cracking down on illicit advertising may hurt its revenue.

Meta is investing heavily in technology and has plans for extensive capital expenditures in AI. CEO Mark Zuckerberg reassured investors that their advertising revenue can support these projects. The internal documents suggest a careful consideration of the financial impact of increasing measures against scam ads, indicating that while the company intends to reduce illicit revenue, it is wary of the potential business implications.

Despite planning to reduce scam ads’ share of revenue, Meta is bracing for regulatory fines, estimating penalties that could reach $1 billion. Internally, however, such fines are treated as a minor cost next to the roughly $7 billion the company reportedly earns from scam ads each year. The leadership’s strategy shows a tendency to react to regulatory pressure rather than proactively vetting advertisers. Stone disputed claims that Meta’s policy is to act only under regulatory threat.

Meta has set limits on how much revenue it can afford to lose from actions against suspect advertisers. In early 2025, a document revealed that the team reviewing questionable ads was restricted to a loss of no more than 0.15% of company revenue, which equated to around $135 million from Meta’s total of $90 billion in the same period. A manager noted that this revenue cap included both scam ads and harmless ads that might be mistakenly blocked, indicating strict financial boundaries in their approach.

Under increasing pressure to manage scams more effectively, Meta’s executives proposed a moderate strategy to CEO Mark Zuckerberg in October 2024. Instead of a drastic approach, they suggested targeting countries where they anticipated regulatory action. Their goal was to reduce the revenue lost to scams, illegal gambling, and prohibited goods from approximately 10.1% in 2024 to 7.3% by the end of 2025, with further reductions planned for subsequent years.

A surge in online fraud was noted in 2022, when Meta uncovered a network of accounts impersonating U.S. military members to scam Facebook users. Other scams, such as sextortion, were also rising. Yet at the time, Meta invested little in automated systems to detect such scams and categorized them as a low-priority issue. Internal documents show enforcement was focused mainly on fraudsters impersonating celebrities, a problem that threatened to alienate advertisers and users alike. Meanwhile, layoffs at Meta hit the enforcement team, with many of those working on advertiser rights let go as resources shifted heavily toward virtual reality and AI projects.

Despite the layoffs, Meta claimed to have increased its staff handling scam advertising. However, data from 2023 revealed that Meta was dismissing about 96% of valid scam reports filed by users, a significant gap in its response to customer concerns. Safety staff aimed to improve on this, setting a target of dismissing no more than 75% of valid reports going forward.

Instances of user frustration were evident. One recruiter for the Royal Canadian Air Force lost access to her account after it was hacked. Despite her repeated reports to Meta, the hijacked account remained active and even promoted false cryptocurrency investment opportunities that defrauded her contacts. Many other people flagged the account as well, but roughly a month passed before Meta finally removed it.

Meta refers to scams that do not involve paid ads as “organic,” which include free classified ads, fake dating profiles, and fraudulent medical claims. A report from December 2024 stated that users face approximately 22 billion organic scam attempts each day, alongside 15 billion scam ads, highlighting the company’s ongoing struggle to manage fraud effectively. Internal documents suggest that Meta’s efforts to police fraud are not capturing much of the scam activity occurring across its platforms.

In Singapore, police shared a list of 146 scams targeting local users, but Meta staff found that only 23% of them violated the platform’s policies. The remaining 77% went against the spirit of the rules but not their exact wording. Examples of unchecked scams included fake offers on designer clothes, false concert tickets, and job ads impersonating major tech firms. In one case, Meta discovered scam ads impersonating the Canadian prime minister, yet its existing rules would not flag the account.

Even when advertisers are caught scamming, enforcement can be lenient. Small advertisers must be flagged for scams eight times before being blocked, while larger ones can accumulate over 500 complaints without being shut down. Some scams generated significant revenue; in one case, four ads that were eventually removed had been linked to $67 million in monthly revenue.

One employee began publishing weekly reports highlighting the “Scammiest Scammer” to raise awareness, but some flagged accounts remained active for months. Meta also tried to deter scammers by charging them more in ad auctions, a practice it labeled “penalty bids.” Advertisers suspected of fraud had to bid higher amounts, reducing their competition against legitimate advertisers. The approach showed some success: scam reports fell, along with a slight dip in overall ad revenue.

With information from Reuters

