Meta is facing new accusations that it is not doing enough to prevent the spread of hate speech and violent content through Facebook ads. A recent investigation found that eight such ads, targeting European audiences, were approved despite flagrant violations of the company’s hate speech and violence policies.
The study comes from the watchdog organization Ekō, which is revealing its findings to call attention to the social network’s “sub-standard moderation practices” ahead of the Digital Services Act (DSA) taking effect in Europe later this week.
It describes how the organization attempted to buy 13 Facebook ads over a few days in early August, all of which used AI-generated graphics and contained content that clearly violated the company’s guidelines.
Ekō removed the ads before any users could see them. The group asked that the precise wording of the advertisements be withheld, but provided descriptions of some of the most severe cases.
Approved ads included one in France that “called for the execution of a prominent MEP because of their stance on immigration,” as well as one targeting German users that “called for synagogues to be burnt to the ground to ‘protect White Germans.'”
Meta also approved advertisements in Spain claiming that the most recent election was rigged and urging people to join violent protests to overturn it.
“This report was based on a very small sample of ads and is not representative of the number of ads we review daily across the world,” a spokesperson for Meta said in a statement.
“Our ads review process has several layers of analysis and detection, both before and after an ad goes live. We’re taking extensive steps in response to the DSA and continue to invest significant resources to protect elections and guard against hate speech as well as against violence and incitement.”
While a few of the ads were stopped by Meta’s checks, Ekō claims they were rejected because they were classified as political, not because of their violent and hateful content. (Political advertisers must go through an additional vetting process before they can place ads with the company.)
Ekō is using the research to lobby for stronger safeguards under the DSA, a broad law that requires tech companies to limit certain types of targeted advertising and give consumers the option to opt out of recommendation algorithms.
(Several platforms, including Facebook, Instagram, and TikTok, have recently changed their policies to comply with the latter requirement.) The law also calls on platforms to detect and mitigate “systemic risks,” such as those associated with unlawful and violent content.
“With a few clicks, we were able to prove just how easy it is for bad actors to spread hate speech and disinformation,” Vicky Wyatt, Ekō’s campaign director, said in a statement. “With EU elections around the corner, European leaders must enforce the DSA to its fullest extent and finally rein in these toxic companies.”