Hiding hate in plain sight: How social media sites misinterpret sarcasm

Sara Aniano was reading a racist screed allegedly written by the suspect in the Buffalo, New York, shooting when a friend pointed out the infographics in the 180-page document. Aniano, a disinformation researcher, recalled seeing the same content on an Instagram account she started following last year.

The graphics about the supposed influence of the Jewish people came from a website claiming to celebrate the achievements of Jews. The website had several social media accounts, including one on Instagram. But Aniano suspected the website of trying to hide its true intentions — fueling hatred of Jews — by using tongue-in-cheek language.

One chart showed the number of Jewish presidents at Ivy League universities, implying that Jewish people control education. Another claimed Jews had largely pioneered modern American liberalism. The graphics appear designed to bolster the false conspiracy theory that Jews are trying to replace white Americans with nonwhite immigrants, a fraudulent claim that appears to have been part of the motivation for the murder of 10 Black people in a Buffalo supermarket.

“At the end of the day, of course, they don’t celebrate these people,” said Aniano, who has spoken with CNET in the past and now works for the Anti-Defamation League. “They put targets on their backs.”

Aniano reported the account to Instagram several times last year, but it remained online. In June, after CNET inquired in late May, the Meta-owned social media platform took down the account, which had been active for two years and had around 18,000 followers. The company said the account violated its rules but did not specify which ones. (CNET is not naming the account to avoid driving traffic to the affiliated website.) The website did not respond to a request for comment.

Instagram’s slow response underscores the challenge social networks face in policing content that uses humor, sarcasm or irony to disguise its true motives. Social networks have long struggled to balance free expression and safety online, a task that has only grown harder as extremists try to evade detection. As the 2022 US midterm elections approach, extremist violence is a growing concern.

From January to March, Instagram took action against 3.4 million pieces of hate speech content. The flood of posts on the Meta-owned social network means that not every report is reviewed by a human moderator. Instead, Instagram relies on automated technology that can’t always detect irony. Even human moderators can struggle to determine a user’s intent, making irony an effective tool for evading detection.

The account Aniano reported did not trigger a human review. “Our technology has revealed that this account likely does not violate our Community Guidelines,” the platform initially said in what appeared to be an automated response.

Meta, the parent company of Facebook and Instagram, relies on a mix of human moderators and automated technologies to police content. The social network has tried to improve artificial intelligence to better understand the connection between words and images in memes, which can use inside jokes.

A slow response

Instagram is not the only social media service the website has used for promotion. Accounts have appeared on Facebook, Twitter, TikTok, Telegram, 4chan and Patreon. The Buffalo shooter could have encountered the site’s material on any of them.

“It surprised me in the sense that I was right about its influence,” Aniano said. “But it didn’t surprise me that someone so deep into hate speech and anti-Semitism would latch onto it.”

Investigators work at the scene of a mass shooting at a grocery store in Buffalo, New York in May.

A white gunman shot dead 10 people at a grocery store on May 14 in a historically black neighborhood of Buffalo, New York. The shooting is being investigated as a hate crime and an act of racially motivated violent extremism.

Kent Nishimura/Los Angeles Times via Getty Images

Katie McCarthy, a researcher at the Anti-Defamation League’s Center on Extremism, said content from the website has been used to promote anti-Semitic tropes online. But the ADL has struggled to determine who runs the site, which is registered through Withheld for Privacy, an Iceland-based privacy service. The ADL, for example, was unable to confirm the site’s claim that it is run by two Jewish people.

Using humor and sarcasm allows extremists to circumvent bans, McCarthy said. They can always “say they’re just joking,” she said, adding that it’s a form of “plausible deniability.”

Meta responded more slowly than other platforms. In June, the company also deleted a Facebook account for the website that had 2,100 followers. On Instagram, users asked in the account’s comments why it highlighted some controversial figures, such as American producer Harvey Weinstein, who was sentenced to 23 years in prison for sex crimes. One Instagram user called the account “a sarcastic page,” while another said it was run by “anti-Semites” posing as Jews.

On Instagram, the website directed people to a Patreon page that asked users to contribute $2 a month to support the creation of “blogs and tools to fight misinformation.” Patreon said it removed the account for violating its “hate speech guidelines by propagating negative stereotypes and exclusionary content.”

Screenshots of the website’s Twitter account archived in the Wayback Machine show that the company had suspended it by December 2021. Twitter said the account violated its ban evasion policy, which prohibits “attempts to circumvent prior enforcement, including through the creation of new accounts.” The website created the account in 2020 and had more than 15,000 Twitter followers at the time it was suspended. Twitter also hides direct messages from the website behind a notice warning that they might be “suspicious.” Users can still click through the notice to view the messages.

The website also had a TikTok account with 331 followers, though it posted less frequently on the short-form video app. A video about Jews in US President Joe Biden’s administration was viewed 28,600 times. At the end of May, TikTok removed the account for violating its rules, though it did not specify which ones.

The website still shares content on the Telegram messaging platform, where it has around 10,000 subscribers. Telegram did not respond to a request for comment.

Deleting these accounts, Aniano said, helps reduce the spread of the toxic message.

“Removing it doesn’t mean the ideology disappears,” she said. “But it gives the general public fewer opportunities to access it themselves.”

The site’s graphics include crude stereotypes about the wealth and influence of the Jewish people. For example, one shows photos of 25 hedge fund managers pointing out that two-thirds of them are Jewish, an unsubtle suggestion that Jews are obsessed with money.

People can easily download the graphics and circulate them as memes across the internet. In some of them, photos of Jewish people are visible while images of non-Jews are hidden.

A persistent problem

Extremists can also create fake social media accounts to hide their intentions, a problem social media companies have struggled with for years. Facebook and other social networks focus on the behavior of a network of accounts, rather than its content, when cracking down on attempts to manipulate public debate.

In 2019, American journalist Yair Rosenberg tweeted that extremists were using the anonymous image board 4chan to urge people to create a “massive movement of fake Jewish profiles on Facebook, Twitter” and other social networks, noting that the approach had “the advantage of being uncensored by big tech.”

In 2020, CNN reported that a fake Twitter account claiming to be Jewish attempted to fuel tensions between Jews and black Americans.

Kesa White, a program research associate at American University’s Polarization and Extremism Research and Innovation Lab, said moderating tongue-in-cheek content can be difficult because extremists also edit memes to escape detection.

It is not enough, she said, for social media platforms to rely on artificial intelligence or human moderators. They need experts, such as researchers who are embedded in these communities and know what to look for.

“There are so many different layers that change every day, which makes it much harder for researchers and social media companies to track,” White said.

CNET’s Oscar Gonzalez contributed to this report.