Racist and violent ideas move from web margins to mainstream sites

On March 30, the young man accused of the mass shooting at a Tops grocery store in Buffalo, New York, browsed an assortment of racist and anti-Semitic websites. On BitChute, a video-sharing site known for harboring right-wing extremism, he listened to a lecture by a Finnish extremist on the decline of the American middle class. On YouTube, he found an eerie video of a car driving through Black neighborhoods in Detroit.

Over the next week, his online writing shows, he lurked in private chat rooms on Reddit and 4chan but also read articles about race on HuffPost and Medium. He watched local television reports of horrific crimes. He flipped between “documentaries” on extremist websites and gun tutorials on YouTube.

The young man, who was indicted by a grand jury last week, was portrayed by authorities and some media as a troubled outcast who acted alone when he killed 10 Black people in the grocery store and injured three others. In fact, he dwelt in numerous online communities where he and others consumed and shared racist and violent content.

As the number of mass shootings increases, experts say many of the disturbing ideas that fuel the atrocities are no longer confined to a handful of hard-to-find dark corners of the web. More and more outlets, both fringe and mainstream, are hosting bigoted content, often in the name of free speech. And the inability – or unwillingness – of online services to contain violent content threatens to draw more people to hateful posts.

Many of the images and texts that appeared in the young man’s voluminous writings, which included a diary and a 180-page “manifesto,” have circulated online for years. They have often surfaced on some of the most popular sites in the world, like Reddit and Twitter.

His path to radicalization, illustrated in these documents, reveals the limits of efforts by companies like Twitter and Google to moderate posts, images and videos that promote extremism and violence. Enough content remains to form a pipeline leading users to more extreme websites within a click or two.

“It’s pretty prolific on the Internet,” said Eric K. Ward, senior fellow at the Southern Poverty Law Center, who is also executive director of the Western States Center, a nonprofit research organization. “It’s not just going to fall into your lap; you have to start looking for it. But once you start looking for it, the problem is that it starts raining down on a person in abundance.”

The Buffalo attack has renewed attention on the role that social media and other websites continue to play in acts of violent extremism, with criticism coming from the public as well as government officials.

“The fact that this act of barbarism, this execution of innocent human beings, could be live-streamed on social media platforms and not be taken down within a second says to me that there is a responsibility out there,” said Governor Kathy Hochul of New York after the shooting in Buffalo. Four days later, State Attorney General Letitia James announced that she had opened an investigation into the role played by the platforms.

Facebook highlighted its rules and policies that prohibit hateful content. In a statement, a spokesperson said the platform detects more than 96% of content related to hate organizations before it is flagged. Twitter declined to comment. Some of the social media posts on Facebook, Twitter and Reddit identified by The New York Times through reverse image searches have been removed; some of the accounts that shared the images have been suspended.

The man accused of the murders, Payton Gendron, 18, detailed his plans for the attack on Discord, a chat app that emerged from the gaming world in 2015, and streamed the attack live on Amazon-owned Twitch. The company managed to take down his video within two minutes, but many of the misinformation sources he cited remain online to this day.

His paper trail provides chilling insight into how he planned a deadly assault online, collecting advice on weapons and tactics and finding inspiration in other racist screeds and previous attacks, which he imitated extensively in his own. Taken together, the content formed a twisted, racist view of reality, one the shooter presented as an alternative to mainstream opinion.

“How can a shooter like me be stopped, you ask?” he wrote on Discord in April, more than a month before the shooting. “The only way is to prevent them from learning the truth.”

His writings map in detail the websites that motivated him. Much of the information he gathered consisted of links or images he chose to match his racist views, reflecting the kind of online life he led.

By his own account, the young man’s radicalization began after the onset of the COVID-19 pandemic, when he was largely confined to his home like millions of other Americans. He described getting his news mostly from Reddit before joining 4chan, the online message board. He followed boards about guns and the outdoors before finding another devoted to politics, eventually settling on one that served up a toxic mix of racism, extremism and misinformation.

Although he frequented fringe sites like 4chan, he also spent a lot of time on mainstream platforms, according to his own records, particularly YouTube, where he found graphic police body-camera footage and videos offering gun tips and tricks. As the day of the attack approached, the shooter watched more YouTube videos about mass shootings and shootouts with police.

YouTube said it reviewed all the videos that appeared in the diary. Three videos were removed for linking to websites that violated YouTube’s firearms policy, which “prohibits content intended to instruct viewers on how to build guns, make accessories that convert a firearm to automatic fire or live stream content that shows someone handling a firearm,” according to YouTube spokesperson Jack Malon.

At the center of the shooting, like others before it, was a false belief that an international Jewish conspiracy intends to replace white voters with immigrants who will gradually seize political power in America.

The conspiracy, known as the “Great Replacement Theory,” has roots that date back at least to the Tsarist Russian anti-Semitic hoax called “The Protocols of the Elders of Zion,” which purported to reveal a Jewish plot to dominate Christian Europe.

It resurfaced more recently in the works of two French writers, Jean Raspail and Renaud Camus, who, four decades apart, imagined waves of immigrants taking power in France. It was Camus who popularized the term “the great replacement” in a 2011 book of the same name.

Judging from the documents he posted, Gendron appeared to have read neither; instead, he attributed the notion of a “great replacement” to writings posted online by the gunman who murdered 51 Muslims at two mosques in Christchurch, New Zealand, in 2019.

After that attack, New Zealand Prime Minister Jacinda Ardern led an international pact, called the Christchurch Call, in which governments and big tech companies committed to eliminating terrorist and extremist content online. Although the agreement carries no legal sanctions, the Trump administration refused to sign, citing the principle of free speech.

Gendron’s online experience shows that the writings and video clips associated with the Christchurch shooting remain available to inspire further acts of racially motivated violence. He referred to both several times.

The Anti-Defamation League warned last year that the “great replacement” had moved from the fringes of white supremacist belief to the mainstream, pointing to protesters’ chants at the 2017 “Unite the Right” rally in Charlottesville, Virginia, which erupted into violence, and to Tucker Carlson’s comments on Fox News.

“Most of us don’t know the original story,” said Ward of the Southern Poverty Law Center. “What we know is the narrative, and the narrative of the Great Replacement Theory has been legitimized by elected officials and public figures to such an extent that the origins of the story no longer need to be told. People are coming to understand it as something like conventional wisdom. And that’s the scary thing.”

Despite the best efforts of some major social media platforms to moderate online content, the algorithms they use – often designed to show users posts they will read, watch and click on – can accelerate the spread of misinformation and other harmful content.

Media Matters for America, a liberal-leaning nonprofit, said in May that its researchers had found at least 50 ads on Facebook over the past two years promoting aspects of the “great replacement” and related themes. Many of the ads came from candidates for political office, even though the company, now known as Meta, announced in 2019 that it would ban white nationalist and white separatist content from Facebook and Instagram.

The organization’s researchers also found that 907 posts on the same themes on right-wing sites attracted more than 1.5 million engagements, far more than posts intended to debunk them received.

Although the video of Gendron’s shooting was removed from Twitch, it resurfaced on 4chan even while he was still at the crime scene. The video has since spread to other fringe platforms like Gab and eventually to mainstream platforms like Twitter, Reddit and Facebook.

The advent of social media has allowed, in a relatively short period of time, nefarious ideas and conspiracies that once simmered in relative isolation to proliferate in society, bringing together people driven by hatred, said Angelo Carusone, president of Media Matters for America.

“They are no longer isolated,” he said. “They are connected.”

[This article originally appeared in The New York Times.]