Two state senators are proposing a new way to deal with hateful or violent content posted on social media sites.
State Sen. Brad Hoylman, D-Manhattan, introduced S.7568. The legislation, which is co-sponsored by Sen. Anna Kaplan, D-Carle Place, creates a cause of action under the General State Duties Act against anyone who creates or contributes to a condition that endangers public health or safety by promoting content the person knew was harmful, false or illegal. That person could be sued by the state attorney general, a municipal corporation counsel, or a private citizen. The legislation was introduced in late December and was referred to the Senate Judiciary Committee on January 5. State Sen. George Borrello, R-Sunset Bay, is a member of the Judiciary Committee.
The types of messages that Hoylman and Kaplan are targeting include:
• messages advocating the use of force that are intended to incite or produce imminent lawless action;
• messages advocating self-harm that are directed at inducing or producing imminent self-harm; and
• false statements of fact or fraudulent medical theories likely to endanger the safety or health of the public.
“Social media algorithms are specifically programmed to spread disinformation and hate speech at the expense of the public good. Prioritizing this type of content has real costs to public health and safety,” Hoylman said in a press release. “So when social media spreads anti-vaccine lies and helps domestic terrorists plan a riot at the United States Capitol, they must be held accountable. Our new legislation will hold social media companies accountable for the dangers they promote.”
Under the Communications Decency Act of 1996, digital platform companies enjoy legal protection both for the content their users post and for removing posts they find offensive. That protection from lawsuits applies to social media posts, uploaded videos, restaurant or doctor reviews, classified ads — and to the underworld of thousands of websites that profit from false and defamatory information about individuals.
Section 230 of the law, which provides the shield, was enacted when many of today’s most powerful social media companies didn’t even exist. It allowed companies like Facebook, Twitter and Google to grow into the giants they are today.
Frances Haugen, a former Facebook employee and whistleblower, presented a case in October that Facebook’s systems amplify hate and extremism online and fail to protect young users from harmful content. Her previous revelations have boosted legislative and regulatory efforts around the world aimed at cracking down on Big Tech, and she recently made a series of appearances before European lawmakers and officials crafting rules for social media companies.
Facebook and other social media companies use computer algorithms to rank and recommend content; they govern what appears in users’ news feeds. Haugen’s idea is to remove the Section 230 protections in cases where dominant, algorithm-driven content prioritizes massive user engagement over public safety.
Meta Platforms, the new name of Facebook’s parent company, declined to comment on specific legislative proposals. The company has, however, advocated for updated regulations.
“We are a free expression platform and have to make tough decisions every day about the balance between giving people a voice and limiting harmful content,” Meta said in a statement Wednesday. “It’s no surprise that Republicans and Democrats often disagree with our decisions — but they also disagree with each other. What we need is an updated set of rules for the Internet from Congress that businesses should follow, which is why we’ve been asking for them for nearly three years.”
Meta CEO Mark Zuckerberg has suggested changes that would grant internet platforms legal protection only if they can prove their systems for identifying illegal content are up to snuff. That requirement, however, could be harder for smaller tech companies and startups to meet, leading critics to accuse Facebook of pushing for rules that would ultimately benefit itself.
Hoylman said his legislation does not conflict with Section 230 of the Communications Decency Act because promoting content is an affirmative act, separate from merely hosting information, and is therefore not covered by Section 230’s protections.
“However, social media websites are no longer just a host for their users’ content,” Hoylman wrote in his legislative justification. “Many social media companies use complex algorithms designed to put the most controversial and provocative content in front of users as much as possible. These algorithms help social media companies drive engagement with their platform, keep users hooked, and increase profits. The social media companies that use these algorithms are not a passive forum for the exchange of ideas; they are active participants in the conversation.”