Law in the Internet Society
-- LauraBane - 20 Oct 2024

Killing (The Algorithm) In The Name (Of Liberation)

Introduction

Imagine that you are a twelve-year-old girl with access to an Internet-capable device and parents who fall short of Panopticon-level supervision. Feeling the weight of peer pressure, you download the social media app Instagram. Over the next several days, you are inundated with posts that permanently alter your self-conception and give rise to myriad new insecurities. In your pre-social media life, attractiveness was Boolean: people were either fat or thin, pretty or ugly. Now, your mind stirs with thoughts of buccal fat, thigh gaps, strawberry legs, upper lip-to-gums ratios, and 'P-' versus 'D-' shaped silhouettes. Over a decade later, you watch helplessly as the son of blood emerald dealers purchases a social media website and turns it into an alt-right hellscape whose incessant political advertising helps wannabe-dictator Donald Trump win a second term. Yet, because of the misconceptions surrounding technology-fueled socialization and the ultra-libertarian legal landscape that governs it, many either ignore these horrifying developments or see them as an inevitable price to pay for the "convenience" of online social networking.

How the Legal Landscape Enables This Behavior

To understand this psychologically violent phenomenon and identify solutions, one must first understand the laws that have allowed it to flourish. Because social media posts are undoubtedly forms of expression, they are subject to First Amendment jurisprudence. Social media platforms are private companies, and may therefore restrict even constitutionally protected speech and expression such as racial slurs or nudity, but the government cannot force them to regulate undesirable, yet legal, speech. And even in the case of unprotected speech and expression, such as child sexual abuse material ("CSAM") or imminent threats of violence, social media companies are largely insulated from liability by §230(c) of the Communications Decency Act of 1996, whose protections apply so long as an Internet platform merely provides a space for others to speak and removes objectionable content in good faith. This has proved disastrous: despite harrowing testimony at a congressional hearing earlier this year that Mark Zuckerberg's Meta apps "pushe[d] pro-eating disorder content to vulnerable populations" and gave users only a lukewarm warning that certain search terms might reveal CSAM while still allowing them to "[s]ee results anyway," Zuckerberg has not been prosecuted or otherwise meaningfully held accountable.

Why the Stakes Are Higher Than Ever

Many wave away the dangers of unregulated social media by arguing that they are safe so long as they do not use the platforms themselves. However, as the 2024 election demonstrated, this is far from true. "Low-information" voters swung largely for Donald Trump this year, helping him secure a victory over his opponent, Kamala Harris. This class of voters is characterized by its rejection of traditional information sources, such as news networks. Instead, these voters learn about politics almost exclusively through social media, making them susceptible to misinformation campaigns. This is not unique to 2024: both the 2016 and 2020 election seasons were marred by surges of foreign bots that flooded social media platforms with damaging lies about the candidates and the general political climate. This time, however, there was an added element: billionaire Elon Musk had purchased Twitter (now "X") in 2022 and turned it into a safe haven for right-wing extremism, unbanning hundreds of thousands of Neo-Nazis and white supremacists and firing the site's previous class of content moderators. When Musk began funneling money into Trump's campaign, he simultaneously ramped up the site's promotion of pro-Trump content and ads, influencing the millions of voting-age Americans who use X. Even after Trump's election, Musk continues to wield massive influence: he has successfully swayed Trump's cabinet picks, which range from credibly accused pedophilic sex traffickers to Russian assets, and encouraged the adoption of disastrous economic policies. To believe that Musk is doing all of this purely to combat the "woke mind virus" is woefully naive: Musk and other billionaires saw their wealth balloon virtually overnight following Trump's election.

Dismantling Misinformation and the Algorithm

In 2020, the Department of Justice recommended that Congress modify §230 to incentivize platforms to tackle illicit content, primarily via public shaming. This is wholly inadequate. Millions of Americans loathe Musk and Zuckerberg, yet their voices are drowned out by the metaphorical sound of cash flow. For the immediate future, assuming that social media platforms will continue to be run by billionaires and attract millions of users across the country, the only viable solution is abolishing §230. Next, a hard deadline should be imposed on all Internet platform owners for removing illegal content, and targets of misinformation campaigns should be free to sue any platform owner who allows defamatory statements to remain on its site. At first blush, this seems extreme. It is important to remember, however, that no other class of Internet users enjoys any expectation of inflicting harm with impunity. Moreover, the potential harms of leaving §230 in place are significantly greater now than even a decade ago, given the technology industry's markedly increased influence over politics and the rise of frighteningly convincing deepfake technology. Finally, legislation should target companies that allow their ads to run on platforms that host illegal or defamatory content.

Even if §230 is eliminated, there may be a longer-term solution: the creation of decentralized online networks through which people can communicate with one another and edit each other's posts far more freely than they can on current social media platforms. At first, this seems counterintuitive: didn't I say that a lack of regulation was the problem? Yet here, regulation exists; it simply rests in the hands of many. Moreover, existing criminal laws barring the posting and sharing of illegal content online would still govern. In this scenario, there would be no overarching power imbalance between a single owner seeking to profit by partnering with producers of consumer goods and millions of disenfranchised users, so people could self-select to act as moderators without fear of firing (as happened at Musk's X). Under this framework, people would make a conscious choice to seek out online fora for their interests and could remove content they find disturbing. This would foster greater accountability, a more vested interest in one's Internet usage, and (hopefully) a healthier, more informed populace.

r3 - 17 Nov 2024 - 07:51:20 - LauraBane