LauraBaneFirstEssay 4 - 17 Nov 2024 - Main.LauraBane
-- LauraBane - 20 Oct 2024
Why the Stakes Are Higher Than Ever
Many wave away the dangers of unregulated social media by arguing that they are safe as long as they do not use the platforms themselves. However, as the 2024 election showed, this is far from true. "Low-information" voters swung largely for Donald Trump this year, helping him secure a victory over his opponent, Kamala Harris. This class of voters is characterized by its rejection of traditional information sources, such as news networks. Instead, these voters learn about politics almost exclusively through social media, making them susceptible to misinformation campaigns. This is not unique to 2024: both the 2016 and 2020 election seasons were marred by a surge of foreign bots that flooded social media platforms with damaging lies about both candidates and the general political climate. This time, however, there was an added element: billionaire Elon Musk had purchased Twitter (now called "X") in 2022 and turned it into a safe haven for right-wing extremism, unbanning hordes of Neo-Nazis and white supremacists and firing the site's previous class of content moderators. When Musk began funneling money into Trump's campaign, he simultaneously ramped up the site's promotion of pro-Trump content and ads, influencing the millions of voting-age Americans who use X. Even after Trump's election, Musk continues to wield massive influence: he has successfully shaped Trump's cabinet picks, which range from credibly accused pedophilic sex traffickers to Russian assets, and pushed the adoption of disastrous economic policies.
To believe that Musk is doing all of this purely to combat the "woke mind virus" is woefully naive: Musk and other billionaires experienced a massive wealth increase virtually overnight following Trump's election.
LauraBaneFirstEssay 3 - 17 Nov 2024 - Main.LauraBane
-- LauraBane - 20 Oct 2024
Introduction
Imagine that you are a twelve-year-old girl who has access to an Internet-capable device and is not subject to Panopticon-level supervision by her parents. Feeling the weight of peer pressure, you download social media app Instagram. Over the next several days, you will be inundated with posts which will permanently alter your self-conception and give rise to myriad new insecurities. In your pre-social media life, attractiveness was Boolean—people were either fat or thin, pretty or ugly. Now, your mind stirs with thoughts of buccal fat, thigh gaps, strawberry legs, upper lip-to-gums ratios, and 'P-' versus 'D-' shaped silhouettes. Over a decade later, you watch helplessly as the son of blood emerald dealers purchases a social media website and turns it into an alt-right hellscape whose incessant political advertising helps wannabe-dictator Donald Trump win a second term. Yet, because of the misconceptions surrounding technology-fueled socialization and the ultra-libertarian legal landscape which governs it, many either ignore these horrifying developments or see them as an inevitable price to pay for the “convenience” of online social networking.
How the Legal Landscape Enables This Behavior
To understand this psychologically violent phenomenon and identify solutions, one must first understand the laws which have allowed it to flourish. Because social media posts are undoubtedly forms of expression, they are subject to First Amendment jurisprudence. This means that despite social media platforms’ status as private companies and their consequent right to restrict constitutionally protected speech and expression such as racial slurs or nudity, the government cannot force them to regulate undesirable, but legal, speech. And, even in the case of unprotected speech and expression, such as child sexual abuse material (“CSAM”) or imminent threats of violence, social media companies are largely insulated from liability as a result of §230(c) of the Communications Decency Act of 1996, whose protections apply as long as an Internet platform simply provides a space for others to speak and removes objectionable content in good faith. This has proved disastrous: despite harrowing testimony at a congressional hearing earlier this year that Mark Zuckerberg’s Meta apps “pushe[d] pro-eating disorder content to vulnerable populations” and gave users a lukewarm warning that certain search terms may reveal CSAM but allowed users to “[s]ee results anyway,” Zuckerberg has not been prosecuted or sufficiently held accountable.
Why the Stakes Are Higher Than Ever
Many wave away the dangers of unregulated social media by arguing that they are safe as long as they do not use the platforms themselves. However, as the 2024 election showed, this is far from true. "Low-information" voters swung largely for Donald Trump this year, helping him secure a victory over his opponent, Kamala Harris. This class of voters is characterized by its rejection of traditional information sources, such as news networks. Instead, these voters learn about politics almost exclusively through social media, making them susceptible to misinformation campaigns. This is not unique to 2024: both the 2016 and 2020 election seasons were marred by a surge of foreign bots that flooded social media platforms with damaging lies about both candidates and the general political climate. This time, however, there was an added element: billionaire Elon Musk had purchased Twitter (now called "X") in 2022 and turned it into a safe haven for right-wing extremism, unbanning hundreds of thousands of Neo-Nazis and white supremacists and firing the site's previous class of content moderators. When Musk began funneling money into Trump's campaign, he simultaneously ramped up the site's promotion of pro-Trump content and ads, influencing the millions of voting-age Americans who use X. Even after Trump's election, Musk continues to wield massive influence: he has successfully shaped Trump's cabinet picks, which range from credibly accused pedophilic sex traffickers to Russian assets, and pushed the adoption of disastrous economic policies. To believe that Musk is doing all of this purely to combat the "woke mind virus" is woefully naive: Musk and other billionaires experienced a massive wealth increase virtually overnight following Trump's election.
Dismantling Misinformation and the Algorithm
In 2020, the Department of Justice recommended that Congress modify §230 to incentivize platforms to tackle illicit content, primarily via public shaming. This is totally inadequate. Millions of Americans loathe Musk and Zuckerberg, yet their voices are drowned out by the metaphorical sound of cash flow. For the immediate future—assuming that social media platforms will continue to be run by billionaires and attract millions of users across the country—the only possible solution is abolishing §230. Next, a hard deadline should be imposed on all Internet platform owners for removing illegal content, and targets of misinformation campaigns should be free to sue any Internet platform owner who allows defamatory statements to be displayed on its site. At first blush, this seems extreme. However, it is important to remember that no other Internet users enjoy the expectation of perpetrating harm with impunity. Moreover, the potential harms posed by allowing §230 to remain in place are significantly greater now than even a decade ago, given the technology industry's markedly increased influence over politics and the rise of scarily convincing deepfake technology. Finally, there should be legislation targeting companies that allow their ads to be run on platforms which sponsor illegal or defamatory content.
Even if §230 is eliminated, there may be a longer-term solution: the creation of decentralized online networks by which people can communicate with one another and edit each other’s posts far more freely than they can on current social media platforms. At first, it seems counterintuitive—didn’t I say that a lack of regulation was the problem? Yet, here, regulation exists—it just exists in the hands of many. Additionally, existing criminal laws barring the posting and sharing of illegal content online would still govern. In this scenario, there would not be an overarching power imbalance between a single owner who wants to make money by partnering with producers of consumer goods and millions of disenfranchised users, so people would be able to self-select to act as moderators without fear of being fired (as was the case at Musk's X). Under this framework, people would have to make a conscious choice to seek out online fora for their interests and could remove content which they find disturbing. This would foster greater accountability, a more vested interest in one's Internet usage, and (hopefully) a healthier, more informed populace.
LauraBaneFirstEssay 2 - 10 Nov 2024 - Main.EbenMoglen
-- LauraBane - 20 Oct 2024
You use far too much space summarizing legal material that can be addressed with one link. But you do not explain at all the central legal hypothesis of the draft: that regulation is necessary. So long as you have a working legislative majority in both houses sufficiently hostile to the platforms and a president unwilling to veto the legislation (conditions which might be expected to exist, I suppose, were it not for Elon Musk serving as de facto VP), why not just repeal 230 altogether? No one except platform owners thinks they deserve an immense subsidy through immunity from ordinary rules of liability. Those rules, if back in force, would pervasively alter their behavior in the desired direction, overnight.
So the primary problem is that Musk helped to buy the presidency for Donald Trump to ensure that 230 immunity remains in place. Wouldn't it be useful to explain that fact, to contest it if you doubt it, and in either event to reason directly about the consequences rather than speculate counter-factually?
Sources
Why aren't these just links anchored to the relevant text? The Web allows us to make it possible for the reader to reach the source with a click, and the wiki makes all linking easy. So why the cumbersome three-step process to do what comes naturally?
LauraBaneFirstEssay 1 - 20 Oct 2024 - Main.LauraBane
-- LauraBane - 20 Oct 2024
Killing (The Algorithm) In The Name (Of Liberation)
Introduction
Imagine that you are a twelve-year-old girl who has access to an Internet-capable device and is not subject to Panopticon-level supervision by her parents. Feeling the weight of peer pressure, you download social media app Instagram. Over the next several days, you will be inundated with posts which will permanently alter your self-conception and give rise to myriad new insecurities. In your pre-social media life, attractiveness was Boolean—people were either fat or thin, pretty or ugly. Now, your mind stirs with thoughts of buccal fat, thigh gaps, strawberry legs, upper lip-to-gums ratios, and 'P-' versus 'D-' shaped silhouettes. You stop eating, beg your parents to buy you retinol cream and let you save up for cosmetic surgery, and stare at yourself in the mirror until you no longer recognize your reflection. In any other context, this sort of disruption and anguish would be deemed an impermissible psychological operation, the stuff of conspiracy theories like MK Ultra. Yet, because of the misconceptions surrounding technology-fueled socialization and the ultra-libertarian legal landscape which governs it, many view the rise in eating disorders and suicidal ideation among children and young adults as an inevitable price to pay for the “convenience” of online social networking.
The Relevant Legal Landscape
To understand this psychologically violent phenomenon and identify solutions, one must first understand the laws which have allowed it to flourish. Because social media posts are undoubtedly forms of expression,_1_ they are subject to First Amendment jurisprudence. This means that despite social media platforms’ status as private companies and their consequent right to restrict constitutionally protected speech and expression such as racial slurs or nudity, the government cannot force them to regulate undesirable, but legal, speech._2_ And, even in the case of unprotected speech and expression, such as child sexual abuse material (“CSAM”) or imminent threats of violence, social media companies are largely insulated from liability as a result of §230(c) of the Communications Decency Act of 1996._3_
§230(c)(1) prevents “interactive computer service” providers from being deemed the “publisher or speaker of any information provided by another information content provider.” Id. §230(c)(2) protects “operators of interactive computer services” who remove, in good faith, material that is “obscene, lewd, . . . excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Id. One key factor in applying §230(c) immunity to social media platforms is whether the platform engaged in the speech itself or merely provided a space for others to speak. For example, the Third Circuit recently permitted the lawsuit of a mother whose child died as a result of toxic messaging on the social media app TikTok to proceed, reasoning that because TikTok curates the content its algorithm presents to users, it is a publisher or speaker of that content, thereby rendering §230 inapplicable._4_
Another important element of §230(c) analysis is deciding what constitutes the “good faith” removal of objectionable content. Despite harrowing testimony at a congressional hearing earlier this year that Mark Zuckerberg’s Meta apps “pushe[d] pro-eating disorder content to vulnerable populations”_5_ and gave users a lukewarm warning that certain search terms may reveal CSAM but allowed users to “[s]ee results anyway,”_6_ Zuckerberg has not been held sufficiently accountable._7_ Zuckerberg has paid lip service to curbing his platforms’ negative effects on minors, such as requiring all Facebook users to certify that they are at least thirteen years old, but whistleblower reports have shown that children can simply lie about their age when creating an account. Id. And, although Facebook “could employ the mechanisms it uses to analyze other types of audiences on the platform” and eliminate underage users, it “just chooses not to do so.” Id.
Potential Solutions
In 2020, the Department of Justice issued recommendations to Congress to modify §230._8_ These recommendations included (i) “incentivizing platforms to deal with illicit content,” primarily by publicly shaming companies that “solicit illicit activity,” (ii) removing protections for sites known to “host child abuse, terrorism, and cyber-stalking,” including any site which has “been notified by courts of” the existence of such material, (iii) removing protections from civil lawsuits initiated by the federal government, and (iv) more clearly defining “good faith” in the context of the statute. Id.
For the immediate future—assuming that social media platforms will continue to be run by billionaires and attract millions of users across the country—I propose two alternatives, partially because I believe that they will effectively target speech which does not rise to the same level of legal actionability as child abuse or terrorism but is nonetheless undesirable. First, minors’ use of social media should be regulated in a manner similar to drinking—there should be a legally enforceable minimum age requirement. Second, social media platforms should be required to ask all users for photo identification in order to verify users’ identities. This will not only ensure that underage people are not using the platforms, but also disincentivize the posting or sharing of illegal material, such as CSAM.
However, I am deeply uncomfortable with social media platforms having access to information such as one’s home address or social security number. For that reason, I propose a longer term solution: the creation of decentralized online networks by which people can communicate with one another and edit each other’s posts far more freely than they can on current social media platforms. At first, it seems counterintuitive—didn’t I say that a lack of regulation was the problem? Yet, here, regulation exists—it just exists in the hands of many. Without the overarching power imbalance between a single owner who wants to make money by partnering with producers of consumer goods and millions of disenfranchised users, people would be able to self-select to act as moderators, and the perverted ‘algorithm’ would cease to exist. Instead, people would have to make a conscious choice to seek out online fora for their interests and could remove content which they find disturbing. And, because everyone would be speaking self-representatively, rather than amplifying others’ speech (as Zuckerberg does under the current social media model), §230 would be rendered obsolete, thereby facilitating increased accountability.
Sources
1. Elonis v. U.S., 575 U.S. 723 (2015).
2. https://www.freedomforum.org/free-speech-on-social-media/
3. https://en.wikipedia.org/wiki/Section_230
4. https://www.wsj.com/us-news/law/appeals-court-raises-questions-over-section-230-law-giving-social-media-companies-legal-immunity-af4c1e6c
5. https://www.klobuchar.senate.gov/public/index.cfm/2021/10/at-commerce-committee-hearing-with-facebook-whistleblower-klobuchar-highlights-how-facebook-algorithm-promotes-eating-disorders-among-young-users
6. https://www.judiciary.senate.gov/press/releases/judiciary-commerce-committee-leaders-want-answers-from-meta-on-pedophile-network-story
7. https://time.com/6104070/facebook-whistleblower-congressional-hearing-takeaways/
8. See footnote 3.