How Social Media Exacerbates Income Inequality
-- By AmayGupta - 10 Jan 2020
I don't think it is a stretch to say that our exposure to the lives of the rich and famous on Instagram shapes how we act as consumers and the attitudes we hold about ourselves and others. Despite the excess consumption Instagram-loving millennials are used to seeing in places like California, "richer" states can have some of the highest rates of poverty. Beyond the psychological effects of social media that coerce vain people like me into buying more things than I need and jeopardizing my own finances, one question I have is: does our use of social media worsen socioeconomic inequality and strengthen institutional racism? I believe the answer is a resounding yes.
How Our Data Can Lead to Worse Outcomes for Marginalized Groups

Our data will also make it more difficult to determine whether one is being discriminated against in loan applications. Moving toward models that estimate creditworthiness based on internet activity poses legal challenges when it comes to fighting discrimination cases. Credit-scoring tools that use thousands of data points collected without consumer knowledge may produce seemingly "objective" scores while obscuring discriminatory and subjective lending policies. An article in the Yale Journal of Law and Technology discusses how ZestFinance, a prominent player in the alternative credit-scoring industry, considers how quickly a loan applicant scrolls through the online terms and conditions when judging how responsible that applicant is. Spending habits, viewed in the context of a borrower's geographic location, are likewise used to gauge how conventional the borrower's spending is. Under current law, proving a violation of the ECOA, which protects against discrimination in credit transactions, requires plaintiffs to demonstrate either disparate treatment, by showing that the lender had a discriminatory intent or motive, or disparate impact, by showing that the lender's decisions had a disproportionately adverse effect on minorities. Because new credit-scoring tools used for housing integrate thousands of data points, these technologies make it incredibly difficult for plaintiffs to make prima facie cases of disparate impact.
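To make the concern concrete, here is a minimal sketch in Python, with invented feature names, weights, and data, of how a score assembled entirely from behavioral proxies like these can disadvantage one group without ever recording race. Nothing in it describes any actual vendor's model; it only illustrates why the resulting disparity is so hard for an individual applicant to see or prove.

<verbatim>
import random

random.seed(0)

def behavioral_score(scroll_seconds, spending_typicality):
    # Hypothetical weights; both inputs are behavioral proxies, not credit history.
    return 0.6 * min(scroll_seconds / 60, 1.0) + 0.4 * spending_typicality

def synthetic_applicant(group):
    # Invented correlation: group "B" applicants live in areas the model treats
    # as having less "conventional" spending, through no fault of their own.
    typicality = random.uniform(0.5, 1.0) if group == "A" else random.uniform(0.2, 0.7)
    scroll_seconds = random.uniform(5, 90)   # time spent on the terms and conditions
    return behavioral_score(scroll_seconds, typicality)

applicants = [("A", synthetic_applicant("A")) for _ in range(1000)] + \
             [("B", synthetic_applicant("B")) for _ in range(1000)]

THRESHOLD = 0.55  # arbitrary cutoff for "creditworthy"
for group in ("A", "B"):
    scores = [s for g, s in applicants if g == group]
    approval_rate = sum(s >= THRESHOLD for s in scores) / len(scores)
    print(f"Group {group}: approval rate {approval_rate:.0%}")

# The model never sees race, yet group B is approved far less often; a denied
# applicant sees only an opaque score, not the proxy that produced it.
</verbatim>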
There are no guarantees that algorithms that utilize our data will not reproduce existing patterns of discrimination or reflect biases that are prevalent in society. What bothers me even more is that low-income consumers may never even know that they were subjected to this type of insidious discrimination, nor will most of them have the legal resources to pursue a cause of action. Current trends toward arbitration certainly don't help, and damages for these violations tend to be low.

Remedies – So Where Do We Begin?
While I cannot posit a one-size-fits-all solution for every pattern of discrimination, I believe the main problem with racial discrimination in the credit industry is the lack of racial data that plaintiffs could rely on to prove disparate treatment or disparate impact. There is a clear reason this information goes uncollected: because the ECOA bans lenders from considering the race or ethnicity of applicants, lenders hesitate to collect it on credit applications, opting to use proxy variables instead. It may seem strange to argue that my answer to those harmed by excess data collection is to collect more data, but comparative race data in lending discrimination cases would allow plaintiffs to meet their evidentiary burdens more easily. Even if consumers had access to this data, I believe the burden of showing the absence of race discrimination should fall on the developers of credit-scoring tools. Some proposals to eliminate racial disparities, like the Model FaTSCA, advocate for disclosures that would give consumers more insight into the metrics on which they are scored. The social value of enabling a fair credit system should and does outweigh potential claims by developers that disclosing those metrics could be used to replicate their software products.
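As a worked illustration of the kind of comparison that comparative race data would enable, here is a short sketch using invented approval figures and the "four-fifths rule" that the EEOC's employment guidelines use as a rough screen for adverse impact; nothing in it reflects any actual lender's numbers.

<verbatim>
def selection_rate(approved, applied):
    return approved / applied

# Invented figures for a hypothetical lender's decisions over one year.
majority_rate = selection_rate(approved=640, applied=800)    # 80% approved
minority_rate = selection_rate(approved=280, applied=500)    # 56% approved

impact_ratio = minority_rate / majority_rate                 # 0.70
print(f"Impact ratio: {impact_ratio:.2f}")

# The EEOC's rule of thumb flags ratios below 0.8 as evidence of adverse impact;
# a plaintiff can only run this comparison if race data has been collected.
print("Possible disparate impact" if impact_ratio < 0.8 else "No flag under the 4/5 rule")
</verbatim>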
Furthermore, solutions to discrimination in the credit industry should be tied to remedying harms on a group basis. While individuals can contest denials of credit, the amalgamation of data used to discriminate against communities of color supports a strong inference of structural racism. Individual plaintiffs may be unable to claim damages because of statutes of limitations, causation problems, and legal costs. The current credit system has probably instilled in minority communities a sense of complacency about denials of credit and contributed to the view of minorities as burdens on our economic system rather than victims of it. Group remedies could include requiring credit issuers to make it easier for consumers to correct misinformation in credit applications, as well as requiring issuers to make significant investments in the communities whose consumers have been affected. While I recognize that these are broad statements devoid of specifics, I do not believe the credit system can be fixed solely by letting individuals litigate abusive lending patterns. Hopefully, requiring issuers to make intensive financial investments in the areas where they have discriminated will deter further discrimination and reduce reliance on systems that use data not directly tied to creditworthiness to make individual determinations.
---- |