Computers, Privacy & the Constitution

Issues with Facial Recognition Technologies and the Need for Regulation of Profiling

-- By ShotaSugiura - 11 Mar 2022

Section I Introduction

Facial recognition technologies have become a prominent privacy issue. In 2020, IBM announced that it was canceling its facial recognition program; in that statement, IBM’s CEO, Arvind Krishna, expressed deep concern about “mass surveillance, racial profiling, violations of basic human rights and freedoms” [1]. In the same month, Amazon and Microsoft also announced that they would not sell their facial recognition products to the police [2][3]. These tech giants withdrew voluntarily from the facial recognition business even though facial recognition was widely regarded as an emerging business that could serve many places and services. The issue intrigues me because the withdrawal of these companies implies the necessity of regulation in this area. This short essay discusses the problems of facial recognition technologies and whether regulation is necessary in the US.
[1] https://www.cnn.com/2020/06/09/tech/ibm-facial-recognition-blm/index.html
[2] https://www.cnn.com/2020/06/10/tech/amazon-facial-recognition-moratorium/index.html
[3] https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/

Section II Background of the Withdrawal from Facial Recognition Technologies

As reported at the time, Amazon decided to bar the police from using its facial recognition system, “Rekognition,” for at least a year. By then, the risks of police use of facial recognition systems had become widely recognized. For example, in January 2020 an innocent man was arrested by the Detroit Police Department because a facial recognition system misidentified him [4]. Even before that incident, research institutions and citizen groups had warned about the inaccuracy of facial recognition systems and the danger of mass surveillance.
[4] https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

Section III Problems with Facial Recognition

Subsection A (Problem 1) Inaccuracy of Facial Recognition Technologies

What are the problems with facial recognition technologies? One apparent problem is inaccuracy. In 2018, a researcher at the MIT Media Lab published a study showing that facial recognition technologies make more errors when identifying darker-skinned women than when identifying lighter-skinned men [5]. Because facial recognition identifies a person by applying machine learning to patterns of human faces, a system that has fewer opportunities to learn from images of darker-skinned women will be correspondingly less accurate at identifying them. Inaccurate analysis by a facial recognition system can lead to a mistaken arrest, as in the Detroit Police Department case in 2020. Beyond law enforcement, it can also cause serious problems in airports, political conferences, or even hospitals.
[5] https://www.media.mit.edu/articles/facial-recognition-software-is-biased-towards-white-men-researcher-finds/
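To make the accuracy gap concrete, the following minimal Python sketch shows how misidentification rates might be compared across demographic groups; the evaluation records and group labels are entirely hypothetical and do not reflect any vendor's actual system or data.

from collections import defaultdict

# Hypothetical evaluation records: (demographic group, true identity, identity returned by the system).
# All values below are invented purely for illustration.
evaluation_results = [
    ("lighter-skinned man", "id_01", "id_01"),
    ("lighter-skinned man", "id_02", "id_02"),
    ("lighter-skinned man", "id_03", "id_03"),
    ("darker-skinned woman", "id_04", "id_04"),
    ("darker-skinned woman", "id_05", "id_09"),  # misidentification
    ("darker-skinned woman", "id_06", "id_07"),  # misidentification
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, true_id, predicted_id in evaluation_results:
    totals[group] += 1
    if predicted_id != true_id:
        errors[group] += 1

# Report the misidentification rate for each group; a large gap between groups
# is the kind of disparity the MIT Media Lab study documented.
for group, total in totals.items():
    rate = errors[group] / total
    print(f"{group}: {errors[group]}/{total} misidentified ({rate:.0%})")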

Subsection B (Problem 2) Profiling

Another issue is the profiling of people based on their gender, race, or other traits through the use of facial recognition technologies. Profiling is typically defined as the automated processing of data to evaluate certain personal aspects of a person, in particular to analyze or predict that person’s preferences, interests, economic situation, reliability, or movements. Because facial recognition identifies a person only by appearance, it can easily lead to biased decisions and inappropriate discrimination based on appearance, such as gender or skin color. Take, for example, a smart security camera that flags suspicious behavior in a scene. Such a camera may tend to single out people with certain traits, such as a specific gender or race, because of patterns it has learned from scenes of criminal behavior. In such a situation, a kind of bias or discrimination could end up being justified by artificial intelligence.

Section IV Is Regulation Needed?

Of the two major problems with facial recognition technologies, the second, profiling, is much more serious. At present, inaccuracy draws more public attention because its harm is apparent, as the Detroit Police Department case shows. Over the long term, however, inaccuracy may not remain a crucial problem. Any new, relatively undeveloped technology makes mistakes. DNA testing led to many false arrests and even false judgments in the 20th century, but its standards have since improved dramatically, and it has become a powerful tool in criminal proceedings. The accuracy of a new technology can be improved and its mistakes corrected, and the same should be true of facial recognition systems. The problem of accuracy may simply be a feature of the current transition period. Profiling is not: it is not a matter of the technology’s immaturity, and it needs regulation so that these technologies are used appropriately.

The EU’s General Data Protection Regulation (“GDPR”) offers an example of regulation of profiling. Under the GDPR, the collection, use, and disclosure of personal data must meet certain requirements: (i) processing must rest on a lawful basis, such as the data subject’s consent (Article 6); (ii) business operators must describe and notify people how their personal data is processed (Articles 13 and 14); and (iii) data subjects have the right not to be subject to a decision based solely on automated processing, including profiling (Article 22). Adopted in 2016, the GDPR already provides several protections against the dangers of profiling.

The US, by contrast, has no comprehensive data protection law at the federal level. Personal information is regulated only by sector (the Health Insurance Portability and Accountability Act, or “HIPAA,” for medical data and the Children’s Online Privacy Protection Act, or “COPPA,” for children’s data, for example). Some states, such as California, Virginia, and Colorado, have established data protection regulations at the state level. Among them, the California Privacy Rights Act (CPRA) stands out because it gives consumers opt-out rights against profiling. Regulation of facial recognition technologies, however, must apply nationwide; otherwise, people living in less regulated states would suffer disadvantages because of the differing standards of protection.



