JasmineBoviaFirstEssay 4 - 21 Jan 2024 - Main.EbenMoglen
|
|
META TOPICPARENT | name="FirstEssay" |
The Problem with RAIs | | Here, it is clear that the Court has fallen for the software’s “illusion of precision”; the Court in this instance signals that it is willing to allow discrimination against private citizens in the interest of bolstering the use of what it believes to be a more accurate, more precise decisionmaker. Thus, the Wisconsin Supreme Court errs in precisely the way that Zuboff predicts. Avoidable discrimination against people of certain genders, races, and socioeconomic statuses is and will continue to be enforced so long as there remains a fundamental misunderstanding of what AI is, what it entails, and how exactly decision support software like RAIs differs from artificial intelligence.
| |
> > |
I think a better draft would be more than 640 words long. Almost a quarter of the draft is spent laying out the procedural history of a case you want to discuss, but whose specific holding you compress into half a sentence, and whose reasoning, of which you are dismissive, you do not analyze or quote. Zuboff, too, is mentioned, but not actually cited, quoted or discussed. Any statistical argument about the real world can be harmed by excess precision, and "computer says no" reasoning has been the hallmark of bureaucracy (with or without computers) since Mesopotamia, at least. And to say that anyone in particular "predicts" that the incidence of power will tend to disfavor those who are already disfavored (Xs in a world of Ys, whatever X and Y may be) is rather like awarding credit for predicting sunrise.
Clearer argument would therefore help, too. Can we create systems for assessing risk of future criminal behavior, or is such an effort inherently impossible, such that even if the law requires our public agencies to do so they should refuse to make such systems? If they are possible, among the flaws from which they can be expected to suffer are inaccuracy and bias. Both inaccuracy (for example, forecasting 50% chance of rain 50% of the time when it only rains 10% of the time in fact), and bias (forecasting rain twice as often on Mondays as on Fridays) can exist without making weather forecasts a bad idea. We can also suppose that there are situations in which measures to improve accuracy will also increase bias, or in more general terms, that improvement doesn't always mean achievement of the optimal. So some principles should emerge from the inquiry. If "due process" and "equal protection" are the labels we apply to those principles, what should be in each jar?
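To make the distinction concrete, here is a minimal illustrative sketch (the 10% base rate, the forecast probabilities, and the Monday/Friday split are the invented numbers from the example above, not measurements of any real forecaster) that measures the two defects separately:
<verbatim>
# Toy model of the weather-forecast example above: one forecaster that is both
# miscalibrated (says "rain" far more often than it rains) and biased (treats
# Mondays and Fridays differently even though the weather does not).
import random

random.seed(0)
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def it_rains():
    # Ground truth: it rains 10% of the time, on every day alike.
    return random.random() < 0.10

def forecast_rain(day):
    # Hypothetical forecaster: predicts rain about half the time overall,
    # and twice as often on Mondays as on Fridays.
    chance = {"Mon": 0.8, "Tue": 0.5, "Wed": 0.5, "Thu": 0.5, "Fri": 0.4}[day]
    return random.random() < chance

trials = [(day, forecast_rain(day), it_rains())
          for _ in range(10_000) for day in DAYS]

# Inaccuracy (miscalibration): how often rain is forecast vs. how often it falls.
forecast_rate = sum(f for _, f, _ in trials) / len(trials)
actual_rate = sum(r for _, _, r in trials) / len(trials)
print(f"rain forecast {forecast_rate:.0%} of the time; actual rain {actual_rate:.0%}")

# Bias: the same forecaster treats Mondays and Fridays differently.
for day in ("Mon", "Fri"):
    calls = [f for d, f, _ in trials if d == day]
    print(f"{day}: rain forecast {sum(calls) / len(calls):.0%} of the time")
</verbatim>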
| |
|
JasmineBoviaFirstEssay 3 - 20 Jan 2024 - Main.JasmineBovia
|
|
META TOPICPARENT | name="FirstEssay" |
| |
< < | PSRs, RAIs, and the Fight Against AI | > > | The Problem with RAIs | | | |
< < | Introduction: | | | |
< < | Although Artificial Intelligence models have existed in some form since the 1950s, 2022 marked the beginning of what has become known as the “AI Boom”, a term describing the rapid expansion of Artificial Intelligence usage into the mainstream. This technological boom, spurred by large-language models like OpenAI’s ChatGPT and Meta’s LLaMA, has become increasingly observable not only in the public sphere but in a number of professional fields such as journalism, medicine, and, notably, law. This paper examines the potentially negative consequences of AI usage on the legal sector, specifically the judiciary. Further, it suggests some preliminary measures to limit, if not completely curb, the role AI plays in judgment. | > > | Increasingly implemented but continually misunderstood, Risk Assessment Instruments, or RAIs, have become a common presence in the criminal legal process, with judges in all fifty states using them in decisions about bail amounts, flight risk, and even sentence lengths for criminal defendants. A major issue facing the implementation of this software, however, is a fundamental misunderstanding by courts and legislators alike of how exactly these algorithmic tools work. From legislation to court decisions on the topic, there persists a misconception that RAIs act as infallible legal arbiters capable of making more precise decisions based on the data at their disposal. The evidence is more troubling: last August, the Journal of Criminal Justice published a study of the validation techniques used by private companies to measure both the accuracy and the risk of bias of nine of the most popular RAIs used countrywide. After evaluating the efficacy measurements reported for each of the nine tools, the study concluded that the “extent and quality of the evidence in support of the tools” was typically “poor”. | |
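To give a concrete sense of what such an “efficacy measurement” looks like, the sketch below is purely illustrative (the scores and outcomes are invented, not data from any of the nine tools) and computes the statistic most commonly reported as validation evidence for risk instruments, the AUC: the probability that a randomly chosen reoffender is scored above a randomly chosen non-reoffender.
<verbatim>
# Illustrative only: AUC computed on invented (risk score, reoffended?) pairs.
# AUC is the chance that a randomly chosen reoffender received a higher score
# than a randomly chosen non-reoffender; 0.5 is coin-flipping, 1.0 is perfect.
from itertools import product

records = [(7, True), (5, True), (9, True), (8, True), (6, True),
           (3, False), (6, False), (2, False), (4, False), (5, False)]

reoffender_scores = [s for s, reoffended in records if reoffended]
other_scores = [s for s, reoffended in records if not reoffended]

pairs = list(product(reoffender_scores, other_scores))
wins = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a, b in pairs)
print(f"AUC = {wins / len(pairs):.2f}")
</verbatim>
A single summary statistic of this kind says nothing about how a tool’s errors are distributed across groups.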
< < | AI and the Judiciary: | > > | As Shoshana Zuboff argues in her book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, the illusion of accuracy created by decision support software causes avoidable public harm, and that harm is unequally distributed across class and race. The misunderstanding of RAIs described above is a prime example of the argument Zuboff sets forth, and the argument reiterated here. Take, for example, the courts’ treatment of legal challenges against RAIs. Although fairly new, Risk Assessment Instruments have already faced accusations of lack of transparency, discrimination, and potentially infringing practices. Perhaps most notable is State v. Loomis, a 2016 discrimination and Due Process challenge to the technology before the Wisconsin Supreme Court. | |
< < | While the usage of Artificial Intelligence within the legal sphere as a whole has been met with rightful controversy, AI’s effect on the judiciary is especially troubling. According to the American Bar Association, numerous states have begun incorporating AI models into judicial practice as evaluation tools meant to aid in the generation of Pre-Sentence Reports (PSRs). Risk Assessment Tools are one specific class of AI model that rely on the fact patterns and outcomes of previous cases to calculate metrics such as recidivism potential for criminal defendants. These metrics play an increasingly instrumental role in PSRs and, consequently, in the sentencing outcomes of criminal cases. Sentencing courts have become increasingly reliant on these AI models, to disastrous effect; already, the use of this software in PSR generation has been the subject of legal challenges on Due Process grounds. An investigative article published by ProPublica highlighted one of the glaring issues with state judiciaries’ use of AI tools in criminal cases. Although limited data currently exists on these AI models, studies are beginning to show that risk assessment tools perpetuate racial bias in their assessments. The recidivism-risk software COMPAS, developed by the for-profit company Equivant, is a stark example; Black defendants were almost twice as likely as white defendants to be wrongly labeled as at “high risk” of recidivism. Conversely, white defendants were much more likely than Black defendants to be incorrectly considered at “low risk” of reoffending. This is far from the only problem with Artificial Intelligence models like COMPAS. Another potential issue with sentencing courts’ use of these tools is inherent to their very nature. Artificial intelligence learns by constantly adapting its output to expanding data sets. These ever-evolving algorithms could mean skewed results for defendants as more data becomes available; the machine’s determination of a fair sentence for a defendant one day can, in theory, be completely different from its determination of a fair sentence for a future defendant with an identical fact pattern. Further, the American Bar Association correctly posits that the use of computer-generated evaluations for determining matters such as recidivism risk removes the necessary human aspect of sentencing. Where human judges are able to see beyond fact patterns and take more nuanced views of the defendants in front of them, AI software can only see the numbers, producing distressingly clinical results. With these problems in mind, it is easy to see why the use of AI tools within the judiciary remains controversial. | > > | The case arose from the early 2013 arrest of Wisconsin resident Eric Loomis who, upon pleading guilty to two charges in connection with a drive-by shooting, was sentenced to six years in prison based largely on the findings of COMPAS, a risk assessment tool used to evaluate Loomis upon his arrest. In response to the use of the software in his sentencing, Loomis moved for post-conviction relief on the ground that, because the source code behind COMPAS’ risk assessment is a trade secret and therefore unknowable to the defendant, he was unable to properly challenge the evidence against him – a violation of his due process rights to be sentenced based on accurate information, to know the evidence against him, and to receive an individualized sentence.
Loomis additionally argued that the trial court violated his due process rights by relying on an RAI that considers gender in its methodology, thereby introducing an impermissible consideration of gender into his sentencing. After the trial court denied the motion for post-conviction relief, the Wisconsin Court of Appeals certified the case to the Wisconsin Supreme Court.
The Wisconsin Supreme Court ultimately affirmed the lower court’s decision, rejecting Loomis’ arguments on several grounds. Most important here was the Court’s reasoning that the inclusion of gender in the algorithm’s methodology served a nondiscriminatory purpose: accuracy.
< < | Preliminary Measures: | > > | Here, it is clear that the Court has fallen for the software’s “illusion of precision”; the Court in this instance signals that it is willing to allow discrimination against private citizens in the interest of bolstering the use of what it believes to be a more accurate, more precise decisionmaker. Thus, the Wisconsin Supreme Court errs in precisely the way that Zuboff predicts. Avoidable discrimination against people of certain genders, races, and socioeconomic statuses is and will continue to be enforced so long as there remains a fundamental misunderstanding of what AI is, what it entails, and how exactly decision support software like RAIs differs from artificial intelligence. | |
< < | Barring an absolute moratorium on the use of AI tools in the judiciary, which would be difficult to enforce in practice, there are mitigating measures that may be taken to minimize the negative impacts of risk assessment instruments (RAIs) on the sentencing process. For one, regulation could limit what factors go into determining matters like recidivism risk. Currently, tools like COMPAS utilize information relating to a defendant’s identity when calculating risk factors – including race, sex, and age. To avoid integrating the same biases that plague the current sentencing process into RAI algorithms, developers should be explicitly required to exclude these demographics. Further, the companies that develop RAIs should be required to publicize what considerations go into their pre-sentencing reports and risk assessments. The confidential nature of RAIs has already been the subject of legal challenge; in Loomis v. Wisconsin, a defendant raised arguments against the COMPAS software for, inter alia, not reporting what data went into the generation of his risk assessment, making it impossible to challenge the instrument’s accuracy and validity. His point was entirely valid; if pre-sentencing reports are to be made accessible to the parties of a case, why should other investigative tools, like the risk assessment algorithms that help generate such reports, not be made available and open to scrutiny and potential challenge on due process grounds? Lastly, software developers should be required to analyze the algorithmic outputs of the software that they create, and to publish both their process and their results. In order for there to be greater transparency and scrutiny in the judiciary’s use of AI, all stakeholders need to hold equal responsibility, and accountability, for the potential failings and shortcomings of risk assessment tools. Allowing developers to gain financially from the use of their algorithms in the sentencing process without any actual stake in the outcomes disincentivizes them from ensuring that their models are accurate, reliable, and nondiscriminatory. While the ultimate responsibility for case outcomes should lie with the government, any party that has a stake in criminal cases should bear at least some accountability for the execution, or lack thereof, of justice. These solutions are only launching points for a longer conversation about the use of AI in the criminal justice system. There remains a larger discussion about the use of AI by police, as well as the privacy considerations that plague the integration of artificial intelligence into government as a whole. These preliminary regulations would, however, work to address the issue of AI in the judiciary pending more substantive changes. With the acceleration of the AI boom, the unregulated usage of these so-called “risk assessment tools” will only become more of a risk in and of itself.
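The output analysis proposed above need not be elaborate. As a purely illustrative sketch (the records, groups, and labels below are invented; this is neither COMPAS output nor ProPublica’s data or method), even a simple published table of group-wise false positive rates would surface the kind of disparity described earlier:
<verbatim>
# Illustrative audit sketch: false positive rate by group, computed from a
# tool's published outputs. All records are invented for illustration only.
from collections import defaultdict

# (group, labeled_high_risk_by_tool, reoffended_within_follow_up_period)
records = [
    ("Group A", True,  False), ("Group A", True,  True),  ("Group A", False, False),
    ("Group A", True,  False), ("Group A", False, True),  ("Group A", True,  True),
    ("Group B", False, False), ("Group B", True,  True),  ("Group B", False, False),
    ("Group B", False, True),  ("Group B", True,  False), ("Group B", False, False),
]

# False positive rate: share of people who did not reoffend but were
# nevertheless labeled "high risk" by the tool.
tallies = defaultdict(lambda: {"false_positives": 0, "non_reoffenders": 0})
for group, high_risk, reoffended in records:
    if not reoffended:
        tallies[group]["non_reoffenders"] += 1
        if high_risk:
            tallies[group]["false_positives"] += 1

for group, t in sorted(tallies.items()):
    rate = t["false_positives"] / t["non_reoffenders"]
    print(f"{group}: false positive rate {t['false_positives']}/{t['non_reoffenders']} = {rate:.0%}")
</verbatim>
Publishing the process alongside numbers like these is what would allow defendants and courts to scrutinize an instrument rather than take its precision on faith.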
Calling everything done by statistical analysis software "AI" is surely a serious form of category error. We will then have gone from AI meaning "general artificial intelligence,"—which we don't have anywhere and which we don't have any precursor technologies for building, either—to all forms of "decision support software," which indeed by that definition I have been writing since I was fourteen.
The difference between "decision support software" and "decision-making software" is not inherent in the software. That's not an abstruse point: we all understand that human systems can rely on mechanical, mathematical or stochastic tools to make decisions. Ordeals and oracle bones are not "AI." They are, however, equally superstitious invocations of imagined divinity to perform a task which, intensely human as law-doing is, requires the people, the time and the resources necessary to its skilled accomplishment.
None of this is, as I say, difficult to put into a few hundred simple words. That both allows us to skip the "how do we regulate AI" conversation currently convulsing the class of people who believe their fundamental ignorance is a public resource, and concentrates our attention on the question we are really asking: is an illusion of precision created by reliance on "decision support" math causing avoidable social harm? Both with respect to sentencing policy and to the flawed statistical algorithms depriving public school teachers of "merit pay" under their contracts, we can probably identify dozens or hundreds of examples, each involving deprivations of life, liberty and property for large numbers of individuals and households caused by flawed automation of social, judicial and medical decision-making.
These are failures of a form of civil engineering involving our understanding (or misunderstanding) of software as a building material in the infrastructure of digital society. The arguments Zuboff makes in "The Age of Surveillance Capitalism," which I assigned but which you don't discuss, shed some important light on why failures of this sort will increase in frequency and severity in the near future, and why those harms are unequally class-distributed. It helps in thinking about remedies to have a fully-scaled understanding of the problem.
Surveillance capitalism relies on decision support for decision-making because it reduces labor's share of the resulting product at a cost either borne by labor directly or externalized to the benefit of capital. Prioritizing social outcomes over a particular division of profits results in changes in the resource management (privacy maintenance over strip-mining of human consciousness, for example) and decision support strategies of government.
When I brought one PC to Chambers in the SDNY for the first time in 1985 (and then a little tablet-sized keyboard for typing notes and saving text into the very courtroom itself) I greatly increased the productivity of two extremely hard-working law clerks. But you can be quite sure that the Judge saw to it that no machine would ever be making decisions in his court. That was why he also famously refused to read—and ostentatiously returned resealed to the Federal Probation Service in the original envelope—any presentence report that contained a numerical recommendation of any kind. You are, in that sense, pursuing an inquiry into culture rather than technology.
Sources:
Why aren't these links, anchored to the places in the text where the reader would want to reach them in the usual way, with a click, rather than being sequestered here? We are writing for the Web, where hypertext is supposed to make things easy for readers. Let's use it.
Hillman, Noel L. “The Use of Artificial Intelligence in Gauging the Risk of Recidivism.” American Bar Association, 1 Jan. 2019, www.americanbar.org/groups/judicial/publications/judges_journal/2019/winter/the-use-artificial-intelligence-gauging-risk-recidivism
Garrett, Brandon, and John Monahan. “Assessing Risk: The Use of Risk Assessment in Sentencing.” Bolch Judicial Institute at Duke Law, vol. 103, no. 2, 2019.
State v. Loomis, 371 Wis. 2d 235 (2016)
Angwin, Julia, et al. “Machine Bias.” ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
18 USC Sec. 3552(d) | |
|
JasmineBoviaFirstEssay 2 - 12 Nov 2023 - Main.EbenMoglen
|
|
META TOPICPARENT | name="FirstEssay" |
|
JasmineBoviaFirstEssay 1 - 22 Oct 2023 - Main.JasmineBovia
|
|
> > |
META TOPICPARENT | name="FirstEssay" |
|
|