|
META TOPICPARENT | name="SecondEssay" |
|
| If we think about systems predicting court appearance, their use does not always result in incarceration. They can be seen as a tool for courts to bring defendants before them. However, it raises concerns about how judges use such algorithms.
What would you do if you were a United States judge who had to decide bail for a Black man, a first-time offender, accused of a non-violent crime? |
|
< < | An algorithm just told you there is a 100 percent chance he'll re-offend. Would you rely on the answer provided by the algorithm, even if your own judgment led you to a different answer? Is the algorithm's answer more reliable than your opinion? One could argue that the answer provided by the algorithm is based on a 137-question quiz. |
> > | An algorithm just told you there is a 100 percent chance he'll re-offend. Would you rely on the answer provided by the algorithm? Is the algorithm's answer more reliable than your opinion? One could argue that the answer provided by the algorithm is based on a 137-question quiz. |
| The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions: COMPAS.
The problem with relying on such algorithms is their opacity. |
|
< < | How could a judge rely on such a tool when he does not even know how it works? Concerns have been raised about the nature of the inputs and how the algorithm weights them. We all know that human decisions are not always good, let alone perfect. They depend on the person deciding: his opinions, ideas, and background. But at least it is a human who decides, individualizing each case and each person, whereas an algorithm is incapable of doing such a thing. |
> > | Concerns have been raised about the nature of the inputs and how the algorithm weights them. We all know that human decisions are not always good, let alone perfect. They depend on the person deciding, but at least he individualizes each case and each person, whereas an algorithm is incapable of doing such a thing. |
|
|
| Towards a computerized justice: what improvements can we make?
|
|
< < | Let’s focus on another algorithm, created by researchers from the universities of Sheffield and Pennsylvania, which is capable of predicting court decisions from the parties’ arguments and the relevant positive law. The results of this algorithm were published in October 2016, and they were outstanding: the algorithm reached the same decision as the human judges in eight cases out of ten.
The 21 percent margin of error does not necessarily mean the robot judge was wrong, but rather that it stated the law differently than a human judge would have. |
| |
|
Such results could encourage us to think that the algorithm-judge is the future of predicting judicial decisions.
If we could create perfectly reliable algorithms, they would enable us to obtain decisions that conform most closely to the positive law. They would be a guarantee of legal certainty and would help standardize judicial decisions. A computerized justice could help judges focus only on complicated cases, and would help unclog the courts and make justice faster. Yet, for all those possible advantages, can justice reasonably be reduced to an implacable mathematical logic? I don’t think so: it depends on the facts, on the defendants…
We can’t rely on an algorithm, and we should not. Algorithms will never be able to replace humans, because they lack the human ability to individualize: they cannot see each person as an individual.
If we think about risk assessment algorithms in sentencing, they are also dangerous for due process.
The Loomis case challenged the use of COMPAS as a violation of the defendant’s due process rights. In this case, the defendant argued that the risk assessment score violated his constitutional right to a fair trial: it violated his right to be sentenced based on accurate information, and also his right to an individualized sentence. The court held that Loomis’s challenge did not clear the constitutional hurdles.
When an algorithm is used to produce a risk assessment, the due process question should be whether and how it was used. The Wisconsin Supreme Court stated that the sentencing court “would have imposed the exact same sentence without it.” |
> > | Algorithms are mostly used in two ways: to estimate a defendant’s flight risk, and to assess his or her threat to public safety. We can disprove their utility because algorithms are biased. |
| |
|
< < | This raises concerns: if the use of a risk assessment tool at sentencing is only appropriate when the same sentencing decision would be reached without it, then the risk assessment plays absolutely no role in probation or sentencing decisions. |
> > | One way to try to fix algorithms is an administrative solution that includes regulatory oversight. It was implemented in 2018 in New York, with the first algorithmic accountability law in the nation.
The goal of the law is fairness, accountability, and transparency. To that end, the law seeks to create a group of experts who will identify automated decision systems’ disproportionate impacts.
The law will allow anyone affected by an automated decision to request an explanation for the decision, and will require a path of redress for anyone harmed by one. However, there is an issue with this law: compliance is not required if it would result in the disclosure of proprietary information.
One of the biggest problems with algorithms is that they are based on math, not justice. If we want an algorithm to predict something, we have to represent concepts as numbers for it to process. Issues arise because concepts like fairness and justice cannot be represented mathematically: they constantly evolve and are subject to debate and public opinion. There is no single metric for fairness. We can try to change the way we design algorithms, for example by using gender-specific risk assessments, but disparities in the treatment of different groups will always exist.
When designing algorithms, it should be decided what kind of fairness is important to prioritize, what threshold of risk should be used for release, and what kind of risk makes sense to measure in this context.
Finally, the main issue remains the lack of transparency. Private providers of algorithms have asserted trade secret protections in criminal cases to protect their intellectual property. As a result, defendants are denied the right to examine the data and source code being used to keep them incarcerated. Without real access to the inside of these algorithms, we can say what is wrong, but it is harder to say how things should be changed, since we do not clearly know how those programs run. |
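The claim that fairness has no single metric can be made concrete with a small sketch. The defendants, scores, and threshold below are entirely hypothetical (they are not COMPAS data): under one release threshold, a risk score can flag defendants by the same rule for both groups, yet still produce very different false positive rates, so the designer must choose which disparity to accept.

```python
# Hypothetical sketch: one detention threshold, two groups, unequal error rates.
# Each defendant is (risk score 0-10, actually reoffended?). All numbers invented.

def false_positive_rate(scores, reoffended, threshold):
    """Share of defendants who did NOT reoffend but were flagged 'high risk'."""
    flagged = [s >= threshold for s, r in zip(scores, reoffended) if not r]
    return sum(flagged) / len(flagged)

group_a = [(9, True), (8, True), (7, False), (6, False), (3, False)]
group_b = [(9, True), (5, False), (4, False), (2, False), (1, False)]

threshold = 6  # under this hypothetical policy, scores >= 6 mean detention

fpr_a = false_positive_rate(*zip(*group_a), threshold)  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(*zip(*group_b), threshold)  # 0 of 4 non-reoffenders flagged
```

The same rule (`score >= 6`) is applied to everyone, yet group A's harmless defendants are wrongly detained far more often than group B's; equalizing the false positive rates would require different thresholds per group, which raises its own fairness objection. That trade-off is a policy choice, not a mathematical one.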
| |
|
< < | |
| |
|
< < |
You might want to go back and look at Jerome Frank's 1949 argument in Courts on Trial about why computational judging is impossible. He was right then and nothing has changed, even though there were only a couple of digital computers in the world and no software to speak of when he wrote. But as I said, disproving the utility of algorithms for judging is trivial anyway: that's not what we need computer programs for in a justice system. So it would be better either to figure out what computational improvements we can make in the justice system and whether they are worth it, or look for other sources of injustice in the justice system, out of which we are unlikely to run, because—as you say—justice is a human project and we are very fallible.
|
| |