JasmineBoviaFirstEssay 4 - 21 Jan 2024 - Main.EbenMoglen
The Problem with RAIs

Here, it is clear that the Court has fallen for the software’s “illusion of precision”: it signals that it is willing to allow discrimination against private citizens in the interest of bolstering the use of what it believes to be a more accurate, more precise decisionmaker. The Wisconsin Supreme Court thus errs in precisely the way that Zuboff predicts. Avoidable discrimination against people of certain genders, races, and socioeconomic statuses is, and will continue to be, enforced so long as there remains a fundamental misunderstanding of what AI is, what it entails, and how exactly decision-support software like RAIs differs from artificial intelligence.
I think a better draft would be more than 640 words long. Almost a quarter of the draft is spent laying out the procedural history of a case you want to discuss, but whose specific holding you compress into half a sentence, and whose reasoning, of which you are dismissive, you do not analyze or quote. Zuboff, too, is mentioned, but not actually cited, quoted or discussed. Any statistical argument about the real world can be harmed by excess precision, and "computer says no" reasoning has been the hallmark of bureaucracy (with or without computers) since Mesopotamia, at least. And to say that anyone in particular "predicts" that the incidence of power will tend to disfavor those who are already disfavored (Xs in a world of Ys, whatever X and Y may be) is rather like awarding credit for predicting sunrise.
Clearer argument would therefore help, too. Can we create systems for assessing the risk of future criminal behavior, or is such an effort inherently impossible, such that even if the law requires our public agencies to build such systems, they should refuse? If they are possible, among the flaws from which they can be expected to suffer are inaccuracy and bias. Both inaccuracy (forecasting, for example, a 50% chance of rain half the time when it in fact rains on only 10% of days) and bias (forecasting rain twice as often on Mondays as on Fridays) can exist without making weather forecasts a bad idea. We can also suppose that there are situations in which measures to improve accuracy will also increase bias; in more general terms, improvement doesn't always mean achievement of the optimal. So some principles should emerge from the inquiry. If "due process" and "equal protection" are the labels we apply to those principles, what should be in each jar?
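The weather analogy can be made concrete with a little arithmetic. Below is a minimal Python sketch, purely illustrative: the rates, the forecaster, and the simulation are hypothetical assumptions, not drawn from the essay or from any real RAI. It measures both defects from simulated data: a forecaster whose "50% chance of rain" days see rain only about 10% of the time (inaccuracy), and one that issues rain forecasts twice as often on Mondays as on Fridays although the true rain rate is identical (bias).

<verbatim>
import random

random.seed(0)

WEEKS = 5_000
TRUE_RAIN_RATE = 0.10   # rain actually falls on 10% of days, every day alike

# Inaccuracy (miscalibration): on half of all days this forecaster announces
# a 50% chance of rain. Among exactly those days, the observed rain frequency
# is still roughly the 10% base rate, not 50%.
days = 7 * WEEKS
said_fifty = [random.random() < 0.5 for _ in range(days)]
rained = [random.random() < TRUE_RAIN_RATE for _ in range(days)]
on_flagged_days = [r for s, r in zip(said_fifty, rained) if s]
observed = 100 * sum(on_flagged_days) / len(on_flagged_days)
print(f"claimed 50% chance of rain; it rained on {observed:.0f}% of those days")

# Bias: rain is forecast twice as often on Mondays (20% of them) as on
# Fridays (10%), although the true rain rate is identical on both days.
monday_forecasts = sum(random.random() < 0.20 for _ in range(WEEKS))
friday_forecasts = sum(random.random() < 0.10 for _ in range(WEEKS))
print(f"share of Mondays with a rain forecast: {monday_forecasts / WEEKS:.2f}")
print(f"share of Fridays with a rain forecast: {friday_forecasts / WEEKS:.2f}")
</verbatim>

The point of the sketch is that both defects are measurable, and neither, by itself, settles whether the forecasting should be done at all; that question is exactly the one the inquiry into principles has to answer.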