A pair of papers crossed my desk recently, both circling the same topic from slightly different angles, and together they reflect a palpable increase in scholarly attention to the intersections of AI, machine learning, algorithms, and legal doctrine. As both papers note, "predictive algorithms are increasingly being deployed in a variety of settings to determine legal status." To date, for example, "there are approximately 60 risk assessment tools deployed in the criminal justice system. These tools aim to differentiate between low-, medium-, and high-risk defendants and to increase the likelihood that only those who pose a risk to public safety or are likely to flee are detained."
"Proponents of actuarial tools claim that these tools meant to eliminate human biases and to rationalize the decision-making process by summarizing all relevant information in a more efficient way than the human brain. Opponents of such tools fear that in the name of science, actuarial tools reinforce human biases, harm defendants’ rights and increase racial disparities in the system. The gap between the two camps has widened in the last few years."
One paper, Dan Burk's (UC Irvine) Algorithmic Legal Metrics, seeks to "link the sociological and legal analysis of AI, highlighting the reflexive social processes that are engaged by algorithmic metrics. This paper examines these overlooked social effects of predictive legal algorithms, and contributes to the literature a vital but missing critique of such analytics. First, the paper shows how the problematic social effects of algorithmic legal metrics extend far beyond the concerns about accuracy that have thus far dominated critiques of such metrics. Second, it demonstrates that corrective governance mechanisms such as enhanced due process or transparency will be inadequate to remedy such corrosive effects, and that some such remedies, such as transparency, may actually exacerbate the worst effects of algorithmic governmentality. Third, the paper shows that the application of algorithmic metrics to legal decisions aggravates the latent tensions between equity and autonomy in liberal institutions, undermining democratic values in a manner and on a scale not previously experienced by human societies. Illuminating these effects casts new light on the inherent social costs of AI metrics, particularly the perverse effects of deploying algorithms in legal systems."
Another paper, Doaa Abu Elyounes's (Harvard, Berkman Klein Center) Bail or Jail? Judicial Versus Algorithmic Decision-Making in the Pretrial System, delves a bit more concretely into the criminal law realm. The paper "examines the role that technology plays in this debate, and whether deploying AI in existing risk assessment tools realizes the fears hyped in the media or improves our criminal justice system. It focuses on the pretrial stage and examines in depth the seven most commonly used tools. Five of these tools are based on traditional regression analysis, and two have a certain machine-learning component. The paper concludes that classifying pretrial risk assessment tools as AI-based tools creates the impression that sophisticated robots are taking over the courts and pushing judges from their jobs, but this is far from reality. Despite the hype, there are more similarities than differences between tools based on traditional regression analysis and tools based on machine learning. Robots have a long way to go before they can replace judges, and this is not the solution that this paper is arguing for."
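To see why a regression-based risk tool looks far less like a "robot judge" than the hype suggests, consider a minimal sketch of the score-and-threshold mechanics such tools typically share. Everything here is an illustrative assumption: the feature names, weights, and tier cutoffs are invented placeholders, not values drawn from either paper or from any deployed pretrial instrument.

```python
# A toy logistic-regression risk score bucketed into the low/medium/high
# tiers described above. All weights and cutoffs are hypothetical,
# chosen only to illustrate the basic mechanism.
import math

# Hypothetical weights for a few commonly cited pretrial factors.
WEIGHTS = {"prior_fta": 0.9, "prior_convictions": 0.5, "age_under_23": 0.4}
INTERCEPT = -2.0


def risk_probability(defendant: dict) -> float:
    """Logistic regression: p = 1 / (1 + e^-(b0 + sum(b_i * x_i)))."""
    z = INTERCEPT + sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


def risk_tier(p: float) -> str:
    """Map a probability onto low/medium/high bands (cutoffs are arbitrary)."""
    if p < 0.3:
        return "low"
    if p < 0.6:
        return "medium"
    return "high"


if __name__ == "__main__":
    example = {"prior_fta": 1, "prior_convictions": 2, "age_under_23": 0}
    p = risk_probability(example)
    print(f"score={p:.2f}, tier={risk_tier(p)}")  # e.g. score=0.48, tier=medium
```

A machine-learning variant would fit the weights differently, or swap in a tree ensemble, but the output is the same kind of score bucketed into the same tiers, which is the paper's point: the "AI" label overstates the difference between the two families of tools.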