Law, AI, and Justice

Yesterday, Research Director Chris Griffin spoke with three other Harvard scholars as part of a HUBweek 2017 panel sponsored by the Berkman Klein Center: Programming the Future of AI: Ethics, Governance, and Justice. The four debated the promises and perils of using computer models and algorithms to guide legal decision-making. The Boston Globe's article about the event captures the core questions succinctly: “Should sophisticated computer models help judges predict which defendants are safe enough to release before trial? Or should judges rely on their own wisdom, discretion, and experience to make those decisions?”

Such questions are at the heart of the Lab’s work testing the Public Safety Assessment (“PSA”) in Dane County, Wisconsin (and potentially more sites). The PSA is an actuarial risk assessment that applies an algorithm to static criminal history factors and recommends whether, and under what conditions, someone should be released prior to disposition. While not an example of artificial intelligence (itself a topic for separate debate!), the PSA does raise similar questions about how algorithmic models should influence human decisions in law and whether those influences can yield more just outcomes. We hope our study will help provide some answers.
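As a rough illustration of how an actuarial tool of this general kind operates, a point-based assessment can be sketched as follows. The factor names, weights, and thresholds below are invented for illustration only; they are not the PSA's actual factors or scoring rules.

```python
# Hypothetical sketch of a point-based actuarial assessment.
# All factor names, weights, and cutoffs are invented examples,
# NOT the PSA's actual scoring logic.

def risk_score(history: dict) -> int:
    """Sum points over static criminal-history factors that apply."""
    weights = {
        "prior_failure_to_appear": 2,
        "prior_violent_conviction": 3,
        "pending_charge_at_arrest": 1,
    }
    return sum(w for factor, w in weights.items() if history.get(factor))

def recommendation(history: dict) -> str:
    """Map a total score to a release recommendation tier."""
    score = risk_score(history)
    if score == 0:
        return "release"
    elif score <= 3:
        return "release with conditions"
    return "detain pending hearing"

# Example: one prior failure to appear yields a conditional-release tier.
print(recommendation({"prior_failure_to_appear": True}))
```

The key property such tools share is that the inputs are static and the mapping from inputs to recommendation is fixed in advance, which is what makes their influence on judicial discretion both auditable and contestable.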

The sold-out event lasted about one hour; if you missed it live, you can catch the full conversation here!
