Previewing and Reviewing Pretrial Risk Assessment RCTs

On Tuesday, Jan. 16, pretrial staff in Polk County, Iowa, entered their offices with a slightly different charge. They had been accustomed to perusing a list of arrestees scheduled for first appearance and searching for individuals who qualified for an interview and pre-disposition release. That morning, some staff members continued this time- and resource-intensive practice. Others reviewed administrative records and entered nine risk factors into a new software system that calculates PSA risk scores (hopefully familiar to readers of this blog). Polk County is the first jurisdiction in Iowa to implement the PSA. Three more counties will join it in the coming months as pilot sites, and eventually the entire state will adopt the tool.

As the A2J Lab looks ahead to launching its second RCT evaluation of the PSA, we came across a study of its progenitor, the Virginia Pretrial Risk Assessment Instrument (“VPRAI”). When the VPRAI arrived in courtrooms around the state, there was no way to convert risk predictions into actionable release recommendations. (That fact stands in stark contrast to the Decision-Making Framework accompanying the PSA.) The solution was the Praxis, “a decision grid that uses the VPRAI risk level and the charge category to determine the appropriate release type and level of supervision.” Virginia pretrial staff also embraced the so-called Strategies for Effective Pretrial Supervision (“STEPS”) program to “shift the focus . . . from conditions compliance to criminogenic needs and eliciting prosocial behavior.” The combination of these innovations, it seemed, would improve Virginia’s ability to pinpoint risk and reduce failure rates during the pre-disposition period.
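To make the Praxis concrete, here is a minimal sketch of what such a decision grid might look like as a lookup table, written in Python; the risk levels, charge categories, and recommendations are invented for illustration and do not reproduce Virginia’s actual matrix:

```python
# Illustrative Praxis-style decision grid (entries are invented):
# (risk level, charge category) -> release type and supervision level.
praxis = {
    ("low",      "misdemeanor"): "release on recognizance",
    ("low",      "felony"):      "release, basic supervision",
    ("moderate", "misdemeanor"): "release, basic supervision",
    ("moderate", "felony"):      "release, enhanced supervision",
    ("high",     "misdemeanor"): "release, enhanced supervision",
    ("high",     "felony"):      "detain pending further review",
}

def recommend(risk_level: str, charge_category: str) -> str:
    """Look up the grid's recommendation for one defendant."""
    return praxis[(risk_level, charge_category)]

print(recommend("moderate", "felony"))  # -> release, enhanced supervision
```

The point of such a grid is that the same two inputs always yield the same recommendation, which staff can then accept or override in light of the full record.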

Marie VanNostrand of Luminosity and two co-authors set out, first, to understand the VPRAI’s predictive value and, second, to assess the benefits of the Praxis and the STEPS program through a randomized study design. Unlike the A2J Lab’s field experiments, which usually take individuals as the units of randomization, the Virginia study randomized entire pretrial services offices to one of four conditions crossing the two innovations: (1) VPRAI only; (2) VPRAI + Praxis; (3) VPRAI + STEPS; and (4) VPRAI + Praxis + STEPS. The authors then used this exogenous (nerd-speak for “completely external”) source of variation to analyze staff, judicial, and defendant responses.
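For readers curious about the mechanics, here is a minimal sketch of office-level (cluster) randomization into those four arms, in Python; the office names, the number of offices, and the seed are hypothetical, and the study’s actual assignment procedure may well have differed:

```python
import random

# Hypothetical roster of pretrial services offices (the cluster-level units).
offices = ["Office A", "Office B", "Office C", "Office D",
           "Office E", "Office F", "Office G", "Office H"]

# The four arms cross the two innovations in a 2x2 design.
arms = ["VPRAI only", "VPRAI + Praxis", "VPRAI + STEPS",
        "VPRAI + Praxis + STEPS"]

random.seed(2018)        # a fixed seed keeps the assignment reproducible
random.shuffle(offices)  # shuffle, then deal offices out evenly across arms

assignment = {office: arms[i % len(arms)] for i, office in enumerate(offices)}

for office in sorted(assignment):
    print(f"{office}: {assignment[office]}")
```

Randomizing at the office level guards against contamination (staff in one office sharing tools across conditions), at the cost of some statistical power relative to individual-level randomization.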

The results were quite favorable for the introduction of the Praxis as well as for the VPRAI itself. One estimate suggested that higher VPRAI risk scores correlate strongly with higher actual risk: about two-thirds of the time, if one were to pick two defendants at random, one who failed and one who didn’t, the one who failed would have the higher VPRAI score. Pretrial services staff who had access to the Praxis also responded to its recommendations. Their concurrence (agreement) rate was 80%, and they were over twice as likely to recommend release relative to staff who did not have the decision grid. The availability of the Praxis was also associated with a doubling of the likelihood that judges would release defendants before disposition.
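That “two defendants at random” framing is the standard interpretation of the area under the ROC curve (AUC), sometimes called the concordance statistic. Here is a minimal Python sketch of the computation on invented scores and outcomes, not the study’s data:

```python
from itertools import product

# Invented VPRAI-style risk scores and outcomes (1 = failed pretrial).
scores = [1, 2, 4, 3, 2, 3, 3, 5]
failed = [0, 0, 0, 0, 1, 1, 1, 1]

failer_scores  = [s for s, f in zip(scores, failed) if f == 1]
success_scores = [s for s, f in zip(scores, failed) if f == 0]

# Compare every (failer, non-failer) pair; ties count as half a "win."
wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
           for a, b in product(failer_scores, success_scores))

auc = wins / (len(failer_scores) * len(success_scores))
print(f"AUC (concordance) = {auc:.2f}")  # 0.66 here: "about two-thirds"
```

An AUC of 0.5 means the score does no better than a coin flip; 1.0 means it perfectly separates failures from successes.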

What about defendant outcomes? The authors found that the availability of the Praxis was associated with a lower likelihood of failing to appear or being arrested for a new crime. STEPS alone had no discernible effect.

The VPRAI study suggests a few lessons for our ongoing pretrial risk assessment work, including in Iowa. First, we continue to emphasize that the tool under investigation, the PSA, is far from the cold, lawless automaton that many commentators seem to fear. Yes, algorithms produce scores, and decision matrices generate recommendations. But human beings must still weigh that evidence alongside their own judgment. One hope is that such evidence will enhance the quality of judges’ decision-making. For now, we just don’t know; that’s the reason for our PSA RCTs. Relatedly, we think that final verdicts on actuarial risk assessments should await reports like the VPRAI study and the A2J Lab’s growing portfolio of evaluations. There will always be local policy issues deserving of debate and attention. However, we need strong evidence for or against these tools’ value before praising or condemning them wholesale. Finally, we should, as always, evaluate this brave new world reliably. That means deploying, where possible, principles of experimental design. RCTs, simply put, represent our best shot at understanding causal relationships.

Stay tuned for more updates from Iowa and beyond!

Guardianship Service of Process

The Problem

Service of process can be a very complicated step in obtaining legal guardianship. The phrase “service of process” alone is a confusing one.

Petitioners, most of whom are not lawyers, have to: (1) identify “interested parties,” many of whom are not obvious candidates; (2) determine the proper method of service; (3) effectuate service; and (4) return proof of service to the Probate and Family Court. Completing each step exactly as prescribed matters as much as the steps themselves. Service isn’t just a legal formality; it’s a crucial part of the petition. If interested parties, i.e., those who might want to contest the petition, aren’t notified, due process concerns arise. Failing to serve within the prescribed timeline will stall the petition.

People who have gone through the process have described the paperwork as overly complicated, repetitive, and time-consuming. Worse, a significant number of petitioners fail to reach a judge at all—not because of the substance or validity of their case, but because they have failed to overcome the procedural hurdles standing in the way of having a case heard on the merits.

Current Solutions

Courts and legal aid organizations provide individual assistance explaining court procedures. Courts have, for example, made attempts at drafting checklists or other instructions about the process. Many legal service providers develop their own self-help materials or employ different techniques to get litigants to remember at least some of these very complicated steps. Some tell litigants to come back once they receive a new piece of mail from the court, so that the next step can be explained to them in a way that is more concrete and obvious. Repeat visits, in-person explanations, and drafting instructions all take significant time and energy that attorneys could otherwise spend assisting more court users. Are these solutions having any effect on litigants’ ability to navigate the court procedure and get their first hearing in front of a judge?

The Study

In partnership with the Boston Court Service Center and the Volunteer Lawyers Project of the Boston Bar Association, the Lab’s Guardianship Service of Process study evaluates whether self-help materials can make a difference for court users navigating the complex web of court procedures to initiate a guardianship case.

The Guardianship Service of Process Study, which launched in early September 2017, tests Lab-designed self-help materials. Participants receive printed materials (developed in large part at our first hackathon) on a randomized basis, for both adult and minor guardianship cases and in both English and Spanish. In addition, minor guardianship petitioners randomized to receive the hard-copy booklets also gain access to an online tool developed by Bill Palin, the Access to Justice/Technology Fellow with Harvard Law School’s clinical programs. That site walks users through their case and provides personalized instructions, using new guided-interview software similar to TurboTax. The RCT will compare rates of successful service, among other outcomes, between the treatment and control groups.
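As a rough illustration of the comparison at the heart of the RCT, here is a Python sketch of a simple two-proportion z-test on hypothetical service-completion counts; the numbers are invented and are not study results:

```python
from math import sqrt

# Invented counts of successful service of process (not study data).
treat_success, treat_n = 70, 100  # received the self-help materials
ctrl_success, ctrl_n = 55, 100    # navigated the usual process

p_treat = treat_success / treat_n
p_ctrl = ctrl_success / ctrl_n

# Pooled standard error under the null hypothesis of equal rates.
pooled = (treat_success + ctrl_success) / (treat_n + ctrl_n)
se = sqrt(pooled * (1 - pooled) * (1 / treat_n + 1 / ctrl_n))
z = (p_treat - p_ctrl) / se

print(f"treatment rate = {p_treat:.2f}, control rate = {p_ctrl:.2f}")
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference at the 5% level
```

The actual analysis will of course be richer, covering the multiple outcomes the post mentions, but the core logic is a treatment-control comparison of rates.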

If self-help packets or a new tech tool can help people file for guardianship and then correctly complete service of process, legal services providers will know what types of materials to invest in and how best to allocate their limited resources. And if the self-help materials aren’t at all effective, perhaps we can learn something about the procedural hurdles themselves and gain a better understanding of how they may need to change.


The Research Team

Jim Greiner, Faculty Director, The Access to Justice Lab; Professor of Law at Harvard Law School

Chris Griffin, Research Director, The Access to Justice Lab

Erika Rickard, Associate Director of Field Research, The Access to Justice Lab

With thanks for intervention design to our affiliates,

Bill Palin, Developing Justice, Harvard Law School

Hallie Jay Pope, Graphic Advocacy Project

Grading School Voucher Programs

Why RCTs?: School Choice

Everyone seems to have an opinion on school choice. Those favoring, and those trying to forestall, the dismantling of residence-based school assignment have fought loud, hard battles in the states. Interestingly, these battles haven’t necessarily pitted political partisans against each other. The “choice” bloc recently saw a vocal spokesperson, Education Secretary Betsy DeVos, rise to prominence. She has advocated passionately for implementing more voucher systems and for giving parents and students a greater perceived opportunity to succeed where the current public school system, some claim, clearly cannot.

A2J Lab “Behind the Experiment”: Dane County Part III

Today we present a final look at key field partners who have helped make the Dane County PSA RCT one of the A2J Lab’s signature series. Check out Part I and Part II as well!

In this installment, we are fortunate to share reflections from three officers of the court: Judge Juan Colás, Commissioner Jason Hanson, and Dane County District Attorney Ismael Ozanne.

A2J Lab “Behind the Experiment”: Dane County Part I

Last week, we marked the launch of our PSA RCT in Dane County, Wisconsin. Starting today, I will be pulling back the field experiment curtain, as it were, and introducing some of the A2J Lab’s field partners. These Dane County employees have worked tirelessly for almost two years to make the PSA’s implementation and our concurrent evaluation possible.

Ready, Set, Launch

Research Director Chris Griffin blogs from Wisconsin:

The day has finally arrived!

At this afternoon’s initial appearance court in Dane County, WI, the A2J Lab begins its evaluation of the Public Safety Assessment (“PSA”). Criminal process in this jurisdiction now includes additional, scientifically based information, in a randomly selected subset of cases, to inform pre-disposition release decisions. The judicial official, known here as a Commissioner, receives risk scores generated by the PSA from static criminal history inputs, along with a release recommendation, to consider in reaching those decisions. Check out this video starring Lab affiliate Heidi Liu and yours truly to learn more about the science behind this RCT.

On Your Mark, Get Set, Triage

Part 2 of “To Triage or Not to Triage? That is NOT the Question.”

Last week I took another dive into the world of triage, focusing on some common questions and sticking points raised in the Radiolab podcast episode entitled “Playing God.” As mentioned in the previous blog post, we don’t think triage is really about playing god; rather, it is about facing limited resources and making decisions. Last week we talked mostly about the value implications of deciding who lives and who dies. This week we’ll touch on two other points: first, the instinct not to want to make triage decisions at all; and second, the multitude of ways to triage, and therefore the importance of RCTs in knowing which way is best in a given situation.

“To Triage or Not to Triage?” That is NOT the Question

Part 1

The Radiolab podcast from WNYC Studios is as close to appointment listening as we have in 2016. One of the show’s recent episodes, entitled “Playing God,” takes up a topic directly in the A2J Lab’s wheelhouse: triage. In a stark bit of commentary, the host characterizes the practice not as deciding how to allocate scarce resources but as an “inhuman act which humans are trying to do.”

Fear and Loathing over Risk Assessments Part 2

How Should We Think about Racial Disparities?

In a previous post, I considered some of the less convincing critiques of pretrial and sentencing risk assessments, those that sound in the ecological fallacy. The fallacy argument mistakenly casts risk scores as mere group-level inferences applied to individual case decision-making. The takeaway was straightforward: a comprehensive understanding of actuarial tools must include rigorous counterfactual thinking about a state of the world in which they aren’t available. In this follow-up, I discuss an even more serious claim: that actuarial tools might lead to unjustifiable racial disparities in criminal justice outcomes.