On Tuesday, Jan. 16, pretrial staff in Polk County, Iowa, entered their offices with a slightly different charge. They had been accustomed to perusing a list of arrestees scheduled for first appearance and searching for individuals who qualified for an interview and pre-disposition release. That morning, some staff members continued this time- and resource-intensive practice. Others reviewed administrative records and entered nine risk factors into a new software system that calculates PSA risk scores (hopefully familiar to readers of this blog). Polk County is the first jurisdiction in Iowa to implement the PSA. Three more counties will join it in the coming months as pilot sites, and eventually the entire state will adopt the tool.
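For a concrete flavor of what that data entry produces, here is a minimal sketch of actuarial scoring: nine administrative-record factors, each carrying a point weight, summed into a raw score. The factor names below are simplified stand-ins and the weights are hypothetical placeholders; the PSA's actual factors, weights, and scale conversions are published by its developers and differ from this illustration.

```python
# Illustrative only: nine simplified risk factors with hypothetical
# point weights. These are NOT the real PSA weights.
HYPOTHETICAL_WEIGHTS = {
    "age_under_23_at_arrest": 2,
    "current_violent_offense": 2,
    "pending_charge_at_arrest": 1,
    "prior_misdemeanor_conviction": 1,
    "prior_felony_conviction": 1,
    "prior_violent_conviction": 2,
    "prior_fta_past_two_years": 2,
    "prior_fta_older_than_two_years": 1,
    "prior_sentence_to_incarceration": 1,
}

def raw_score(factors: dict) -> int:
    """Sum the weights of every factor flagged True for a defendant."""
    return sum(weight for name, weight in HYPOTHETICAL_WEIGHTS.items()
               if factors.get(name, False))

# A hypothetical defendant pulled from administrative records.
defendant = {
    "pending_charge_at_arrest": True,
    "prior_misdemeanor_conviction": True,
    "prior_fta_past_two_years": True,
}
print(raw_score(defendant))  # 1 + 1 + 2 = 4
```

The real tool then maps raw totals like this onto bounded risk scales, but the core arithmetic is no more exotic than a weighted checklist.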
As the A2J Lab looks ahead to launching its second RCT evaluation of the PSA, we came across a study of its progenitor, the Virginia Pretrial Risk Assessment Instrument (“VPRAI”). When the VPRAI arrived in courtrooms around the state, there was no way to convert risk predictions into actionable release recommendations. (That fact stands in stark contrast to the Decision-Making Framework accompanying the PSA.) The solution was the Praxis, “a decision grid that uses the VPRAI risk level and the charge category to determine the appropriate release type and level of supervision.” Virginia pretrial staff also embraced the so-called Strategies for Effective Pretrial Supervision (“STEPS”) program to “shift the focus . . . from conditions compliance to criminogenic needs and eliciting prosocial behavior.” The combination of these innovations, it seemed, would improve Virginia’s ability to pinpoint risk and reduce failure rates during the pre-disposition period.
Marie VanNostrand of Luminosity and two co-authors were interested in understanding, first, the VPRAI’s predictive value. Second, they assessed the benefits of the Praxis and the STEPS program through a randomized study design. Unlike the A2J Lab’s field experiments, which usually take individuals as the units of randomization, the Virginia study randomized entire pretrial services offices to one of four conditions: (1) VPRAI only; (2) VPRAI + Praxis; (3) VPRAI + STEPS; and (4) VPRAI + Praxis + STEPS (in effect, a 2×2 crossing of the two innovations). The authors then used this exogenous (nerd-speak for “completely external”) source of variation to analyze staff, judicial, and defendant responses.
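For readers who like to see designs in code, here is a minimal sketch of cluster randomization: whole offices, rather than individual defendants, get shuffled and allocated across the four arms. The office names are hypothetical, and this is not the study authors' actual assignment procedure, just the basic idea.

```python
import random

# Hypothetical office names; the real study randomized Virginia
# pretrial services offices.
offices = ["Office A", "Office B", "Office C", "Office D",
           "Office E", "Office F", "Office G", "Office H"]

# The four study arms: a 2x2 crossing of Praxis and STEPS,
# with the VPRAI available everywhere.
conditions = [
    "VPRAI only",
    "VPRAI + Praxis",
    "VPRAI + STEPS",
    "VPRAI + Praxis + STEPS",
]

random.seed(42)  # fixed seed so the illustration is reproducible
shuffled = random.sample(offices, k=len(offices))

# Cycle through the arms so each gets a roughly equal share of offices.
# Every defendant processed by a given office inherits that office's
# condition, which is what makes this a cluster randomization.
assignment = {office: conditions[i % len(conditions)]
              for i, office in enumerate(shuffled)}

for office, arm in sorted(assignment.items()):
    print(f"{office}: {arm}")
```

Randomizing at the office level sacrifices some statistical power relative to individual-level assignment, but it avoids contaminating the comparison when staff in the same office would inevitably share tools and habits.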
The results were quite favorable for the introduction of the Praxis as well as for the VPRAI itself. One estimate suggested that higher VPRAI risk scores correlate strongly with higher actual risk: about two-thirds of the time, if one were to pick two defendants at random, one who failed and one who did not, the one who failed would have the higher VPRAI score. (Readers versed in statistics will recognize this as a concordance probability, or AUC, of roughly 0.67.) Pretrial services staff who had access to the Praxis also responded to its recommendations. Their concurrence (agreement) rate was 80%, and they were more than twice as likely to recommend release as staff who did not have the decision grid. The availability of the Praxis was likewise associated with a doubling of the likelihood that judges would release defendants before disposition.
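That two-out-of-three figure is just the pairwise way of describing the area under the ROC curve. A short sketch with made-up scores and outcomes, chosen so the answer lands near the study's figure, computes it directly from that definition:

```python
# Illustrative only: made-up VPRAI-style scores (higher = riskier)
# and failure outcomes (1 = failed to appear or was rearrested).
scores = [1, 2, 2, 3, 3, 4, 5]
failed = [0, 0, 1, 0, 1, 0, 1]

# Split the sample by outcome.
failure_scores    = [s for s, f in zip(scores, failed) if f == 1]
nonfailure_scores = [s for s, f in zip(scores, failed) if f == 0]

# Pair every defendant who failed with every defendant who did not,
# and count how often the one who failed has the higher score.
# Ties count as half, per the standard AUC convention.
concordant = sum(
    1.0 if sf > sn else 0.5 if sf == sn else 0.0
    for sf in failure_scores
    for sn in nonfailure_scores
)
auc = concordant / (len(failure_scores) * len(nonfailure_scores))
print(f"Concordance probability (AUC): {auc:.2f}")  # 0.67
```

Real evaluations run the same computation over thousands of defendant records, but the logic is no more complicated than this.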
What about defendant outcomes? The authors found that the availability of the Praxis was associated with a lower likelihood of failing to appear or being arrested for a new crime. STEPS alone had no discernible effect.
The VPRAI study suggests a few lessons for our ongoing pretrial risk assessment work, including in Iowa. First, we continue to emphasize that the tool under investigation, the PSA, is far from the cold, lawless automaton that many commentators seem to fear. Yes, algorithms produce scores, and decision matrices generate recommendations. But human beings must still weigh that evidence alongside their own judgment. One hope is that such evidence will enhance the quality of judges’ decision-making. For now, we simply don’t know; that’s the reason for our PSA RCTs. Relatedly, we think that final verdicts on actuarial risk assessments should await reports like the VPRAI study and the A2J Lab’s growing portfolio of evaluations. There will always be local policy issues deserving of debate and attention, but we need strong evidence for or against these tools’ value before praising or condemning them wholesale. Finally, we should, as always, evaluate this brave new world reliably. That means deploying, where possible, principles of experimental design. RCTs, simply put, represent our best shot at understanding causal relationships.
Stay tuned for more updates from Iowa and beyond!