
Chris Griffin featured on KPCC’s Air Talk

Earlier this week, A2J Lab Research Director Chris Griffin joined L.A. NPR affiliate KPCC for their Air Talk program. Listen to the audio online to learn more about pretrial risk assessment scores and the Lab’s work evaluating the Public Safety Assessment.

The thirteen-minute segment is a great window into the wider conversation about the use of actuarial tools in legal decision-making. Chris participated in another discussion about the use of the PSA earlier this fall.

Do you have thoughts on the topic? Join the conversation by sharing your ideas in the comments section.

Spotlight: Drug Courts

According to the National Institute of Justice, there are more than 3,000 drug courts across the United States.[1] As designed, these courts aim to rehabilitate drug-addicted offenders, reduce recidivism, and potentially reduce costs to the system through lower incarceration rates. Yet only a handful of studies have implemented a randomization design to rigorously evaluate the effectiveness of such programs. Given that the results are not yet entirely clear due to variability in program design and research methodologies, the A2J Lab thinks this would be a great area of continued, rigorous research.

First, what exactly is a drug court? Drug courts are specialized treatment courts that offer monitored treatment, drug testing, and other services to offenders, with intensive court supervision as a key component. The treatment programs usually involve counseling, therapy, and education, along with drug tests and other services that target vocational, educational, family, health, and other needs. Meanwhile, intensive court supervision encourages compliance through status hearings and a system of rewards and sanctions. The intense relationship with the judge is also believed to increase offenders’ perceptions of procedural justice, providing an additional deterrent to crime through trust and confidence in the judicial system.

To date, four randomized studies of drug courts have taken place (see this chart for details), and together they suggest that there may be some positive effects of drug court participation on drug use and recidivism, although it is quite unclear how long these effects last after offenders complete the programs. Additionally, variation in the programs themselves, as well as in the outcomes collected, makes it hard to know what to make of the findings. Notably, the studies vary in the timing of the intervention (pretrial, post-adjudication, post-conviction), participant eligibility, comparison groups, program length, timing of outcome collection, and the definition of recidivism.

Broadly speaking, the studies suggest:

  • An Arizona study of a post-adjudication drug court involving probationers with a first-time felony conviction for drug possession found lower re-arrest rates and fewer technical violations among drug court participants when measured 36 months after randomization. When measured 12 months after randomization, there were no re-arrest rate differences, and differences in technical violations appeared only for drug-related violations.
  • A Washington, D.C. study of a pretrial drug court program involving drug felony defendants found reduced drug use during the program (within 12 months of randomization), though this effect was not sustained for the year after sentencing. This study did not reveal significant differences in re-arrest rates for the drug court program compared to the standard docket a year after sentencing.
  • A Maryland study of a post-conviction drug court program involving drug-involved, non-violent offenders found that drug court participants reported using fewer types of drugs than the control group 36 months after randomization and were less likely to be re-arrested during the program and within 24 months of randomization. By 36 months after randomization, three-quarters of the study participants had been re-arrested regardless of their group assignment.
  • An Oregon study of a post-adjudication drug court program involving offenders on probation, parole, and post-prison supervision found that 12 months after randomization the drug court group had fewer new charges, including fewer new drug-specific charges than the control group.

Given the prevalence of drug courts and the lack of certainty regarding whether they are achieving their intended goals, the A2J Lab would like to see more randomized studies in this arena. Specifically, we see a need to carefully describe, define, and measure the processes and activities that occur in drug courts, identify their key ingredients, and determine their short- and long-term effects on offenders and public safety. Do you work with or within a drug court? What are you hoping it accomplishes? Would you be interested in finding out whether it achieves those aims? The A2J Lab would be happy to work with interested field partners or researchers to tackle this topic!

[1] See https://www.nij.gov/topics/courts/drug-courts/pages/welcome.aspx

 

Drug Court RCTs

Although this chart can’t possibly cover all of the important nuances of these studies, it will give you a general sense of the key features of each study’s design, data collection, and results.

Maricopa County, AZ[1]
Offenders: Probationers with a first-time felony conviction for drug possession
Program: Post-adjudication; rewards include reductions in probation time and fees
Randomization: Drug court vs. routine probation (3 tracks of varying intensity)
Findings:
1. Drug use: no significant differences.
2. Re-arrest: 12 months after randomization, there were no differences between drug court participants and routine probationers. At 36 months, however, re-arrest rates were lower for the drug court group: drug court participants averaged 2 violations compared to 3 for the routine probation group.
3. Drug-related violations: lower violation rates for the treatment group compared to the probation tracks at 12 and 36 months. At 12 months, 10% of the drug court group and 26% of routine probationers had received a drug-related technical violation. At 36 months, this effect extended to all types of technical violations (not limited to drug-related): 64% of the drug court group and 75.2% of the probationers had received a technical violation.

Washington, D.C. Superior Court[2]
Offenders: Drug felony defendants (not limited to addicts)
Program: Pretrial; rewards include reductions in the severity of criminal penalties
Randomization: Drug court vs. sanctions docket vs. standard docket
Findings:
1. Drug use: both the drug court and the sanctions docket exhibited reduced drug use, measured 12 months after randomization, compared to the standard docket. Specifically, 17% of the drug court group and 21% of the sanctions group tested drug free 12 months after randomization, compared to 11% of the standard docket. When asked about their drug use during the year after sentencing, however, there were no significant differences between the groups.
2. Re-arrest: the sanctions group was less likely than the standard docket to report having been arrested in the year following sentencing: 19% of the sanctions group compared to 27% of the standard docket. This difference was not observed for the drug court versus the standard docket.
3. Other social outcomes: no differences overall across a range of social and economic outcomes. Drug court participants were less likely to have vehicle accidents while under the influence.

Baltimore, MD[3]
Offenders: Drug-involved, non-violent offenders, referred from one of two tracks: (1) circuit court felony cases supervised by parole and probation and (2) district court misdemeanor cases supervised by parole and probation
Program: Post-conviction, such that successful completion of the program would result in dropped charges
Randomization: Drug court vs. control (treatment as usual)
Findings:
1. Self-reported drug use: the drug court group self-reported using fewer types of drugs than the control group 3 years after randomization. Specifically, drug court participants scored lower on the drug variety scale than the control group (0.14 vs. 0.18) and lower on an alcohol addiction scale as well (1.2 vs. 1.4).
2. Re-arrest: drug court clients were less likely to be re-arrested during the program and at 24 months. Within 12 months of randomization, 48% of the drug court group compared to 64% of the control group were arrested for new offenses. At 24 months, the figures were 66.2% and 81.3%, respectively. At 36 months, the difference between the groups was no longer significant; by this point three-quarters had been re-arrested regardless of their assigned group. Recidivism was lowest among subjects who participated at higher levels in certified drug treatment, status hearings, and drug testing.
3. Time to re-arrest: the drug court group had a longer time to first re-arrest than the control group (adjusted for incarceration time). The two groups were essentially identical for the first 4 months, after which the drug court program had a greater impact.
4. Incarceration: no differences in number of days incarcerated, measured at 36 months.
5. Self-reported criminal activity: the drug court group self-reported less involvement in criminal activity than the control group 3 years after randomization.
6. Drug-related violations: the drug court group was significantly less likely to have been arrested for a drug offense than the control group, measured at 36 months from randomization: 55.5% of the drug court group versus 68.4% of the control group had a drug charge at 36 months.
7. Other social outcomes: the drug court group was less likely than the control group to be on welfare rolls 36 months after randomization: 4.3% versus 10.9% were receiving money from welfare. There were few differences between the groups for the other social outcomes measured.

Douglas, Jackson, Multnomah, and Umatilla Counties, OR[4]
Offenders: Offenders convicted of Measure 57 offenses (felony property or repeat drug delivery offense plus a score of 3+ on the Texas Christian University drug screen)
Program: Post-adjudication; offenders on probation, parole, and post-prison supervision
Randomization: Drug court vs. control (probation)
Findings:
1. Drug use: at the 18-month (post-randomization) follow-up interview, there were no significant differences in self-reported drug use within 30 days of the interview, with one exception: a larger proportion of drug court participants (42% at random assignment and 16% at follow-up) than control participants (17% and 0%) reported having used marijuana within the most recent 30 days. Across all drug categories except injection drugs, the percentage of users decreased significantly at follow-up for both groups, suggesting that court involvement generally may be related to reductions in substance use.
2. New charges: the drug court group had fewer new charges than the control group within 1 year of randomization: 28% fewer new charges and 26% fewer new cases.
3. Drug-related charges: the drug court group had 33% fewer new drug charges than the control group within 1 year of randomization.

[1] Deschenes, Elizabeth Piper, Susan Turner, and Peter W. Greenwood (1995) “Drug Court or Probation?: An Experimental Evaluation of Maricopa County’s Drug Court” The Justice System Journal, Volume 18, Number 1: 55-73; Turner, Susan, et al. (2002) A Decade of Drug Treatment Court Research. Substance Use & Misuse, Vol. 37, No. 12-13: 1489-1527

[2] Harrell, A.V., Cavanagh, S., & Roman, J. (1998) Final report: Findings from the evaluation of the D.C. Superior Court Drug Intervention Program. Washington, DC: The Urban Institute

[3] Banks, Duren and Denise C. Gottfredson (2004) “Participation in Drug Treatment Court and Time to Arrest” Justice Quarterly, Volume 21, Number 3: 637-658; Gottfredson, D.C. & Exum, M.L. (2002) The Baltimore City Drug Treatment Court: One-Year Results from a Randomized Study. Journal of Research in Crime and Delinquency. Vol. 39: 337-356; Gottfredson, D.C., Najaka, S.S., & Kearley, B.W. (2003) Effectiveness of Drug Treatment Courts: Evidence from a Randomized Trial. Criminology and Public Policy. Vol. 2: 171-196; Gottfredson, D.C., Kearley, B.W., Najaka, S.S., & Rocha, C.M. (2005) The Baltimore City Drug Treatment Court: Three-Year Self-Report Outcome Study. Evaluation Review. Vol. 29: 42-64; Gottfredson, D.C., Najaka, S.S., Kearley, B.W., & Rocha, C.M. (2006) Long-Term Effects of Participation in the Baltimore City Drug Treatment Court: Results from an Experimental Study. Journal of Experimental Criminology. Vol. 2: 67-98; Gottfredson, D.C., Kearley, B.W., Najaka, S.S., & Rocha, C.M. (2007) How Drug Treatment Courts Work: An Analysis of Mediators. Journal of Research in Crime and Delinquency. Vol. 44, Issue 3: 3-35; Gottfredson, D.C., Kearley, B.W., & Bushway, S.D. (2008) Substance Use, Drug Treatment, and Crime: An Examination of Intra-Individual Variation in a Drug Court Population. Journal of Drug Issues. Vol. 38, Issue 2: 601.

[4] Oregon Criminal Justice Commission and NPC Research (2015) Randomized Controlled Trial of Measure 57 Intensive Drug Court for Medium- to High-Risk Property Offenders. https://ndcrc.org/resource/randomized-controlled-trial-of-measure-57-intensive-drug-court-for-medium-to-high-risk-property-offenders/

 

Guest Post: Another RCT Tackling Failure to Appear, Part II

Today’s guest post comes from two Harvard Ph.D. students in Public Policy and Economics, respectively, Helen Ho and Natalia Emanuel. Helen and Natalia are affiliates of the Lab and have been working on their own randomized controlled trial (“RCT”) focused on failures to appear (“FTAs”) for arraignments. This post is the second in a series describing their study.

As we wrote in our last post, we’re interested in situations in which defendants miss their court case, also known as failure to appear (FTA). We’re working with a court system on a randomized controlled trial (RCT) to evaluate interventions that encourage people to show up for their arraignment or resolve their case ahead of time (if their case allows it).

Court staff members developed two postcards that would inform individuals of their court date, their case number, and the address of the courthouse at which their hearing was scheduled. The postcards also let them know about the consequences of missing their court date and accommodations that the court offers, such as free interpreters and the ability to reschedule.

The postcards had similar information, but used different behavioral nudges and designs. The first postcard emphasized that most people resolve their cases successfully, which is a social nudge. The postcard was also signed by the presiding judge, adding an official but personal invitation to resolve their case. The second postcard emphasized that the court was willing to help defendants resolve their case efficiently.

We tested the two postcards in traffic, misdemeanor, and municipal violations courts. The postcards increased the rate at which defendants either resolved their cases before the court date or showed up to court by 5 percentage points, which is equivalent to lowering the FTA rate by the same amount.

 

Notably, in the court dealing with municipal ordinances (the one called General Sessions), the postcards did not produce a statistically significant improvement overall. When we break out the treatment effects by postcard, however, the first postcard improved case resolution by a bit less than 5 percentage points, a statistically significant effect. We did not have a large enough sample size to confidently detect differences in treatment effects between the postcards.
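To see why comparing the two postcards to each other demands a much larger sample than comparing a postcard to no postcard, here is a back-of-the-envelope power calculation using the standard normal-approximation formula for two proportions. The baseline rates below (80%, 83%, 85%) are hypothetical round numbers chosen for illustration, not our study’s actual figures.

```python
import math

# Standard normal quantiles for two-sided alpha = 0.05 and 80% power
Z_ALPHA, Z_BETA = 1.96, 0.84

def n_per_arm(p1, p2):
    """Approximate sample size per arm needed to detect a difference
    between two proportions p1 and p2 (normal-approximation formula)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2)

# A 5-point effect (80% resolution without a postcard vs. 85% with one)
# versus a 2-point gap between two postcards (83% vs. 85%):
print(n_per_arm(0.80, 0.85))  # -> 902 per arm
print(n_per_arm(0.83, 0.85))  # -> 5265 per arm
```

Under these illustrative assumptions, detecting the smaller postcard-versus-postcard gap requires nearly six times as many cases per arm as detecting the postcard-versus-nothing effect.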

In our next phase, we are developing new interventions to further increase case resolution rates. To inform these interventions, we are conducting qualitative interviews with defendants about their experiences of receiving and resolving tickets. We hope to understand why people might miss their court dates and what the court could do to help them show up.

Spotlight: Fines and Fees

We at the Lab want to make sure that we are taking on research projects that address the needs of the access to justice field. So, occasionally we’ll post about some pressing A2J issues for which we think more rigorous research is really needed. We welcome your ideas on our thoughts below, of course. For many of these topics, we will be looking for field partners who may have an ideal site for such research. We may also be looking for other researchers with whom we might partner to conduct a study on the topic. If you’re interested in partnering with us on any of these spotlight topics, please let us know.

Last month, Attorney General Jeff Sessions retracted an Obama-era Justice Department letter that encouraged courts to be wary of the impact of fines and fees on low-income populations. The letter raised concerns about the imposition of such monetary penalties, discussing real-world consequences that disproportionately affect poor defendants. The core concern is this: the widespread practice of requiring monetary payments for infractions, misdemeanors, or felonies typically involves no inquiry into the defendant’s income or ability to pay. Instead, the penalties are based solely on offense type. Such fixed payments are more punitive for poor defendants than for wealthier ones, because the same fine presents an increasingly heavy burden as one moves down the income scale. For more information, see this response from Lisa Foster, former director of the Justice Department’s Office for Access to Justice, Access to Justice Lab Advisory Board member, and co-author of the original letter.
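The regressive arithmetic is easy to make concrete. The $1,000 misdemeanor fine figure comes from the list below; the two income levels are hypothetical illustrations.

```python
def fine_burden(fine, annual_income):
    """A flat fine expressed as a fraction of annual income."""
    return fine / annual_income

# The same $1,000 fine at two hypothetical income levels:
print(fine_burden(1000, 50000))  # -> 0.02 (2% of a $50,000 income)
print(fine_burden(1000, 10000))  # -> 0.1 (10% of a $10,000 income)
```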

Specific examples of such monetary penalties help bring the reality of this into focus[1]:

  • The fine for a misdemeanor is typically about $1,000.
  • The application fee a defendant must pay to hire a public defender can be as high as $400.
  • Jail booking fees range from $10 to $100.
  • Defendants can be made to pay fees upward of $200 for the juries that hear their cases.
  • Victims’ panel classes, where some defendants are mandated to hear about victims’ experiences and loss, can cost up to $75.
  • Drug courts can and often do make people pay for their own assessment, treatment, and frequent drug testing.

For state-by-state information on fines and fees, see the results of a 2014 survey conducted by NPR, NYU’s Brennan Center for Justice, and the National Center for State Courts.

The stakes are high for the individuals in the system, the communities the justice system is meant to protect, and the financial survival of the court system itself. On the latter point, fines and fees can have significant revenue-generating capacity for resource-constrained court budgets. So, how do we structure a system to rehabilitate offenders, decrease recidivism, improve public safety, and stabilize criminal justice budgets? How do we balance all of these priorities, especially in a context in which we don’t actually know the short- and long-term outcomes of these practices or their alternatives?

It may come as no surprise to our readers that we think some rigorous research is needed in this arena. We need to know what, exactly, the consequences of these fines and fees are for low-income individuals. We need to know whether alternative fee structures, perhaps set at least in part based on a person’s financial status, can effectively reduce recidivism and protect communities. We need to evaluate and understand alternative models, too. As noted in this recent New York Times op-ed, other scholars think the same.

The key is going to be ensuring that policymakers have the research they need to make informed changes. Are you administering a fines and fees-related program that you’d like to evaluate? Contact us to see if it might be a good candidate for a study.

[1] These statistics are replicated from a recent New York Times article, available here: https://www.nytimes.com/2018/01/03/opinion/alternative-justice-fines-prosecutors.html

 

Guest Post: Evaluating Make It Right

Today’s guest post is authored by Katy Weinstein Miller, Chief of Programs & Initiatives at the San Francisco District Attorney’s Office.

In 2013, San Francisco District Attorney George Gascón launched a new approach to handling juvenile delinquency.  Rather than prosecute young people accused of certain felony offenses, the office began offering them the opportunity to participate in “restorative community conferencing” – a facilitated, community-based conversation with the person they harmed, leading to an agreed plan for addressing that harm.  This model, called Make It Right, is an important step for San Francisco and for the field of criminal justice.  At a time when our juvenile caseload is at historic lows but our racial and ethnic disparities are at historic highs, we need new ways to address crime, promote healing, and make our community safer.

The implementation of Make It Right presented an opportunity – and in DA Gascón’s view, an obligation – to rigorously research the effectiveness of the program through a randomized controlled trial (RCT).  Our justice system has long operated based on precedent and gut instinct, with little attention to studying results.  While often at odds, prosecutors, defense counsel, and judges have shared a reluctance to engage in research that impacts the way they handle their cases.  To be sure, this is understandable for professionals who have been trained to give each case, and each client, individualized consideration.

RCTs present heightened ethical concerns for justice system stakeholders, particularly for diversion programs.  Random assignment requires us to deny the opportunity for some young people, but not others, to avoid prosecution and potentially alter their life course.  Conversely, it denies some victims the established protections of the court system.  Both our defendants and those they have harmed are disproportionately vulnerable populations.  Our gut tells us that restorative models can yield better outcomes than traditional prosecution – for both the young person and the victim – but without research, we just don’t know if that’s true.  The fact that our system disproportionately impacts vulnerable individuals in high stakes situations should underscore, not undercut, the need to employ rigorous methods to determine what works.

While logistical challenges can often derail RCTs in the justice sector, Make It Right’s design makes it well-suited for random assignment.  Our Juvenile Unit Managing Attorney reviews all of San Francisco’s juvenile cases, promoting uniformity in charging decisions and clarity about Make It Right program eligibility.  Following a three-step process, she determines (1) whether the case is chargeable; (2) whether the presenting offense is eligible for the program; and (3) whether the youth is ineligible to participate due to certain factors (such as geographic limitations and prior record/current probation status).  All cases flagged as eligible for the program are forwarded to our Juvenile Division Office Manager, who uses a randomized block method to assign each case to either the treatment or control group.  In each block of 10 cases, 7 are assigned to treatment and 3 to control.  If a case is randomized into the treatment group, our Office Manager directly refers the case to our nonprofit partners, who offer the program to the young person and victim and facilitate the restorative process.  If the case is randomized into the control group, the Office Manager prepares the charging documents for filing in court.  The randomization process has yielded an unexpected benefit: because our Managing Attorney can only refer cases that she is prepared to prosecute, it ensures that she is not using Make It Right to “widen the net” of young people involved in our justice system – often a negative side effect of implementing diversion programs.
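The block randomization procedure described above, blocks of 10 cases with 7 treatment slots each, can be sketched in a few lines. This is an illustrative implementation under our stated 7-in-10 design, not the office’s actual software; the case identifiers and seed are placeholders.

```python
import random

def block_randomize(case_ids, block_size=10, n_treatment=7, seed=None):
    """Assign cases to 'treatment' or 'control' in blocks, guaranteeing
    exactly n_treatment treatment slots in every full block."""
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(case_ids), block_size):
        block = case_ids[start:start + block_size]
        # Build this block's slate of arms and shuffle it.
        slate = ["treatment"] * n_treatment + \
                ["control"] * (block_size - n_treatment)
        rng.shuffle(slate)
        # A final partial block simply uses the first len(block) slots,
        # so each case still has an n_treatment-in-block_size chance.
        for case, arm in zip(block, slate):
            assignments[case] = arm
    return assignments
```

Blocking keeps the treatment-to-control ratio exact over time, so the study arms stay balanced even if intake slows or eligibility rules shift mid-study.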

For us, the hardest part of the Make It Right RCT is waiting for the results.  Preliminary findings are strongly encouraging – but the small scale of Make It Right means it is taking time to yield statistically significant findings.  The patience required to conduct rigorous research stands in direct contrast to our sense of urgency to reform the justice system – but we know that the results of that research will enable all of us to make more meaningful, effective change.

The Make It Right program is a partnership of the San Francisco District Attorney’s Office, nonprofits Community Works West, Huckleberry Youth Programs, and research & innovation center Impact Justice.  The program is under evaluation by the California Policy Lab at the University of California’s Goldman School of Public Policy.

Previewing and Reviewing Pretrial Risk Assessment RCTs

On Tuesday, Jan. 16, pretrial staff in Polk County, Iowa entered their offices with a slightly different charge. They had been accustomed to perusing a list of arrestees scheduled for first appearance and searching for individuals who qualified for an interview and pre-disposition release. That morning, some staff members continued this time- and resource-intensive practice. Others reviewed administrative records and entered nine risk factors into a new software system that calculates PSA risk scores (hopefully familiar to readers of this blog). Polk County is the first jurisdiction in Iowa to implement the PSA. Three more counties will join them in the coming months as pilot sites, and eventually the entire state will adopt it.
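For readers curious about the mechanics, tools of this kind are additive point scales: each risk factor maps to points, the points are summed, and the total maps to a risk level. The factors, weights, and cutoffs below are invented purely for illustration; they are not the PSA’s actual scoring rules.

```python
# Hypothetical factor -> points lookup (NOT the PSA's real factors/weights)
POINTS = {
    "pending_charge_at_arrest": {True: 1, False: 0},
    "prior_failure_to_appear": {0: 0, 1: 1, 2: 2},  # capped at 2 points
    "prior_felony_conviction": {True: 1, False: 0},
}

def risk_score(factors):
    """Sum the points for each factor in the defendant's record."""
    return sum(POINTS[name][value] for name, value in factors.items())

def risk_level(score, cutoffs=(1, 3)):
    """Map a raw score to low/moderate/high using hypothetical cutoffs."""
    if score <= cutoffs[0]:
        return "low"
    elif score <= cutoffs[1]:
        return "moderate"
    return "high"

record = {"pending_charge_at_arrest": True,
          "prior_failure_to_appear": 2,
          "prior_felony_conviction": False}
print(risk_level(risk_score(record)))  # -> moderate
```

The point of a scheme like this is transparency: given the same administrative record, any two staff members entering the factors arrive at the same score.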

As the A2J Lab looks ahead to launching its second RCT evaluation of the PSA, we came across a study of its progenitor, the Virginia Pretrial Risk Assessment Instrument (“VPRAI”). When the VPRAI arrived in courtrooms around the state, there was no way to convert risk predictions into actionable release recommendations. (That fact stands in stark contrast to the Decision-Making Framework accompanying the PSA.) The solution was the Praxis, “a decision grid that uses the VPRAI risk level and the charge category to determine the appropriate release type and level of supervision.” Virginia pretrial staff also embraced the so-called Strategies for Effective Pretrial Supervision (“STEPS”) program to “shift the focus . . . from conditions compliance to criminogenic needs and eliciting prosocial behavior.” The combination of these innovations, it seemed, would improve Virginia’s ability to pinpoint risk and reduce failure rates during the pre-disposition period.

Marie VanNostrand of Luminosity and two co-authors were interested in understanding, first, the VPRAI’s predictive value. Second, they assessed the benefits of the Praxis and STEPS program through a randomized study design. Unlike the A2J Lab’s field experiments, which usually take individuals as the units of randomization, the Virginia study randomized entire pretrial services offices to one of four conditions: (1) VPRAI only; (2) VPRAI + Praxis; (3) VPRAI + STEPS; and (4) VPRAI + Praxis + STEPS. The authors then used this exogenous (nerd speak for “completely external”) source of variation to analyze staff, judicial, and defendant responses.

The results were quite favorable for the introduction of the Praxis as well as for the VPRAI itself. One estimate suggested that higher VPRAI risk scores correlate strongly with higher actual risk: about two-thirds of the time, if one were to pick two defendants at random (one who failed and one who didn’t), the one who failed would have the higher VPRAI score. Pretrial services staff who had access to the Praxis also responded to its recommendations. Their concurrence (agreement) rate was 80%, and they were over twice as likely to recommend release relative to staff who did not have the decision grid. Next, the availability of the Praxis (versus not having it) was associated with a doubling of the likelihood that judges would release defendants before disposition.
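That “two-thirds of the time” statistic is a concordance probability, equivalent to the AUC of the risk score. A minimal sketch of how one could compute it from two lists of scores (the example scores are made up):

```python
def concordance(scores_failed, scores_ok):
    """P(a random failed defendant scores higher than a random
    non-failed one), counting ties as half -- i.e., the AUC of the
    risk score as a predictor of failure."""
    wins = 0.0
    for f in scores_failed:
        for s in scores_ok:
            if f > s:
                wins += 1
            elif f == s:
                wins += 0.5
    return wins / (len(scores_failed) * len(scores_ok))

# Toy example: failed defendants scored {3, 4, 5}, others {1, 2, 3}
print(concordance([3, 4, 5], [1, 2, 3]))  # 8.5 of 9 pairs concordant
```

A concordance of 0.5 means the score is no better than a coin flip; 1.0 means it perfectly separates failures from non-failures, so roughly two-thirds sits squarely in between.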

What about defendant outcomes? The authors found that the availability of the Praxis was associated with a lower likelihood of failing to appear or being arrested for a new crime. STEPS alone had no discernible effect.

The VPRAI study suggests a few lessons for our ongoing pretrial risk assessment work, including in Iowa. First, we continue to emphasize that the tool under investigation, the PSA, is far from a cold, lawless automaton, as many commentators seem to worry. Yes, algorithms produce scores, and decision matrices generate recommendations. But human beings must still consider that evidence alongside their own human judgment. One hope is that such evidence will enhance the quality of judges’ decision-making. For now, we just don’t know; that’s the reason for our PSA RCTs. Relatedly, we think that final verdicts on actuarial risk assessments should await reports like the VPRAI study and the A2J Lab’s growing portfolio of evaluations. There will always be local policy issues deserving of debate and attention. However, we need strong evidence for or against these tools’ value before praising or condemning them wholesale. Finally, we should, as always, evaluate this brave new world reliably. That means deploying, where possible, principles of experimental design. RCTs, simply put, represent our best shot at understanding causal relationships.

Stay tuned for more updates from Iowa and beyond!

The Evaluation Feedback Project Has Launched

Discussions about the use of performance standards and metrics to measure the quality, effectiveness, and efficiency of legal services have become common in the access to justice community. Increasingly, legal services programs are being asked to use data to communicate about their effectiveness to funders, community stakeholders, and policy makers. And, more importantly, by grounding decisions about legal assistance in evidence-based approaches, we will all be better prepared to determine how best to assist people in need.

Some service providers are responding to this call to implement better evaluation methods by designating an attorney or other administrative staff to manage surveying, the collection and analysis of administrative data, and collaboration with others to conduct needs assessments and impact analyses. However, many do not have a background in program evaluation, and there currently exists no organized, national resource for facilitating collaboration or the sharing of information across legal programs on this topic.

The access to justice community can do a lot to collaborate rather than each program reinventing the evaluation wheel. To facilitate the sharing of knowledge and expertise in an effort to grow evaluation capacity among our peers, the A2J Lab has partnered with Rachel Perry (Principal, Strategic Data Analytics) and Kelly Shaw-Sutherland (Manager, Research and Evaluation, Legal Aid of Nebraska) to launch a project that seeks to match programs that are working to develop evaluation instruments (e.g., client surveys, interview and focus group protocols, etc.) with experts who volunteer to provide feedback on the design of these tools. The volunteers are our own peers from the field who have done work in this arena, as well as a network of trained evaluation experts, many of whom have experience with evaluation in other fields.

Here’s how the project works:
1. A program or individual submits an evaluation tool for feedback;
2. We determine if the submission falls within the scope of this project;
3. We match the submission with 1-3 evaluators from a volunteer database;
4. Volunteers review the evaluation tool and provide feedback to the original submitter.

A secondary goal of this project is to build a community of data- and evaluation-oriented folks within the access to justice world. So we encourage all of you to get involved! Check out the project page to learn more, submit an evaluation instrument to receive feedback, or volunteer to provide feedback to other programs developing evaluation tools.

Happy New Year!

Happy (nearly) New Year from all of us here at the Lab!

We’re proud of all we accomplished in 2017. This past year, the Lab has grown in both size and impact.

We now have over 6,360 participants enrolled in the Lab’s evaluations. We’re collaborating with 38 partners, including court systems, legal aid organizations, and other academic institutions. Over 75 student team members, along with our staff, have developed over 1,850 pages of self-help materials, as well as two digital self-help tools, to test for efficacy as we seek to learn the best way to help pro se defendants.

As the Lab runs more and more studies, our impact increases—and so do our costs.

In 2018, we’re hoping to double the number of studies we have in the field, but we can’t do it without your support.

If you’re thinking about making any final gifts in 2017, would you consider making a contribution to help the Lab continue to learn the best ways to help people with legal problems? Your gift will be put to immediate use in support of the Lab’s mission.

We look forward to sharing more news of our work in 2018!

The Ethics of Randomization

We get a lot of questions about the ethics of randomized controlled trials in the legal profession. The questions from our potential study partners go something like this: Is it ethical to let something other than our professional judgment determine who gets services and who doesn’t, even if it is for the sake of research? And is it ethical to conduct such research when the stakes can be so high for our study participants?

We have answers.

First of all, we take very seriously our commitment to research that meets not only the standards set by Harvard University’s Committee on the Use of Human Subjects, but also a broader set of ethical norms from other fields conducting research on human subjects.

But we get it. Many people have ethical concerns that go beyond these standards. The study participants are often particularly vulnerable populations and many legal service providers are in this profession specifically to help people in need. Many of these cases involve life events for which the stakes are high: safety, shelter, health, and so forth. Access to legal services in many cases can end up being a critical lifeline.

So let’s talk about when the conditions are right for ethical randomization in the legal services context.

One important piece of context for most of the A2J Lab’s studies is resource scarcity. You are all likely acutely aware of how tenuous resources are in the legal services world: federal and state funding levels cause significant anxiety every year, revenue from Interest on Lawyer Trust Accounts continues to decline, and courts are facing budget cuts that are driving some of them to reduce hours of operation. The unmet legal needs are significant. Stanford Law professor Deborah Rhode estimates that four-fifths of the civil legal needs of the poor remain unmet.[1] The Legal Services Corporation estimates that 85% of the civil legal problems faced by low-income Americans receive inadequate or no legal help.[2]

In this resource-constrained context, we cannot provide services to everyone, and some mechanism must determine who receives services and who does not. That mechanism might be a human making triage determinations based on her professional judgment, or a first-come, first-served queue. It might also be a lottery, in which an impartial system allocates resources and determines service recipients at random. The point is that we are already unable to provide services to everyone. A lottery is one of several options for making such determinations, and in programs where services are already distributed by lottery, it is one we already use. So it is frequently ethical to use a lottery to allocate the scarce resource (certain kinds of legal help, court-sponsored mediation, etc.) that one of our studies seeks to evaluate.

The other important factor is equipoise. Essentially, the concept of equipoise means that we do not already know whether the way we are allocating resources or providing services is the most effective way. Because we have no established tradition in the law for conducting rigorous research on effectiveness, we have subsisted for years based on policy preferences, professional judgments, or educated guesses rather than evidence. Thus, in all of our studies, we are operating in a state of profound uncertainty that justifies the use of a lottery (randomization) to find out what works.
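To make the lottery concrete, here is a minimal, purely illustrative sketch in Python of how a study might randomly assign eligible cases to service arms. This is not the Lab’s actual randomization code, and the case identifiers and arm names are hypothetical.

```python
import random

def lottery_assignment(participants, arms, seed=None):
    """Assign each participant to a study arm at random, keeping arm
    sizes as equal as possible (a simple blocked lottery).
    Purely illustrative; not the A2J Lab's actual procedure."""
    rng = random.Random(seed)  # a fixed seed makes the lottery reproducible and auditable
    # Repeat the arm labels until there is one label per participant...
    labels = (arms * (len(participants) // len(arms) + 1))[:len(participants)]
    # ...then shuffle, so each participant's assignment is random.
    rng.shuffle(labels)
    return dict(zip(participants, labels))

# Hypothetical usage: four eligible cases, two service arms.
assignments = lottery_assignment(
    ["case-001", "case-002", "case-003", "case-004"],
    ["self-help materials", "attorney-client relationship"],
    seed=2018,
)
```

A seeded shuffle like this provides both impartiality (no one’s judgment determines who is offered which service) and balance (each arm receives roughly the same number of cases), which is what makes the resulting comparison between arms fair.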

Consider the chart below to see how equipoise and resource scarcity interact. The left column shows possible positive outcomes for a person working to resolve a legal issue with some less expensive form of legal assistance, say, self-help materials. The right column shows possible positive outcomes for that person were she to receive an expensive form of legal assistance, say, a traditional attorney-client relationship. The third row (in which a higher level of assistance causes a worse outcome) appears in strike-through because, we surmise, it happens so infrequently that it is safe to ignore. Note that in some of these hypotheticals, legal assistance changes the outcome, and in others, the legal intervention does not make a difference.

Note: it’s really hard to tell which row a given person belongs in. We can make educated guesses, and research can help us make good ex ante predictions. But it’s hard to tell in advance.

For now, though, suppose we did know what would happen to a particular person if we gave her self-help and what would happen to her if we gave her a traditional attorney-client relationship. In a resource-rich environment, one might simply look at whether this client experiences positive outcomes after receiving maximal legal assistance and stop there. In that case, we could provide an attorney-client relationship to anyone in rows 1 and 2. When resources are scarce, however, it becomes important to determine whether, when, and how those resources are really making a difference. The first row of the chart provides the ideal scenario for legal services: a person who would not succeed without legal assistance. The other rows present cases for which an investment of scarce dollars is not ideal: a person who would have succeeded even without maximal legal assistance (row 2), or a person who will not succeed even with maximal legal assistance (row 4). Every dollar spent on a case in row 2 or 4 is a dollar not invested in a row 1 case.

Row  Positive Outcome for Person Receiving Self-Help  Positive Outcome for Person Receiving an Attorney-Client Relationship
1    No                                               Yes
2    Yes                                              Yes
3    Yes                                              No
4    No                                               No

The problem is that, because of the lack of research-based information in the law, we don’t actually know how to identify which clients or cases will fall into which rows. Now imagine multiple columns showing different levels of legal assistance, different methods of delivering those services, and different types of cases, and it gets more complicated still. That’s what we need rigorous research for.

And really, if you think about it, the fact that the stakes can be high in the law generally is a reason to randomize and test, not to avoid doing so. When stakes are high, we should insist on rigorous evidence of effectiveness, not guesswork.

[1] Deborah L. Rhode, Access to Justice, Fordham Law Review.

[2] Legal Services Corporation, The Justice Gap: Measuring the Unmet Civil Legal Needs of Low-Income Americans (June 2017), https://www.lsc.gov/sites/default/files/images/TheJusticeGap-FullReport.pdf

More information on our Default Part II study in four graphs

We’ve been working on some new data representations for our Problem of Default Part II study, which is now in the field in Boston. This Part II study doesn’t have its own non-intervention control group (meaning, all of the groups we’re evaluating are receiving some sort of intervention). This is because Part I already demonstrated that even limited intervention has a statistically significant effect on defendants’ answer and appearance rates compared with no intervention. Part II seeks to build on that knowledge by testing whether some interventions are more effective than others.

That said, we always like to be as thorough as possible as we design our studies. To that end, before we launched Part II, we did some analysis of existing court case data for all small claims cases filed in 2016 to gather some baseline information. We’ve created four graphs, now live on a new study web page. (If you haven’t seen the study volume tracker, that’s worth a look as well.)

The graphs contain a lot of information, and, if you’re not familiar with statistics or the intricacies of programs available in Massachusetts courts, they might be a little difficult to read.

Before we drill into an example, a few notes on the variables. One variable is whether a hearing in a case was scheduled on a Lawyer for the Day (LFD) program day. The Massachusetts Lawyer for the Day program is a pro bono legal service that provides advising to pro se litigants in some courts on certain days of the week; the exact services and their availability vary between courts. Another variable is whether a defendant fails to appear (FTAs) at a given hearing.[1] The graphs break down the data on these two variables at different courts in four ways:

  • If a defendant ever failed to appear (FTA’d) at any hearing that was held
  • If a defendant failed to appear at their first hearing that was held
  • If the defendant’s first scheduled hearing was scheduled on a day when the Lawyer for the Day (LFD) program was happening at the court and the defendant appeared at that scheduled hearing
  • If any of the defendant’s scheduled hearings were scheduled on a day when the LFD program was happening at the court and the defendant appeared at one or more such scheduled hearings
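As a hypothetical sketch (in Python, and not the code behind the actual graphs), the four indicators above could be computed from a case’s chronologically ordered hearing records like this; the field names held, appeared, and lfd_day are assumptions for illustration:

```python
def case_indicators(hearings):
    """Compute the four per-case indicators from a list of hearing records,
    each a dict with 'held', 'appeared', and 'lfd_day' booleans, in
    chronological order. Field names are hypothetical."""
    held = [h for h in hearings if h["held"]]
    return {
        # Defendant ever FTA'd at any hearing that was held
        "ever_fta": any(not h["appeared"] for h in held),
        # Defendant FTA'd at the first hearing that was held
        "fta_first_held": bool(held) and not held[0]["appeared"],
        # First scheduled hearing fell on an LFD day and the defendant appeared at it
        "first_scheduled_lfd_appeared": bool(hearings)
            and hearings[0]["lfd_day"] and hearings[0]["appeared"],
        # Any scheduled hearing fell on an LFD day and the defendant appeared at it
        "any_scheduled_lfd_appeared": any(
            h["lfd_day"] and h["appeared"] for h in hearings
        ),
    }

# Hypothetical case: two held hearings, both on LFD days;
# the defendant FTA'd at the first and appeared at the second.
example_case = [
    {"held": True, "appeared": False, "lfd_day": True},
    {"held": True, "appeared": True, "lfd_day": True},
]
indicators = case_indicators(example_case)
```

Each indicator is a per-case yes/no answer, so the graphed values are simply the proportion of cases in each court for which the indicator is true.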

Let’s take a look at an example data point:

In this example, the circled dot shows the proportion of study-ineligible cases (noted by color) in Cambridge Small Claims Court (y-axis). The dot’s size indicates that it represents about 0.4 of the court’s total cases, which here works out to around 325 cases (0.4 of the court’s 811 cases in the sample).

The dot shows us that in almost 25% of the study-ineligible cases in Cambridge Small Claims Court, the first hearing was scheduled on a Lawyer for the Day program weekday and the defendant appeared at that hearing.

Our hope is that these graphs, along with the frequently updated study volume information, provide a window into the study’s design and progress as we move forward. Look for more updates on data from this and our other studies in early 2018.

[1] In Boston Municipal Court (Civil), the defendant FTAs if the defendant does not file an answer or does not appear at the first hearing; the defendant does not FTA if the defendant does both of those things.