The Access to Justice Lab at Harvard Law School
April Faith-Slaker

RCTs and Other Evaluation Methods

May 31, 2018 by April Faith-Slaker

With so many types of assessments, evaluation designs, and research methods out there, it can be confusing for non-profits to figure out which approach to take when assessing the effectiveness of their programs. The A2J Lab has certainly spent time talking about the value of the randomized controlled trial (RCT), but how that method fits in with the others may not always be clear. So we've put together a chart that briefly explains some of the most essential evaluation methods, describes when to use each, lays out their strengths and limitations, and provides examples of studies and reports for each method.

The chart can be viewed and downloaded here. We have also created versions of the chart that integrate a technology example scenario and a pro bono innovation example scenario.

A few brief points about how to think about the chart:

  1. Define your research question. Be clear about the question you are actually asking and answering. The methodologies in the chart answer different types of questions, so check the first column to identify your research question and then determine which methodology will best answer it.
  2. Know what you know and don't know. Be clear about what you can and cannot learn from a particular methodology. The columns labeled "usefulness" and "limitations" explain what each approach will and will not tell you. When you report results, do not extend them beyond what the methodology actually allows you to know.
  3. Consider multiple methods. Using more than one type of evaluation method helps make sure you have all of your bases covered. In fact, when we conduct randomized controlled trials, we often support our studies with other methodologies as well.

Thanks to the Self-Represented Litigation Network Research Working Group and to David Udell at the National Center for Access to Justice for providing feedback on this chart. If you have any questions about the chart or the various methods explained here, feel free to email April at afaithslaker[at]law.harvard.edu.

Spotlight: Drug Courts

February 6, 2018 by April Faith-Slaker

According to the National Institute of Justice, there are more than 3,000 drug courts across the United States.[1] As designed, these courts aim to rehabilitate drug-addicted offenders, reduce recidivism, and potentially reduce costs to the system through lower incarceration rates. Yet only a handful of studies have used a randomized design to rigorously evaluate the effectiveness of such programs. Because variability in program design and research methodologies leaves the results not yet entirely clear, the A2J Lab thinks this would be a great area for continued, rigorous research.

First, what exactly is a drug court? Drug courts are specialized treatment courts that offer monitored treatment, drug testing, and other services to offenders, with intensive court supervision as a key component. The treatment programs usually involve counseling, therapy, and education, along with drug tests and other services that target vocational, educational, family, health, and other issues. Meanwhile, intensive court supervision encourages compliance through status hearings and a system of rewards and sanctions. The intense relationship with the judge is also believed to increase offender perceptions of procedural justice, which is thought to provide an additional deterrent to crime through trust and confidence in the judicial system.

To date, four randomized studies of drug courts have taken place (see the chart below for details). Together they suggest that drug court participation may have some positive effects on drug use and recidivism, although it is quite unclear how long these effects last after offenders complete the programs. Additionally, variation in the programs themselves, as well as in the outcomes collected, makes the findings hard to interpret. Notably, the studies vary in the timing of the intervention (pretrial, post-adjudication, post-conviction), participant eligibility, comparison groups, program length, and timing of outcome collection, as well as in the definition of recidivism.

Broadly speaking, the studies suggest:

  • An Arizona study of a post-adjudication drug court involving probationers with a first-time felony conviction for drug possession found lower re-arrest rates and fewer technical violations among drug court participants when measured 36 months after randomization. At 12 months after randomization, the re-arrest differences did not appear, and the technical-violation differences appeared only for drug-related violations.
  • A Washington, D.C. study of a pretrial drug court program involving drug felony defendants found reduced drug use during the program (within 12 months of randomization), though this effect was not sustained for the year after sentencing. The study did not reveal significant differences in re-arrest rates between the drug court program and the standard docket a year after sentencing.
  • A Maryland study of a post-conviction drug court program involving drug-involved, non-violent offenders found that drug court participants reported using fewer types of drugs than the control group 36 months after randomization and were less likely to be re-arrested during the program and within 24 months of randomization. By 36 months after randomization, three-quarters of the study participants had been re-arrested regardless of their group assignment.
  • An Oregon study of a post-adjudication drug court program involving offenders on probation, parole, and post-prison supervision found that, 12 months after randomization, the drug court group had fewer new charges, including fewer new drug-specific charges, than the control group.

Given the prevalence of drug courts and the lack of certainty about whether they are achieving their intended goals, the A2J Lab would like to see more randomized studies in this arena. Specifically, we see a need to carefully describe, define, and measure the processes and activities that occur in drug courts, identify their key ingredients, and determine their short- and long-term effects on offenders and public safety. Do you work with or within a drug court? What are you hoping it accomplishes? Would you be interested in finding out whether it achieves those aims? The A2J Lab would be happy to work with interested field partners or researchers to tackle this topic!

[1] See https://www.nij.gov/topics/courts/drug-courts/pages/welcome.aspx

 

Drug Court RCTs

Although this chart can't possibly cover all of the important nuances of these studies, it will give you a general sense of the key features of each study's design, data collection, and results.

Maricopa County, AZ[1]
Offenders/Eligibility: Probationers with a first-time felony conviction for drug possession.
Program: Post-adjudication; rewards include reductions in probation time and fees.
Randomization: Randomized to drug court or to routine probation (3 tracks of varied intensity).
Findings:
1. Drug use: no significant differences.
2. Re-arrest: 12 months after randomization, there were no differences between drug court participants and routine probation. At 36 months, however, re-arrest rates were lower for the drug court group: drug court participants averaged 2 violations compared to 3 for the routine probation group.
3. Drug-related violations: lower violation rates for the drug court group compared to the probation tracks at both 12 and 36 months. At 12 months, 10% of the drug court group and 26% of routine probationers had received a drug-related technical violation. At 36 months, this effect extended to all types of technical violations (not limited to drug-related): 64% of the drug court group and 75.2% of the probationers had received a technical violation.

Washington, D.C. Superior Court[2]
Offenders/Eligibility: Drug felony defendants (not limited to addicts).
Program: Pretrial; rewards include reductions in the severity of criminal penalties.
Randomization: Randomized to drug court, a sanctions docket, or the standard docket.
Findings:
1. Drug use: both the drug court and sanctions docket groups exhibited reduced drug use, measured 12 months after randomization, compared to the standard docket. Specifically, 17% of the drug court group and 21% of the sanctions group tested drug free 12 months after randomization, compared to 11% of the standard docket. When asked about their drug use during the year after sentencing, however, the groups showed no significant differences.
2. Re-arrest: the sanctions group was less likely than the standard docket to report having been arrested in the year following sentencing: 19% of the sanctions group compared to 27% of the standard docket were re-arrested. This difference was not observed between the drug court group and the standard docket.
3. Other social outcomes: no differences overall across a range of social and economic outcomes, although drug court participants were less likely to have vehicle accidents while under the influence.

Baltimore, MD[3]
Offenders/Eligibility: Drug-involved, non-violent offenders, referred from one of two tracks: (1) circuit court felony cases supervised by parole and probation, and (2) district court misdemeanor cases supervised by parole and probation.
Program: Post-conviction, such that successful completion of the program would result in dropped charges.
Randomization: Randomized to drug court vs. control (treatment as usual).
Findings:
1. Self-reported drug use: the drug court group self-reported using fewer types of drugs than the control group 3 years after randomization. Specifically, drug court participants scored lower on a drug variety scale (0.14 vs. 0.18) and on an alcohol addiction scale (1.2 vs. 1.4).
2. Re-arrest: drug court clients were less likely to be re-arrested during the program and at 24 months. Within 12 months of randomization, 48% of the drug court group compared to 64% of the control group were arrested for new offenses. At 24 months, the figures were 66.2% of the drug court group and 81.3% of the control group. At 36 months, the difference between the groups was no longer significant; by this point three-quarters had been re-arrested regardless of their assigned group. Recidivism was lowest among subjects who participated at higher levels in certified drug treatment, status hearings, and drug testing.
3. Time to re-arrest: the drug court group had a longer time to first re-arrest than the control group (adjusted for incarceration time). The two groups were essentially identical for the first 4 months, after which the drug court program had a greater impact.
4. Incarceration: no differences in number of days incarcerated, measured at 36 months.
5. Self-reported criminal activity: the drug court group self-reported less involvement in criminal activity than the control group 3 years after randomization.
6. Drug-related violations: the drug court group was significantly less likely to have been arrested for a drug offense, measured at 36 months from randomization: 55.5% of the drug court group versus 68.4% of the control group had a drug charge.
7. Other social outcomes: the drug court group was less likely than the control group to be on welfare rolls 36 months after randomization: 4.3% versus 10.9% were receiving money from welfare. There were few differences between the groups on the other social outcomes measured.

Douglas, Jackson, Multnomah, and Umatilla Counties, OR[4]
Offenders/Eligibility: Offenders convicted of Measure 57 offenses (felony property or repeat drug delivery offenses, plus a score of 3+ on the Texas Christian University drug screen).
Program: Post-adjudication; offenders on probation, parole, and post-prison supervision.
Randomization: Randomized to drug court vs. control (probation).
Findings:
1. Drug use: at the 18-month (post-randomization) follow-up interview, there were no significant differences in self-reported drug use within 30 days of the interview, with one exception: a larger proportion of drug court participants (42% at random assignment and 16% at follow-up) than control participants (17% and 0%) reported having used marijuana within the most recent 30 days. Across all drug categories except injection drugs, the percentage of users decreased significantly at follow-up for both groups, suggesting that court involvement generally may be related to reductions in substance use.
2. New charges: the drug court group had fewer new charges than the control group within 1 year of randomization. Specifically, the drug court group had 28% fewer new charges and 26% fewer new cases.
3. Drug-related charges: the drug court group had 33% fewer new drug charges than the control group within 1 year of randomization.
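
For readers curious how differences like the Baltimore re-arrest rates are judged statistically, here is a minimal sketch of a two-proportion z-test in Python. The group sizes below are hypothetical placeholders (the post does not report them), so treat this as an illustration of the method, not a re-analysis of the study.

```python
# A minimal sketch of a two-proportion z-test, the kind of comparison behind
# findings like "66.2% of the drug court group vs. 81.3% of the control group
# re-arrested at 24 months." The group sizes below are hypothetical; the
# Baltimore study's actual sample sizes are not given in this post.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of p1 - p2
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical group sizes, chosen only to reproduce the reported proportions;
# do not treat these as study data.
z, p = two_proportion_ztest(x1=round(0.662 * 139), n1=139,
                            x2=round(0.813 * 96), n2=96)
print(f"z = {z:.2f}, p = {p:.4f}")
```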

[1] Deschenes, Elizabeth Piper, Susan Turner, and Peter W. Greenwood (1995) "Drug Court or Probation?: An Experimental Evaluation of Maricopa County's Drug Court." The Justice System Journal, Vol. 18, No. 1: 55-73; Turner, Susan, et al. (2002) "A Decade of Drug Treatment Court Research." Substance Use & Misuse, Vol. 37, Nos. 12-13: 1489-1527.

[2] Harrell, A.V., Cavanagh, S., & Roman, J. (1998) Final Report: Findings from the Evaluation of the D.C. Superior Court Drug Intervention Program. Washington, DC: The Urban Institute.

[3] Banks, Duren, and Denise C. Gottfredson (2004) "Participation in Drug Treatment Court and Time to Arrest." Justice Quarterly, Vol. 21, No. 3: 637-658; Gottfredson, D.C., & Exum, M.L. (2002) "The Baltimore City Drug Treatment Court: One-Year Results from a Randomized Study." Journal of Research in Crime and Delinquency, Vol. 39: 337-356; Gottfredson, D.C., Najaka, S.S., & Kearley, B.W. (2003) "Effectiveness of Drug Treatment Courts: Evidence from a Randomized Trial." Criminology and Public Policy, Vol. 2: 171-196; Gottfredson, D.C., Kearley, B.W., Najaka, S.S., & Rocha, C.M. (2005) "The Baltimore City Drug Treatment Court: Three-Year Self-Report Outcome Study." Evaluation Review, Vol. 29: 42-64; Gottfredson, D.C., Najaka, S.S., Kearley, B.W., & Rocha, C.M. (2006) "Long-Term Effects of Participation in the Baltimore City Drug Treatment Court: Results from an Experimental Study." Journal of Experimental Criminology, Vol. 2: 67-98; Gottfredson, D.C., Kearley, B.W., Najaka, S.S., & Rocha, C.M. (2007) "How Drug Treatment Courts Work: An Analysis of Mediators." Journal of Research in Crime and Delinquency, Vol. 44, Issue 3: 3-35; Gottfredson, D.C., Kearley, B.W., & Bushway, S.D. (2008) "Substance Use, Drug Treatment, and Crime: An Examination of Intra-Individual Variation in a Drug Court Population." Journal of Drug Issues, Vol. 38, Issue 2: 601.

[4] Oregon Criminal Justice Commission and NPC Research (2015) Randomized Controlled Trial of Measure 57 Intensive Drug Court for Medium- to High-Risk Property Offenders. https://ndcrc.org/resource/randomized-controlled-trial-of-measure-57-intensive-drug-court-for-medium-to-high-risk-property-offenders/

Spotlight: Fines and Fees

January 27, 2018 by April Faith-Slaker

We at the Lab want to make sure we are taking on research projects that address the needs of the access to justice field. So, occasionally we'll post about pressing A2J issues for which we think more rigorous research is needed. We welcome your reactions to the thoughts below, of course. For many of these topics, we will be looking for field partners who may have an ideal site for such research. We may also be looking for other researchers with whom we might partner to conduct a study on the topic. If you're interested in partnering with us on any of these spotlight topics, please let us know.

Last month, Jeff Sessions retracted an Obama-era Justice Department letter that encouraged courts to be wary of the impact of fines and fees on low-income populations. The letter drew attention to concerns about the imposition of such monetary penalties, discussing real-world consequences that disproportionately affect poor defendants. The core concern is this: the widespread practice of requiring monetary payments for infractions, misdemeanors, or felonies typically does not involve any inquiry into the defendant's income or ability to pay. Instead, the penalties are based solely on offense type. Such fixed payments are more punitive for poor defendants than for wealthier ones, as the same fine presents an increasingly larger burden as one moves down the income scale. For more information, see this response from Lisa Foster, former director of the Justice Department's Office for Access to Justice, Access to Justice Lab Advisory Board member, and co-author of the original letter.

Specific examples of such monetary penalties help bring the reality of this into focus[1]:

  • The fine for a misdemeanor is typically about $1,000.
  • The application fee a defendant must pay to hire a public defender can be as high as $400.
  • Jail booking fees range from $10 to $100.
  • Defendants can be made to pay fees upward of $200 for the juries who hear their cases.
  • Victims' panel classes, where some defendants are mandated to hear about victims' experiences and losses, can cost up to $75.
  • Drug courts can, and often do, make people pay for their own assessment, treatment, and frequent drug testing.

For state-by-state information on fines and fees, see the results of a 2014 survey conducted by NPR, NYU’s Brennan Center for Justice, and the National Center for State Courts.

The stakes are high for the individuals in the system, the communities the justice system is meant to protect, and the financial survival of the court system itself. On the latter point, fines and fees can have significant revenue-generating capacity for resource-constrained court budgets. So, how do we structure a system to rehabilitate offenders, decrease recidivism, improve public safety, and stabilize criminal justice budgets? How do we balance all of these priorities, especially in a context in which we don’t actually know what the short and long-term outcomes are of these practices or their alternatives?

It may come as no surprise to our readers that we think some rigorous research is needed in this arena. We need to know what, exactly, the consequences of these fines and fees are for low-income individuals. We need to know whether alternative fee structures, perhaps set at least in part according to a person's financial status, can effectively reduce recidivism and protect communities. We need to evaluate and understand alternative models, too. As noted in this recent New York Times op-ed, other scholars think the same.

The key is going to be ensuring that policymakers have the research they need to make informed changes. Are you administering a fines and fees-related program that you’d like to evaluate? Contact us to see if it might be a good candidate for a study.

[1] These statistics are replicated from a recent New York Times article, available here: https://www.nytimes.com/2018/01/03/opinion/alternative-justice-fines-prosecutors.html

The Evaluation Feedback Project Has Launched

January 4, 2018 by April Faith-Slaker

Discussions about the use of performance standards and metrics to measure the quality, effectiveness, and efficiency of legal services have become common in the access to justice community. Increasingly, legal services programs are being asked to use data to communicate about their effectiveness to funders, community stakeholders, and policy makers. And, more importantly, by grounding decisions about legal assistance in evidence-based approaches, we will all be better prepared to determine how best to assist people in need.

Some service providers are responding to this call to implement better evaluation methods by designating an attorney or other administrative staff to manage surveying, the collection and analysis of administrative data, and collaboration with others to conduct needs assessments and impact analyses. However, many do not have a background in program evaluation, and there currently exists no organized, national resource for facilitating collaboration or the sharing of information across legal programs on this topic.

The access to justice community can do a lot to collaborate rather than have each program reinvent the evaluation wheel. To facilitate the sharing of knowledge and expertise, and to grow evaluation capacity among our peers, the A2J Lab has partnered with Rachel Perry (Principal, Strategic Data Analytics) and Kelly Shaw-Sutherland (Manager of Research and Evaluation, Legal Aid of Nebraska) to launch a project that matches programs working to develop evaluation instruments (e.g., client surveys, interview and focus group protocols) with experts who volunteer to provide feedback on the design of these tools. The volunteers are our own peers from the field who have done work in this arena, as well as a network of trained evaluation experts, many of whom have experience with evaluation in other fields.

Here’s how the project works:
1. A program or individual submits an evaluation tool for feedback;
2. We determine if the submission falls within the scope of this project;
3. We match the submission with 1-3 evaluators from a volunteer database;
4. Volunteers review the evaluation tool and provide feedback to the original submitter.
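
To make step 3 concrete, here is a toy sketch in Python of what matching a submission to volunteer evaluators could look like. The data structures and field names are entirely hypothetical, invented for illustration; in practice this matching is done by people, not code.

```python
# A toy sketch of the matching step (step 3 above): pick 1-3 volunteers whose
# expertise covers the submitted evaluation tool. Everything here, including
# the volunteer records and expertise tags, is hypothetical.
import random

volunteers = [
    {"name": "Volunteer A", "expertise": {"client surveys", "focus groups"}},
    {"name": "Volunteer B", "expertise": {"client surveys"}},
    {"name": "Volunteer C", "expertise": {"interviews", "focus groups"}},
]

def match_evaluators(tool_type: str, pool: list[dict], k: int = 3) -> list[dict]:
    """Randomly select up to k volunteers qualified to review the tool."""
    qualified = [v for v in pool if tool_type in v["expertise"]]
    return random.sample(qualified, k=min(k, len(qualified)))

print([v["name"] for v in match_evaluators("client surveys", volunteers)])
```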

A secondary goal of this project is to create more of a community of data and evaluation oriented folks within the access to justice world. So, we encourage all of you to get involved! Check out the project page to learn more, submit an evaluation instrument to receive feedback, or volunteer to provide feedback to other programs working on developing evaluation tools.

The Ethics of Randomization

December 27, 2017 by April Faith-Slaker

We get a lot of questions about the ethics of randomized controlled trials in the legal profession. The questions from our potential study partners go something like this: Is it ethical to let something other than our professional judgment determine who gets services and who doesn't, even if it is for the sake of research? And is it ethical to conduct such research when the stakes can be so high for our study participants?

We have answers.

First of all, we take very seriously our commitment to research that meets not only the standards set by Harvard University’s Committee on the Use of Human Subjects, but also a broader set of ethical norms from other fields conducting research on human subjects.

But we get it. Many people have ethical concerns that go beyond these standards. The study participants are often particularly vulnerable populations and many legal service providers are in this profession specifically to help people in need. Many of these cases involve life events for which the stakes are high: safety, shelter, health, and so forth. Access to legal services in many cases can end up being a critical lifeline.

So let’s talk about when the conditions are right for ethical randomization in the legal services context.

One important contextual factor for most of the A2J Lab's studies is resource scarcity. You are all likely acutely aware of how tenuous resources are in the legal services world: federal and state funding levels cause significant anxiety every year, Interest on Lawyers' Trust Accounts (IOLTA) revenue continues to decline, and courts are facing budget cuts that are driving some of them to reduce hours of operation. The unmet legal needs are significant. Stanford Law professor Deborah Rhode estimates that four-fifths of the civil legal needs of the poor remain unmet.[1] The Legal Services Corporation estimates that 85% of the civil legal problems faced by low-income Americans receive inadequate or no legal help.[2]

In this resource-constrained context we cannot provide services to everyone, so some mechanism must determine who receives services and who does not. That mechanism might be a human making triage determinations based on her professional judgment, or it might be distribution on a first-come, first-served basis. It might also be a lottery, in which an impartial system allocates resources and determines service recipients randomly. The point is, we are already unable to provide services to everyone. A lottery is one of several options for making such determinations, and in programs where services are already distributed by lottery, it is one we already use. So it is frequently ethical to use a lottery to allocate the scarce resource (certain kinds of legal help, court-sponsored mediation, etc.) that one of our studies seeks to evaluate.
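
For the technically inclined, here is a minimal sketch in Python of what lottery-based intake might look like when a program has fewer service slots than eligible applicants. All names and numbers are hypothetical.

```python
# A minimal sketch of lottery-based intake under resource scarcity: more
# eligible applicants than service slots, with the surplus assigned to the
# comparison condition (e.g., self-help materials). All names and numbers
# here are hypothetical.
import random

def lottery_intake(applicants: list[str], slots: int, seed: int = 42):
    """Randomly allocate scarce service slots; everyone else is the comparison group."""
    rng = random.Random(seed)      # fixed seed so the allocation is auditable
    shuffled = applicants[:]
    rng.shuffle(shuffled)
    treatment = shuffled[:slots]   # offered the scarce, more intensive service
    control = shuffled[slots:]     # offered the less intensive alternative
    return treatment, control

applicants = [f"applicant_{i:03d}" for i in range(120)]
treatment, control = lottery_intake(applicants, slots=40)
print(len(treatment), "offered services;", len(control), "in comparison group")
```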

The other important factor is equipoise. Equipoise means that we do not already know whether the way we are allocating resources or providing services is the most effective way. Because the law has no established tradition of rigorous research on effectiveness, we have subsisted for years on policy preferences, professional judgments, and educated guesses rather than evidence. Thus, in all of our studies, we are operating in a state of profound uncertainty that justifies the use of a lottery (randomization) to find out what works.

Consider the chart below to see how equipoise and resource scarcity interact. The left column shows whether a person working to solve a legal issue has a positive outcome with some less expensive form of legal assistance, say, self-help materials. The right column shows whether that same person has a positive outcome with a more expensive form of legal assistance, say, a traditional attorney-client relationship. The third row (in which a higher level of assistance causes a worse outcome) appears in strike-through because, we surmise, it happens too infrequently to be worth worrying about. Note that in some of these hypotheticals legal assistance changes the outcome, and in others the legal intervention does not make a difference.

Note: it’s really hard to tell into which line people belong. We can make educated guesses, and research can try to help us make good ex ante predictions. But it’s hard to tell in advance.

For now, though, suppose we did know what would happen to a particular person if we gave her self-help, and what would happen if we gave her a traditional attorney-client relationship. In a resource-rich environment, one might simply ask whether the client experiences a positive outcome after receiving maximal legal assistance and stop there. In that case, we could provide an attorney-client relationship to anyone in rows 1 and 2. When resources are scarce, however, it becomes important to determine whether, when, and how those resources are really making a difference. The first row of the chart presents the ideal scenario for legal services: a person who would not succeed without legal assistance. The other rows present cases for which an investment of scarce dollars is not ideal: a person who would have succeeded even without maximal legal assistance (row 2), or a person who would not succeed even with maximal legal assistance (row 4). Every dollar spent on a case that falls into row 2 or row 4 is a dollar not invested in a row 1 scenario.

Row   Positive Outcome with Self-Help   Positive Outcome with an Attorney-Client Relationship
1     No                                Yes
2     Yes                               Yes
3     Yes                               No (struck through: assumed too rare to matter)
4     No                                No

The problem is, because of the lack of research-based information in the law, we don't actually know how to identify which clients or cases fall into which rows. Now imagine multiple columns showing different levels of legal assistance, methods for delivering those services, and types of cases; it quickly gets more complicated. That's what we need rigorous research for.
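
For readers who like to see this concretely, below is a minimal simulation sketch of the chart above, with made-up row probabilities. It illustrates why randomization helps even though we can't observe a person's row: the difference in success rates between randomly assigned groups estimates the net share of cases where assistance changes the outcome (row 1 minus row 3).

```python
# A minimal simulation of the chart above, with made-up row probabilities.
# Each person has two potential outcomes (with self-help, with an attorney);
# we only ever observe one of them, yet randomization still recovers the
# average effect: P(row 1) - P(row 3).
import random

rng = random.Random(0)
ROW_PROBS = {1: 0.25, 2: 0.35, 3: 0.05, 4: 0.35}  # hypothetical shares
# (self_help_outcome, attorney_outcome) per row, 1 = positive outcome
OUTCOMES = {1: (0, 1), 2: (1, 1), 3: (1, 0), 4: (0, 0)}

people = rng.choices(list(ROW_PROBS), weights=list(ROW_PROBS.values()), k=100_000)

treated_successes = control_successes = n_treated = n_control = 0
for row in people:
    self_help, attorney = OUTCOMES[row]
    if rng.random() < 0.5:          # lottery: half get the attorney
        n_treated += 1
        treated_successes += attorney
    else:                           # the rest get self-help materials
        n_control += 1
        control_successes += self_help

effect = treated_successes / n_treated - control_successes / n_control
print(f"Estimated effect: {effect:.3f} (true value: {0.25 - 0.05:.2f})")
```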

And really, if you think about it, the fact that the stakes in the law can be so high is a reason to randomize and test, not a reason to avoid doing so. When stakes are high, we should insist on rigorous evidence of effectiveness, not guesswork.

[1] Deborah L. Rhode, "Access to Justice," Fordham Law Review.

[2] The Legal Services Corporation, The Justice Gap: Measuring the Unmet Civil Legal Needs of Low-Income Americans, June 2017, https://www.lsc.gov/sites/default/files/images/TheJusticeGap-FullReport.pdf
