The Evaluation Feedback Project Has Launched

Discussions about the use of performance standards and metrics to measure the quality, effectiveness, and efficiency of legal services have become common in the access to justice community. Increasingly, legal services programs are being asked to use data to communicate about their effectiveness to funders, community stakeholders, and policy makers. And, more importantly, by grounding decisions about legal assistance in evidence-based approaches, we will all be better prepared to determine how best to assist people in need.

Some service providers are responding to this call to implement better evaluation methods by designating an attorney or other administrative staff member to manage surveying, the collection and analysis of administrative data, and collaboration with others to conduct needs assessments and impact analyses. However, many of these designees do not have a background in program evaluation, and there is currently no organized, national resource for facilitating collaboration or the sharing of information across legal programs on this topic.

The access to justice community can do a lot to collaborate rather than having each program reinvent the evaluation wheel. To facilitate the sharing of knowledge and expertise in an effort to grow evaluation capacity among our peers, the A2J Lab has partnered with Rachel Perry (Principal, Strategic Data Analytics) and Kelly Shaw-Sutherland (Manager, Research and Evaluation, Legal Aid of Nebraska) to launch a project that seeks to match programs that are working to develop evaluation instruments (e.g., client surveys, interview and focus group protocols) with experts who volunteer to provide feedback on the design of these tools. The volunteers are our own peers from the field who have done work in this arena, as well as a network of trained evaluation experts, many of whom have experience with evaluation in other fields.

Here’s how the project works:
1. A program or individual submits an evaluation tool for feedback;
2. We determine if the submission falls within the scope of this project;
3. We match the submission with 1-3 evaluators from a volunteer database;
4. Volunteers review the evaluation tool and provide feedback to the original submitter.

A secondary goal of this project is to build more of a community of data- and evaluation-oriented folks within the access to justice world. So, we encourage all of you to get involved! Check out the project page to learn more, submit an evaluation instrument to receive feedback, or volunteer to provide feedback to other programs working on developing evaluation tools.

The Ethics of Randomization

We get a lot of questions about the ethics of randomized controlled trials in the legal profession. The questions from our potential study partners go something like this: Is it ethical to let something other than our professional judgment determine who gets services and who doesn’t, even if it is for the sake of research? And is it ethical to conduct such research when the stakes can be so high for our study participants?

We have answers.

First of all, we take very seriously our commitment to research that meets not only the standards set by Harvard University’s Committee on the Use of Human Subjects, but also a broader set of ethical norms from other fields conducting research on human subjects.

But we get it. Many people have ethical concerns that go beyond these standards. The study participants are often particularly vulnerable populations and many legal service providers are in this profession specifically to help people in need. Many of these cases involve life events for which the stakes are high: safety, shelter, health, and so forth. Access to legal services in many cases can end up being a critical lifeline.

So let’s talk about when the conditions are right for ethical randomization in the legal services context.

One important contextual factor for most of the A2J Lab’s studies is resource scarcity. You are all likely acutely aware of how tenuous resources are in the legal services world: federal and state funding levels cause significant anxiety every year, revenue from Interest on Lawyer Trust Accounts continues to decline, and courts are facing budget cuts that are driving some of them to reduce hours of operation. The unmet legal needs are significant. Stanford Law professor Deborah Rhode estimates that four-fifths of the civil legal needs of the poor remain unmet.[1] The Legal Services Corporation estimates that 85% of the civil legal problems faced by low-income Americans receive inadequate or no legal help.[2]

In this resource-constrained context we cannot provide services to everyone, and some mechanism must determine who receives services and who does not. This mechanism might involve a human making triage determinations based on her professional judgment, or it might distribute resources on a first-come, first-served basis. It might also look like a lottery, in which some impartial system allocates resources and determines service recipients randomly. The point is, we are already unable to provide services to everyone. A lottery is one of several options we have for making such determinations, and, in settings where services are already distributed by lottery, one we already use. So it is frequently ethical to use a lottery to allocate the scarce resource (certain kinds of legal help, court-sponsored mediation, etc.) that one of our studies seeks to evaluate.
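To make the lottery idea concrete, here is a minimal sketch in Python — not the A2J Lab’s actual intake system, just an illustration of what an impartial allocation mechanism looks like. The applicant names and slot count are invented for the example.

```python
import random

def lottery(applicants, slots, seed=None):
    """Randomly allocate a scarce resource: every applicant has an
    equal chance, and exactly `slots` of them are offered services."""
    rng = random.Random(seed)  # seeded for a reproducible drawing
    offered = rng.sample(applicants, k=min(slots, len(applicants)))
    not_offered = [a for a in applicants if a not in offered]
    return offered, not_offered

# Hypothetical example: 3 service slots for 6 applicants.
offered, not_offered = lottery(["A", "B", "C", "D", "E", "F"], slots=3, seed=7)
```

Because every applicant has the same chance of being offered services, the two resulting groups are comparable on average — which is exactly the property that lets a study measure what the service itself changes.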

The other important factor is equipoise. Essentially, the concept of equipoise means that we do not already know whether the way we are allocating resources or providing services is the most effective way. Because we have no established tradition in the law for conducting rigorous research on effectiveness, we have subsisted for years based on policy preferences, professional judgments, or educated guesses rather than evidence. Thus, in all of our studies, we are operating in a state of profound uncertainty that justifies the use of a lottery (randomization) to find out what works.

Consider the chart below to think about how equipoise and resource scarcity interact. The left column shows possible positive outcomes for a person working to solve a legal issue with some less expensive form of legal assistance, say, self-help materials. The right column shows possible positive outcomes for that person were she to receive an expensive form of legal assistance, say, a traditional attorney-client relationship. The third row (in which a higher level of assistance causes a worse outcome) appears in strike-through because, we surmise, it happens so infrequently that it is safe to ignore. Note that in some of these hypotheticals, legal assistance changes the outcome and, in others, the legal intervention does not make a difference.

Note: it’s really hard to tell into which row a given person falls. We can make educated guesses, and research can help us make better ex ante predictions. But it’s hard to tell in advance.

For now, though, suppose we did know what would happen to a particular person if we gave her self-help and what would happen if we gave her a traditional attorney-client relationship. In a resource-rich environment, one might simply ask whether the client experiences positive outcomes after receiving maximal legal assistance and stop there. In that case, we could provide an attorney-client relationship to anyone in rows 1 and 2. When resources are scarce, however, it becomes important to determine whether, when, and how those resources are really making a difference. The first row of the chart presents the ideal scenario for legal services: a person who would not succeed without legal assistance. The other rows present cases for which an investment of scarce dollars is not ideal: a person who would have succeeded even without maximal legal assistance (row 2), or a person who will not succeed even with maximal legal assistance (row 4). Every dollar spent on a case that falls into row 2 or row 4 is a dollar not invested in a row 1 case.

Row   Positive Outcome for Person        Positive Outcome for Person Receiving
      Receiving Self-Help                an Attorney-Client Relationship
 1    No                                 Yes
 2    Yes                                Yes
 3    Yes                                No
 4    No                                 No

The problem is that, because of the lack of research-based information in the law, we don’t actually know how to identify which clients or cases will fall into which rows. Now imagine there are multiple columns showing different levels of legal assistance, methods for delivering such services, and types of cases. It gets more complicated still. That’s what we need rigorous research for.
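This is also why randomization recovers the information in the chart even though we can never observe both columns for the same person. A minimal simulation sketch, with row proportions that are entirely invented for illustration, shows the logic: the difference in success rates between the lottery’s two groups estimates the share of row 1 cases — the people for whom the extra assistance actually changes the outcome.

```python
import random

# Each row maps to (outcome with self-help, outcome with attorney).
# Row 3 is omitted, per the chart, as safe to ignore.
ROW_TYPES = {
    "row1": (False, True),   # succeeds only with an attorney
    "row2": (True, True),    # succeeds either way
    "row4": (False, False),  # succeeds neither way
}
# Hypothetical proportions -- unknown in practice; estimating their
# consequences is exactly what an RCT is for.
WEIGHTS = {"row1": 0.3, "row2": 0.4, "row4": 0.3}

rng = random.Random(42)
population = rng.choices(
    list(ROW_TYPES), weights=[WEIGHTS[r] for r in ROW_TYPES], k=10_000
)

treated, control = [], []
for row in population:
    outcome_selfhelp, outcome_attorney = ROW_TYPES[row]
    if rng.random() < 0.5:           # the lottery: half get the attorney
        treated.append(outcome_attorney)
    else:
        control.append(outcome_selfhelp)

# Difference in success rates estimates the proportion of row 1 cases.
effect = sum(treated) / len(treated) - sum(control) / len(control)
```

Under these assumed weights, the treated group succeeds about 70% of the time and the control group about 40%, so `effect` comes out near 0.3 — even though no individual’s row membership was ever observed directly.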

And really, if you think about it, the fact that the stakes can be high in the law generally is a reason to randomize and test, not to avoid doing so. When stakes are high, we should insist on rigorous evidence of effectiveness, not guesswork.

[1] Deborah Rhode, Access to Justice, Fordham Law Review

[2] The Legal Services Corporation, The Justice Gap: Measuring the Unmet Civil Legal Needs of Low-Income Americans, June 2017, https://www.lsc.gov/sites/default/files/images/TheJusticeGap-FullReport.pdf