The rise in court-hosted online dispute resolution (“ODR”) is noticeable. But does it work? It seems intuitive that if courts are widely deploying ODR, which is touted to improve access to justice by removing the inconvenience of visiting a physical courthouse (some argue, the necessity of visiting at all) to participate in one’s case, we would find strong evidence to support such wide adoption.
That strong evidence base does not currently exist. In fact, no credible evidence as to ODR’s effectiveness, one way or the other, exists. We ourselves tried to investigate whether ODR works. Unfortunately, we still don’t know.
If you are reading this post, you likely already know that at the Access to Justice Lab at Harvard Law School, we focus on credible evaluation, which almost always means the randomized control trial (RCT), to understand the direct causal impact of legal interventions. In this post, we summarize two evaluations completed with invaluable partners: the Iowa Judicial Branch and the 11th Judicial Circuit of Florida.
In the coming months, watch for a longer publication going into more detail. Here we will proceed as follows: first, we will remind readers why randomization is an important component of a credible evaluation design. Then we will summarize our two studies, which differ in goals and design, but neither of which ultimately tested the key question of whether ODR (versus no ODR) affects relevant outcomes. Finally, we will end with lessons learned and opportunities for future research/court partnerships.
The Importance of Randomization in Evaluation Designs
Why are we so committed to evaluations incorporating randomization? Randomized studies create groups statistically identical to one another except that one is not exposed to an intervention or program (here, the availability of an ODR platform). This allows us to know, with as much certainty as our current knowledge systems allow, that the reason for any observed differences in outcomes is the intervention or program. By contrast, a commonly used methodology that compares outcomes before an intervention’s implementation to outcomes after could be rendered of little value by changes, fast or evolutionary, occurring at about the same time as the intervention. Such factors might include a change in presiding judge; a new crop of mediators or lawyers working on these cases; a change in the mechanism for accessing the court, such as by phone or by synchronous or asynchronous online interaction (uncoincidentally, similar to how ODR works); a change in filing fee amount; a change in the way cases proceed through the court; or a change in thinking among members of the bar regarding what is trial-worthy and what is better to settle. The gold-standard RCT neutralizes these potentially influencing factors as much as we currently know how to neutralize them.
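The pitfall of a before-and-after comparison can be made concrete with a toy simulation. Everything below is hypothetical: we invent a true ODR effect and a separate "period" effect standing in for contemporaneous changes (a new judge, new mediators, and so on), then show that a pre/post comparison conflates the two while a randomized comparison does not.

```python
import random

random.seed(0)

def outcome(uses_odr: bool, period_effect: float) -> float:
    """Hypothetical case-resolution score: a baseline of 10, plus a
    secular time trend (period_effect), plus a true ODR effect of +2.0
    when ODR is used, plus noise. All numbers are invented."""
    return 10.0 + period_effect + (2.0 if uses_odr else 0.0) + random.gauss(0, 1)

n = 5000

# Pre/post comparison: the "post" period also brings contemporaneous
# changes, modeled as a +3.0 period effect unrelated to ODR.
pre = [outcome(False, 0.0) for _ in range(n)]
post = [outcome(True, 3.0) for _ in range(n)]
prepost_estimate = sum(post) / n - sum(pre) / n  # roughly 5.0: ODR + trend

# RCT: both arms are observed in the same period, so the period effect
# hits both equally and cancels out of the comparison.
treat = [outcome(True, 3.0) for _ in range(n)]
control = [outcome(False, 3.0) for _ in range(n)]
rct_estimate = sum(treat) / n - sum(control) / n  # roughly 2.0: ODR alone
```

The pre/post estimate absorbs the time trend and roughly doubles the true effect; the randomized estimate recovers it.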
It was the Best of Times, it was the Worst of Times: A Tale of Two Studies
As suggested above, the threshold question of whether ODR works or not is not yet answered. Before we understand what components of ODR make it better or worse, we need to know if the concept overall works. We went into these evaluations hoping to investigate that important first inquiry. We came closer in Florida.
In Florida, we attempted an encouragement design. We sought to answer whether encouraging people to use an ODR platform to resolve traffic compliance matters results in more use of the platform and, if so, whether those who use the platform experience better outcomes than those who do not. The hope was that people receiving encouragement would do the thing we encouraged them to do at much higher rates than those who did not receive it. If that hope had been realized, then by randomizing encouragement to use the intervention, giving it to some and not others, randomly selected, we would have effectively been randomizing the intervention itself.
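The arithmetic behind an encouragement design is the Wald (instrumental variables) estimator: the effect of encouragement on the outcome (intent to treat) divided by its effect on platform uptake (the first stage) estimates the treatment effect among those whom the encouragement moves. A minimal sketch, using the study's group sizes but entirely hypothetical uptake and resolution counts (none of these counts come from the study's data):

```python
# Group sizes from the Florida study; all other counts are invented
# solely to illustrate the estimator.
n_enc, n_ctl = 289, 274

used_enc, used_ctl = 120, 60          # hypothetical platform use per arm
resolved_enc, resolved_ctl = 200, 160  # hypothetical case resolutions per arm

# First stage: how much did encouragement move platform use?
first_stage = used_enc / n_enc - used_ctl / n_ctl

# Intent to treat: effect of *being encouraged* on resolution.
itt = resolved_enc / n_enc - resolved_ctl / n_ctl

# Wald/IV estimate: ITT scaled by the first stage gives the local
# average treatment effect of actually using the platform.
late = itt / first_stage
```

Note what happens when the first stage is near zero, as it was with our postcard: the denominator vanishes and the ratio becomes uninformative, which is precisely why a failed encouragement leaves the underlying question unanswered.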
Encouragement came in the form of a postcard. Individuals with eligible alleged traffic infractions were randomly assigned to receive this encouragement or not. Nothing else about their case changed: law enforcement still issued the same citations, cases were still scheduled as a matter of course with the court, and they proceeded if no other action was taken prior to that scheduled date (such as paying the ticket or using the ODR platform to show remediation of noncompliance). We ended with 289 study participants getting the encouragement and 274 not.
Encouragement designs work only if the encouragements . . . er . . . encourage. Our postcard didn’t. We weren’t terribly optimistic that it would. But we were unable to persuade stakeholders to adopt a stronger design.
In other words, our two groups used the ODR platform at nearly the same rate, meaning we cannot untangle the effect of ODR itself from other possible causes of the outcomes we observed.
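Whether an encouragement produced a usable first stage can be checked with a simple two-proportion z-test on uptake. The counts below are hypothetical, chosen only to illustrate what a failed first stage looks like (the group sizes are the study's; the uptake counts are not):

```python
from math import sqrt

# Group sizes from the study; uptake counts are invented and nearly
# equal across arms, mimicking a failed encouragement.
n1, x1 = 289, 45   # encouraged arm: size, platform users
n2, x2 = 274, 41   # control arm: size, platform users

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se  # near zero: no detectable difference in uptake
```

A z-statistic this close to zero means the encouragement cannot serve as an instrument, and the comparison of users to non-users reverts to an uncontrolled observational one.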
What we did see in our data is a possible reminder effect, and a large one. Those who received the encouragement postcard were more likely to appear at their subsequent court events by twelve percentage points. And, perhaps as a product of that appearance, those receiving the postcard were more likely to resolve their cases by twenty-five percentage points. Previous researchers have observed reminder effects, usually with postcards or text messages about hearings, but those have tended to be in the five to eight percentage point range. The fact that we observed larger effects leads us to hypothesize that the effect of combining a reminder with a convenient method of resolution may be larger than the reminder effect alone.
What we think we see in these data is that many people were able to access the ODR platform and did so regardless of encouragement. But, subsequently, those who received the encouragement paid more attention to their cases than those who did not. We cannot be definitive about this finding; it is not the outcome for which we were testing. But it is a hypothesis that emerged and deserves more attention from future research.
The Iowa study differed in goals and design. For the purposes of this evaluation, the Iowa Judicial Branch deployed the ODR platform for a handful of pre-selected traffic infractions. The Branch agreed to randomize neither access to the platform nor encouragement to use it. Instead, it agreed to randomize information about payment plans available via the court system as well as information about what prosecutors ordinarily negotiate. Randomization would occur for litigants who created accounts to use the platform.
In the case of payment plans, that information was not a secret, but neither was it openly available. Usually, a litigant had to affirmatively ask for a payment plan rather than being able to select it as an option.
In the case of reasonable expectations, the idea was that giving the litigant some information about what a prosecutor might offer in a plea could help the litigant decide to pursue negotiation or to resolve the matter more quickly (presumably by paying or by expediting a not guilty plea).
We were not particularly optimistic that randomizing information of this type would produce much of a treatment contrast. As it turned out, the issue was not the treatment contrast but the platform: during the several months of enrollment, almost no one used it. In fact, only one participant successfully made it through the platform over the entire enrollment period. Volume ticked up slightly after enrollment closed, but remained too low for an evaluation.
Lessons Learned and Opportunities
Some themes emerged from both evaluation attempts that we think will be useful to courts as they move to ODR 2.0. We will use this section to highlight some. Not all applied in each jurisdiction; some applied in both.
Informing the User about the Option
It seemed that courts did not make substantial efforts to inform users about the option to use ODR. Neither of our evaluation jurisdictions implemented a program mandating the use of ODR to resolve the selected case types, which likely would have served, de facto, to inform users about the platform. Instead, both made ODR opt-in. Litigants cannot opt into something unless they know it exists. In traffic matters, most courts attempt to alert litigants to the existence of ODR by including text about it on citation forms. In most jurisdictions that we have observed, in this study and others, citations are not a good vehicle for notifying anyone of anything important. They are packed with dense text and numerous unintelligible statutory citations. The notification of the ODR option amounts to a URL that may not intuitively appear connected to the court. And notwithstanding the URL and the option to resolve one’s case without attending a court hearing, the citation lists a court date at which, the citation says, the alleged offender is compelled to appear. It is easy to see how litigants may become confused and/or disregard the ODR option.
Expanding Eligibility Guidelines
In implementing ODR, some courts fear that a deluge of users will flood the platform, making it cumbersome for the court to manage. This fear can result in narrowly drawn eligibility guidelines, which can be a method to thin the herd of users. But attempts to specify who can and cannot use an ODR platform may also confuse potential users, resulting in some eligible users concluding that they cannot use the platform and some ineligible users concluding the opposite. When determining eligibility for ODR, and perhaps for any program, we should consider what will make logical sense to the user. Particularly given the Iowa experience, broad rather than narrow eligibility guidelines may help to increase usage.
Courts often develop processes around the idea that judicial time should not be wasted. The practice discussed above, providing information about ODR in the citation while also scheduling the next hearing, likely exists to preserve judicial time and keep cases moving. Internal deadlines for users to complete ODR likely developed with the same goal in mind. However, these ODR-specific internal deadlines were not, as far as we were able to observe, communicated to the user, and they differed from the live-appearance deadlines. The result is that individuals otherwise eligible to use the platform could not do so because an undisclosed deadline for platform use had passed; after that deadline, the only option was to appear in court to resolve the matter. ODR-specific deadlines should be communicated. Indeed, we wonder what harm there would be in either letting someone resolve their case on the platform right up until the day of the hearing, or scheduling a hearing only after a disclosed deadline passes, with the latter option seeming to accommodate both robust use of the platform and the preservation of judicial time.
Increasing Access to Justice with Simplification and Reminders
Not everything went as we had hoped in these two evaluations. We came away with some thoughts on improving court-installed ODR platforms. We also came away with a hypothesis that could be a boon to the access to justice community: combining tools that are separately thought of as improving access to justice (here, reminders and a tool to ease case resolution) may work better than either alone. The takeaway is that simplified processes coupled with reminders may significantly increase usage of tools designed, we think, to improve access to justice. We look forward to testing, and observing others test, this hypothesis in the future.
Support for this project was provided in part by The Pew Charitable Trusts. The views expressed herein are those of the author(s) and do not necessarily reflect the views of The Pew Charitable Trusts.
There are a handful of empirical studies that investigated user experiences in ODR processes. See Martin Gramatikov & Laura Klaming, Getting Divorced Online: Procedural and Outcome Justice in Online Divorce Mediation, 14 J.L. & Fam. Stud. 97, 117–18 (2012) (finding high levels of satisfaction with online divorce procedures and the quality of outcomes among both male and female divorcees in the Netherlands, although the former focused more on monetary and time costs while the latter focused on negative emotions); Katalien Bollen & Martin Euwema, The Role of Hierarchy in Face-to-Face and E-Supported Mediations: The Use of an Online Intake to Balance the Influence of Hierarchy, 6 Negotiation and Conflict Management Research 4:305–19, 313 (2013) (finding that a hybrid process combining online intake with face-to-face mediation had an equalizing effect on parties’ fairness and satisfaction perceptions in hierarchical labor settings); Marc Mason & Avrom H. Sherr, Evaluation of the Small Claims Online Dispute Resolution Pilot, Institute of Advanced Legal Studies, at 19 (Sept. 1, 2008) (finding a lower settlement rate than offline small claims mediations, as well as problems such as the online system timing out, the registration process, spam filtering, a lack of transparency, and digital access and competency, although the study was limited in scope and had a sample size of only 25 cases in the UK); Laura Klaming, Jelle van Veenen & Ronald Leenes, I Want the Opposite of What You Want: Reducing Fixed-Pie Perceptions in Online Negotiations, 2009 J. Disp. Resol. 139:85–94, 92–93 (finding that providing negotiators with incentives independent from the resources that have to be divided, as well as providing them with information about the opponent’s preferences, led to more agreements); Udechukwu Ojiako et al., An Examination of the ‘Rule of Law’ and ‘Justice’ Implications in Online Dispute Resolution in Construction Projects, 36 International Journal of Project Management 301, 305, 308 (2018) (finding that the ODR process does not affect parties’ satisfaction with the “rule of law” or “justice” in small claims ODR in construction projects, while suggesting further research on the cultural contexts around these concepts). Due to the limited scale on which ODR has been implemented in American courts, there are few independent efforts to quantify the outcomes of ODR initiatives in the public sector. There are some self-reported pre-ODR and post-ODR datasets, mostly compiled by courts and private platforms and unconfirmed by independent research. See Joint Technology Committee Resource Bulletin, Case Studies in ODR for Courts: A View from the Front Lines, 3–18 (2017); Amy Schmitz, Expanding Access to Remedies Through E-Court Initiatives, 67 Buff. L. Rev. 89, 158 (2019); Kevin Bowling, Jennell Challa & Di Graski, Improving Child Support Enforcement Outcomes with Online Dispute Resolution, Trends in State Courts 43–48, 46 (2019); Avital Mentovich, J.J. Prescott & Orna Rabinovich-Einy, Are Litigation Outcome Disparities Inevitable? Courts, Technology, and the Future of Impartiality, 73 Ala. L. Rev. 893 (2020).
 See Joshua D. Angrist, Instrumental Variables Methods in Experimental Criminological Research: What, Why and How, 2 Journal of Experimental Criminology 23–44, 24 (2006) (arguing that randomized studies are considered the gold standard for scientific evidence).
 Conner Mullally, Steve Boucher & Michael Carter, Encouraging Development: Randomized Encouragement Designs in Agriculture, 95 American Journal of Agricultural Economics 5:1352–58, 2 (2013) (defining the encouragement design); Paul J. Ferraro, Counterfactual Thinking and Impact Evaluation in Environmental Policy, Environmental Program and Policy Evaluation: Addressing Methodological Challenges, New Directions for Evaluation 75–84, 80 (2009) (suggesting the encouragement design may be appropriate when randomly restricting access to an intervention cannot be done). One assumes Ferraro means literally cannot, or, in other words, that it would be ethically impermissible, rather than merely a preference not to randomly restrict access. As an aside, Ferraro’s description of the need for rigorous evaluation in environmental policy is directly analogous to the need for the same in the legal sphere. See generally D. James Greiner & Andrea Matthews, Randomized Control Trials in the United States Legal Profession, 12 Annu. Rev. Law Soc. Sci. 295–312 (2016).
 Mullally et al., supra (“Assignment to the ‘encouraged’ group is then used as an instrumental variable in order to estimate the impact of the treatment”).
 For more information about reminder effects, see, e.g., Brian H. Bornstein et al., Reducing Courts’ Failure to Appear Rate: A Procedural Justice Approach (May 2011), https://www.ojp.gov/pdffiles1/nij/grants/234370.pdf (evaluating the effectiveness of different messaging approaches in mailed postcards); Timothy R. Schnacke et al., Increasing Court Appearance Rates and Other Benefits of Live-Caller Telephone Court-Date Reminders: The Jefferson County, Colorado, FTA Pilot Project and Resulting Court Date Notification Program, 393 Ct. Rev.: J. Am. Judges 86 (2012) (evaluating the effectiveness of providing information about the consequences of failing to appear via live calls); Christopher T. Lowenkamp et al., Assessing the Effects of Court Date Notifications within Pretrial Case Processing, 43(2) Am. J. Crim. Just. 167, 173 (2017) (evaluating the effectiveness of different messaging approaches and different methods of delivering notifications); Brice Cooke et al., Using Behavioral Science to Improve Criminal Justice Outcomes: Preventing Failures to Appear in Court (January 2018), https://www.prisonpolicy.org/scans/Using_Behavioral_Science_to_Improve_Crimina_Justice_Outcomes_Cooke_et_al_2018.pdf (evaluating the effectiveness of timing and messaging approaches in text notifications); Stephen H. Taplin et al., Testing Reminder and Motivational Telephone Calls to Increase Screening Mammography: A Randomized Study, 92(3) J. of the Nat’l Cancer Inst. 233 (2000) (finding reminders to be as efficacious as addressing barriers, with phone-call reminders performing better than postcards); Susan Maxwell et al., Effectiveness of Reminder Systems on Appointment Adherence Rates, 12(4) J. of Health Care for the Poor and Underserved 504, 508 (2001) (finding show rates of 49.9% for those who received no reminder as compared to 52.1% for those who received a mailed reminder); Mary Elaine Koren et al., Interventions to Improve Patient Appointments in an Ambulatory Care Facility, 15(4) J. Ambulatory Care Mgmt. 76 (1994) (finding an insignificant difference between types of reminder, phone versus mailing, but finding some reminder to be more effective than no reminder).
 See id.