By Strong Ma, J.D. candidate, Harvard Law School
STUDENT VOICES: The views expressed below are those of the student author and do not necessarily reflect the position of the Access to Justice Lab.

Fears of machines replacing human decision-making have haunted popular fiction—think 2001: A Space Odyssey, I, Robot, even Wall-E. Meanwhile, real-world efforts have grappled with algorithmic moral judgments, such as self-driving cars facing variations of the classic “trolley problem.” These contexts raise a question: How do people feel about the fairness of algorithms making decisions as compared to humans?
Perceptions of algorithmic fairness can have significant consequences for issues in access to justice. In my last blog post, I discussed how narratives around pretrial risk algorithms may have contributed to the failed effort to end cash bail in California. Some of these narratives invoked general fears of a “robopocalypse,” while others focused on concerns about algorithms amplifying racial disparities. But leaving it to humans may not be all that appealing, either—lawyers, judges, and juries demonstrate time and again the concerning biases that plague the legal field.
How we perceive the fairness of algorithmic decision-making thus deserves examination, as it will influence the landscape of possible justice reforms and the messaging that should accompany such reforms. Perhaps law can learn lessons by looking to the experience of another field in which professional decision-making has life-changing consequences—medicine. Surprisingly, such a comparison shows that (1) people seem more willing to accept algorithms in medicine than in law, and (2) perceptions of algorithmic decision-making may be more optimistic than popular media would indicate.
Algorithms in Medicine
A recent New York Times investigation into deviations from organ transplant lists presents a fascinating example. For decades, algorithms developed by medical experts have determined priority in the national organ transplant list. But, as the Times found, many transplant organizations have increasingly been going off-list, giving “open offers” (where a hospital can transplant an organ into any patient) that sometimes skip hundreds of higher-ranked people. Though organizations primarily justify open offers as speedily placing organs that would otherwise expire, the Times observed no decrease in discarded organs as open offers increased. Some critics thus characterized open offers as prioritizing ease and bottom lines over fairness. Moreover, open offers mostly benefited “favored” hospitals and exacerbated racial and gender disparities. One doctor remarked, “We’ve built this system to try to be fair to people, and this just seems so unfair.”
Thus, it seems that for organ transplants, the public and the profession perceived algorithmic decisions to be fairer than human ones. Of course, this phenomenon may alternatively be explained by a perception that it is unfair to deviate from a predetermined system, regardless of whether that system deploys algorithmic or unguided human decisions. Nevertheless, other research has shown that people perceive algorithms in medical decisions at least somewhat favorably:
- A 2023 study of UK respondents found 73% to be no less likely to donate their liver if AI was used to identify the recipient, while 82% thought that the AI was less likely than humans to make biased decisions.
- A 2023 Pew Research survey found that a larger share of United States respondents think AI would make the problem of bias and unfair treatment in medicine better (51%) rather than worse (15%). The rest (33%) thought it would stay about the same.
- A 2020 study in Holland found that respondents perceived AI to be fairer than human experts in deciding whether to provide a special medical treatment.
Even when highlighting the potential for organ allocation algorithms to be biased against minorities, one article argued that AI was more fair than the alternative: “Unfortunately, without algorithms, judgment is left entirely up to bedside clinicians, and patients receiving scarce, needed assistance are often chosen based on the personal, inherent biases of the clinicians involved.” These examples suggest that the public and experts do not categorically view algorithms to be less fair than humans in medical judgments (though, of course, the evidence is not universal).
Algorithms in Law and Other Contexts
In contrast, many studies have shown that people perceive algorithms as less fair and desirable than human decision-makers in legal contexts. One 2018 study found that American respondents strongly viewed algorithms as less fair than humans for bail decisions. In a 2020 study, even the minority of American respondents who found a bail release algorithm fair still preferred a human judge. Another 2022 study found that Germans perceived algorithms to be much less fair for early prison release decisions.
A 2022 article from Professors Derek E. Bambauer and Michael Risch complicates the negative outlook presented above. The article investigated American preferences in choosing between algorithmic and human decision-makers in four contexts: approval of loan applications, entry into a clinical trial for medical therapy, legal liability for a civil traffic offense, and determination of gift card winners. While the study did not ask the 4,000 respondents specifically about the fairness of algorithms, it did find that respondents on average chose the algorithm over the human more than half the time (52.2%). As the stakes of the decision rose, preference for algorithms decreased. Only 50.2% of respondents chose the algorithm in the medical setting, and 44.0% chose algorithms in the civil legal setting. The paper further noted that in a prior pilot study, respondents surprisingly showed no difference in preference for algorithms deciding criminal versus civil liability.
The decrease in preference for AI from the medical to the legal context supports the idea that perceived algorithmic fairness may categorically differ between the two fields. According to Bambauer and Risch, these findings also generally “suggest[] that moral panics over algorithms are overstated . . . in stark contrast to the algorithmic skepticism that dominates media coverage.” Even in determining a hefty traffic fine, nearly half chose algorithms over humans. Other studies offer limited support for this rosier view. For instance, Dutch respondents perceived AI to be fairer than humans in higher-impact scenarios, including in the exercise of prosecutorial discretion.
Conclusions
Systematic literature reviews on the perceived fairness of algorithms show conflicting results that depend on context and the definition of fairness. Many studies also indicate a difference in perceptions between algorithms assisting human decision-makers and algorithms deciding alone. There may also be systematic cultural differences, a caveat worth noting given this post’s reliance on research from different countries. Thus, the picture sketched here—that algorithms are perceived to be fairer in medicine than in law—should be taken with caution. A study more directly comparing reactions to algorithms in medicine and law, while varying factors such as decision type (allocative, diagnostic) and definition of fairness, would fill fascinating gaps in the research. Still, merely hypothesizing a law-medicine distinction raises valuable considerations.
As one systematic review of RCTs in law theorizes, there is likely a background difference in perspective between the medical and legal fields. This difference may stem from the myth that each legal case is irreducibly complex and from a self-aggrandizing veneration of lawyerly judgment, both linked to a prevalent tension in law between individualization and consistency. Another explanation may be that the adversarial nature of many legal contexts lends itself to heightened algorithmic aversion, which researchers have posited stems from a fear of losing control, an illogical unwillingness to forgive algorithmic error, and emotional reactions to certain contexts. Challenging these notions of the law (in comparison to medicine) may go a long way toward bringing a breath of fresh air to how we think about legal reforms—especially algorithmic ones.
At the very least, multiple studies have shown positive public perceptions of the fairness of algorithms, indicating that we should be more optimistic about their reformative use than popular media and pundits may suggest. Regardless of how people feel about the fairness of current algorithms and current humans, consider also how we should feel about their future fairness. For example, algorithmic unfairness may soon be detected and addressed faster than its human counterpart due to large datasets and consistent, malleable processes. Perhaps we are already there. As one commentator puts it, “[B]iased algorithms are easier to fix than biased people.”
If you’re interested in more on this topic, listen to our Proof Over Precedent podcast episode.