By Michael Pusic, J.D. Candidate, Harvard Law School
STUDENT VOICES: The views expressed below are those of the student author and do not necessarily reflect the position of the Access to Justice Lab.

Brazil’s courts are turning to artificial intelligence to address their backlog of more than 100 million cases with systems that prioritize cases, draft judicial opinions, and even propose decisions. But is the cure worse than the disease? While these systems bring much-needed efficiency to Brazil’s courts, they raise due process concerns that merit rigorous evaluation.
Brazil’s Use of AI in the Judiciary
Brazil’s courts face one of the largest backlogs of any judicial system in the world, with nearly 102 million cases awaiting final decisions. Even excluding appeals, it takes an average of 600 days for a case to be resolved—nearly three times as long as the average time to resolve a case in Europe. Justice delayed often means justice denied, as victims of domestic violence await restraining orders, families are destabilized by prolonged custody battles, and tenants face housing insecurity while eviction orders languish in court. To reduce these delays, Brazil’s courts have turned to AI.
AI serves a variety of functions in Brazil’s court system but can broadly be understood within three categories: prioritizing cases, supporting legal research, and drafting documents.
First, courts use AI to classify and prioritize cases. The Federal Supreme Court’s VICTOR system sorts through thousands of appeals to identify those where a decision from the Court would affect a large number of similar cases nationwide and so deserves expedited review. (Brazil has two supreme courts with distinct roles: the Federal Supreme Court (STF) handles constitutional matters, while the Superior Court of Justice (STJ) serves as the final appeal court for non-constitutional federal cases.) At the state level, systems like LARRY in Paraná cluster similar legal demands to streamline case management and reduce redundant analysis. In the states of Minas Gerais and Acre, AI systems match incoming cases with existing precedents to determine which can be easily decided and which require individualized attention. Specialized tax courts use tools like ELIS to screen out cases that a time bar or another clearly applicable statute renders straightforward to decide.
Second, judges and clerks use AI systems to support research by identifying and summarizing relevant precedents. The STJ uses the SOCRATES AI system to review each submission, isolate core legal issues, and recommend pertinent legislative materials and legal precedents that could inform the decision. As mentioned, courts in the states of Minas Gerais and Acre similarly use AI to find relevant precedents. Small claims courts use the Automated Semantic Annotation Pipeline to annotate cases with applicable case law, statutes, and other resources.
Finally, AI systems in Brazil will summarize documents, suggest judicial actions, and even draft opinions. The STJ is updating its SOCRATES system so that it can present “judges […] with all the elements necessary for the judgment of their cases, such as the description of the parties’ theses and the major decisions already taken by the Court concerning the subject of the case.” In the Federal Regional Court of the 1st Region, the Hércules system recommends procedural steps and automates routine actions like issuing summons or requesting documentation. The state of Minas Gerais goes furthest, using the SOFIA system to analyze initial filings and draft preliminary rulings for judge review, particularly in simpler or repetitive cases.
The Risks of (Not) Using AI
The delays in Brazil’s judiciary are untenable and demand technological innovation, but there is much at stake in how AI is adopted.
Without automated prioritization, cases that require urgent attention (e.g., domestic violence, pending eviction orders) will face the same delays as those that do not. But the prioritization of cases is value-laden, and a single algorithmic error could amplify delays for thousands of cases.
Similarly, while it may seem innocuous to have AI systems identify relevant legal precedents, the process of distinguishing cases is nuanced and often outcome-determinative. Cases that seem identical at first glance may differ in their facts, or a party may seek a creative application of precedent. That said, litigants may be willing to accept an AI system that produces a slightly higher rate of imperfect decisions in order to have their cases resolved in a timelier manner. Indeed, this balancing of justice and finality is at the heart of civil procedure in many legal systems.
The stakes are highest when AI proposes judicial actions or drafts opinions. AI systems might be able to recommend decisions that reduce racial and cognitive biases, delivering greater justice and consistency across cases. But this consistency also means algorithmic errors could systematically affect thousands of cases, particularly as judges are prone to rubber-stamp suggested decisions. This raises the question: is a system where diverse and possibly numerous human errors are distributed throughout the system preferable to one where uniform algorithmic errors replicate across thousands or even millions of cases?
To mitigate the risk of such errors, the Brazilian National Council of Justice (CNJ) has passed two resolutions establishing guidelines for the use of AI in courts. The first, passed in 2020, largely focused on data privacy concerns, regulating the collection, storage, and sharing of judicial data. The second, passed in 2021, created a national committee to oversee the use of AI in the judiciary and set forth more substantive requirements for how AI may be used in the courts. For instance, it requires that any AI “reasoning” be clearly explained and justified to the litigants and that systems be evaluated on a periodic basis to ensure their compliance with applicable legal and ethical standards.
Despite the requirement of regular evaluation, both resolutions have been light on enforcement. In part, this is an analytical issue—it is difficult to audit AI systems for compliance with broadly articulated guidelines. But it is also an understudied area; there have been no public reports on the quality of compliance with the 2020 and 2021 resolutions, nor even descriptive statistics on case outcomes before and after courts adopted AI systems.
The Need for Evaluation
While most AI systems were tested prior to adoption, they likely produce unintended consequences when placed into real courtrooms. Brazil should not abandon these tools and revert to extreme delays, but it should look at real-world evidence to understand the effect AI systems are having on various metrics of procedural justice and case outcomes.
Randomized controlled trials offer one method to isolate the causal relationship between AI and due process. A study could randomize which cases receive AI-powered support in annotating legal briefs, identifying relevant precedent, or drafting decisions. One could then compare treatment and control groups in terms of the time it took to resolve the case, whether the decisions were reversed on appeal, the consistency of decisions, the litigants’ sense of procedural justice, and disparate impacts across demographic groups.
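Such a trial is straightforward to sketch. The toy simulation below, using entirely synthetic data and illustrative numbers (a 600-day baseline and a hypothetical 150-day reduction are assumptions, not findings), shows the core mechanics: randomly assign cases to AI support and compare average time to resolution between treatment and control groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 2000
# Randomly assign half of the incoming cases to receive AI-powered support.
treated = rng.permutation(np.repeat([0, 1], n // 2))

# Hypothetical outcome: days to resolution. We ASSUME, purely for
# illustration, a 600-day baseline and a 150-day reduction under treatment.
days = rng.normal(600, 120, n) - 150 * treated

# Because assignment is random, a simple difference in group means is an
# unbiased estimate of the causal effect of AI support.
diff = days[treated == 1].mean() - days[treated == 0].mean()
t_stat, p_value = stats.ttest_ind(days[treated == 1], days[treated == 0])
print(f"estimated effect: {diff:.1f} days (p = {p_value:.3g})")
```

The same design extends to the other outcomes listed above (reversal on appeal, perceived procedural justice, disparate impact) by swapping in the relevant outcome variable.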
Short of a full-fledged randomized controlled trial, the staggered adoption of AI across different courts and jurisdictions creates natural variation that researchers could exploit. A difference-in-differences design would compare changes in outcomes before and after implementation in adopting courts against courts that have not yet adopted these technologies, controlling for court- and period-specific fixed effects.
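The quasi-experimental approach can also be sketched in a few lines. The simulation below is a minimal illustration on synthetic data: courts adopt AI in staggered random periods, and a two-way fixed-effects regression (dummies for each court and period plus an adoption indicator) recovers the assumed effect. The 80-day effect size and all other numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

n_courts, n_periods = 30, 8
true_effect = -80.0  # ASSUMED reduction in days-to-decision after AI adoption

# Staggered adoption: each court switches on AI in a random period;
# values beyond the last period mean the court never adopts (control).
adopt_period = rng.integers(2, n_periods + 3, n_courts)

court = np.repeat(np.arange(n_courts), n_periods)
period = np.tile(np.arange(n_periods), n_courts)
treated = (period >= adopt_period[court]).astype(float)

# Outcome: court fixed effect + common time trend + treatment effect + noise.
court_fe = rng.normal(600, 50, n_courts)
period_fe = np.linspace(0, 40, n_periods)
y = court_fe[court] + period_fe[period] + true_effect * treated \
    + rng.normal(0, 10, court.size)

# Two-way fixed-effects OLS: court dummies absorb the intercept, so one
# period dummy is dropped to avoid perfect collinearity.
X = np.column_stack([
    treated,
    (court[:, None] == np.arange(n_courts)).astype(float),
    (period[:, None] == np.arange(1, n_periods)).astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated adoption effect: {beta[0]:.1f} days (true: {true_effect})")
```

In a real analysis the outcome data would come from court records rather than simulation, and the homogeneous-effect assumption built into this sketch would itself need checking, since staggered-adoption designs can be biased when treatment effects vary over time.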
Brazil stands at a critical juncture. Without AI, millions remain trapped in limbo, waiting years for decisions that might determine their housing, safety, or livelihood. But embracing AI without scrutiny risks replacing one injustice with another—algorithmic errors that could systematically disadvantage entire classes of citizens. The true innovation would be developing a third path: deploying AI while rigorously evaluating its effects, pioneering a model where continuous learning creates technological solutions that are both efficient and just.

