By Elizabeth Guo, J.D. Candidate, Harvard Law School
STUDENT VOICES: The views expressed below are those of the student author and do not necessarily reflect the position of the Access to Justice Lab.

Ten years ago, if you asked a pro se litigant what resources she used, she would probably say one of the following: “Google,” “the court website,” “in-person court clerks and self-help staff,” “friends and family.” She might also tell you that online forms were hard to find, the court’s website was confusing, courthouse staff refused to answer seemingly basic questions, and her friends were unhelpful.
Today, if you asked a pro se litigant that same question, she would likely say “AI,” referring to general-purpose large language models (LLMs) such as ChatGPT, Claude, and Gemini. Over the past year, several headline cases have featured pro se litigants using LLMs to navigate their cases. Sometimes that approach was successful. Other times it ended in court admonishments over hallucination-riddled filings.
This growing AI-as-lawyer phenomenon raises the question: Can general-purpose AI providers such as OpenAI, Anthropic, and Google face liability for “unauthorized practice of law” (UPL)? UPL rules in the U.S. essentially “restrict anyone who is not a lawyer from providing legal assistance.” UPL rules are why courthouse staff and even paralegals are prohibited from giving legal advice (whatever that phrase means). UPL is also not necessarily limited to human non-lawyers. State committees and bar associations have previously pursued UPL theories against software, websites, and apps. There are other types of UPL rules as well, including rules that address lawyers practicing law outside the jurisdictions where they are admitted. This post focuses on the UPL rules governing the practice of law by non-lawyers.
This post is the first of two exploring the status of general-purpose AI providers under current UPL regimes. I will begin by providing background on UPL rules in the U.S. and conclude by explaining why current UPL rules likely permit general-purpose AI. The next post will explore the normative question of whether UPL rules should permit general-purpose AI.
Background on UPL
During the Great Depression, state bar associations began unleashing UPL suits against a range of “group-legal-service providers”—from auto clubs to unions to homeowners’ associations. That campaign gave rise to the modern UPL regime. As Nora Freeman Engstrom and James Stone wrote, in theory “UPL rests on a simple idea: for society’s ‘benefit and protection,’ only qualified and licensed individuals should be permitted to practice law.” In reality, Engstrom and Stone wrote, UPL rules “grew out of the bar’s self-interest.” (Proof Over Precedent readers will recall student Andrew Reed’s podcast interviewing Professors Engstrom and Stone as well as his post focusing on the history of auto clubs.)
All fifty states and D.C. have laws prohibiting UPL, but there is no consensus among states on the definition of “practice of law.” Alaska, for example, has a narrow definition: A non-lawyer must be holding herself out as an authorized attorney to be liable. By contrast, Georgia’s definition is expansive and sweeps in anyone other than a duly licensed attorney. Neil Gorsuch described the various “definitions states have adopted, usually at the behest of local bar associations,” as “often breathtakingly broad and opaque.” In 2002, an American Bar Association taskforce tried to draft a standardized definition of “practice of law” but gave up the next year, suggesting that states resort to “common sense” instead.
UPL enforcement is rare. Both state and private actors (i.e., attorneys and bar associations) can bring civil actions against alleged non-lawyer violators. State actors can also pursue criminal sanctions against violators.
Note the central tension between UPL laws and access to justice. On the one hand, the stated justification for UPL laws is consumer protection. On the other, UPL laws restrict access to justice by denying pro se litigants access to competent non-lawyers. UPL laws also discourage innovation in client-facing legal technology.
UPL Likely Permits General-Purpose AI
Although no court has yet squarely addressed the question, state UPL laws likely allow general-purpose AI at least to serve as a resource for pro se litigants.
To begin, there is the classic distinction between legal advice and legal information. Generally, courts find that giving the former constitutes the practice of law, while giving the latter does not. General-purpose AI providers now position their products as tools that give only legal information—seemingly by shifting the burden to end-users to use those tools appropriately. As of October 29, 2025, OpenAI prohibits the use of its services for “provision of tailored advice that requires a license, such as legal…advice, without appropriate involvement by a licensed professional.” As of September 15, 2025, Anthropic’s usage policy classifies “legal” questions as a “High-Risk Use Case,” meaning that if an end-user uses outputs to advise consumers, a qualified professional must first review the outputs, and the end-user must disclose her AI usage. And as of August 9, 2023, Google has a disclaimer against relying on its generative AI services for legal advice.
Despite this positioning, it is less clear whether these tools reliably avoid crossing into legal advice in practice. On February 15, 2026, I tried asking the latest free version of ChatGPT (GPT 5.2): “I’m involved in a legal dispute. Can you tell me whether to settle or proceed to trial?” ChatGPT responded: “I can give you a structured way to think about it, but I can’t tell you what you should do—that depends heavily on facts, jurisdiction, risk tolerance, and advice from your attorney…If you’d like, tell me: Is this civil or criminal? Roughly how much money is at stake? How far along is the case? What has your attorney said about your odds? I can then help you think through it more concretely.” Is that information or advice?
Geography matters. As noted above, Alaska has a narrow definition of UPL with a holding-out requirement, so under Alaska law, an end-user policy or disclaimer might just do the trick. Other states build in some version of a holding-out requirement specifically when it comes to websites and software. For example, Texas law states that the “practice of law” does not include the design or creation of a website or software “if the products clearly and conspicuously state that the products are not a substitute for the advice of an attorney.” Under regimes like that of Texas, AI providers may escape liability if they include such a statement (and are treated like “website or software” creators).
There are other reasons to think general-purpose AI providers will survive UPL rules. One is that LLMs lack the hallmarks of a traditional attorney-client relationship: There are no engagement letters or conflict checks involved, and there is likely no attorney-client privilege. Another is that pre-LLM legal technologies have survived UPL challenges before. As Ed Walters wrote, eDiscovery software arguably displaced attorney judgment on document responsiveness and privilege, yet no UPL backlash ensued. The North Carolina bar and online document-preparation service LegalZoom battled over UPL for years. LegalZoom survived, too. In 2015, it agreed to disclose that its forms were not substitutes for attorneys and to have lawyers review the blank templates it offered. The bar, in exchange, agreed to support a law that ultimately excluded certain interactive document software from North Carolina’s definition of “practice of law.”
Perhaps ironically, UPL risk seems greatest when legal tech involves interaction with a human non-lawyer. By eschewing any human involvement, general-purpose AI tools avoid this problem. Other forms of legal tech have been less lucky. In 2011, a Missouri court denied summary judgment for LegalZoom on the issue of UPL, in part pointing to the role of human non-lawyers in finalizing the legal documents. In 2025, the Second Circuit vacated a free speech ruling for Upsolve, a legal advice nonprofit that uses both software and trained non-lawyer volunteers. LLMs, by contrast, do not insert human intermediaries into users’ legal inquiries.
AI providers may also have several (not yet tested) arguments available if faced with UPL enforcement. First, they may argue that UPL enforcement violates their First Amendment rights—though the Second Circuit’s ruling in the Upsolve litigation shows the uncertainty of this path. Second, AI providers may assert antitrust theories against state bars for monopolizing legal service provision. LegalZoom made this argument against the North Carolina bar, and in a 2023 letter the DOJ Antitrust Division signaled receptivity to the idea. Third, some scholars have suggested that UPL enforcement restricts AI providers’ due process rights to occupational freedom. Finally, AI providers may argue that UPL laws are unconstitutionally vague.
In sum, general-purpose AI providers will probably survive today’s morass of UPL laws.
The final question for this post is whether there is a difference between general-purpose AI and specialized “legal AI” tools. There are several categories of legal AI tools. For one, legal chatbots developed by courts themselves should face no difficulty with UPL rules. Meanwhile, commercial tools like Harvey and Legora are meant as resources for practicing lawyers, so non-lawyer UPL rules would likely not apply.
It is the commercial tools aimed at end users—tools such as DoNotPay, a self-described AI “robot lawyer”—that risk running afoul of non-lawyer UPL rules, likely more so than general-purpose AI tools. General-purpose AI providers can plausibly argue that they offer merely general-information tools. General-purpose AI tools also are not designed to prepare legal documents in the way legal AI tools are, and document preparation is explicitly named as a hallmark of legal practice in some state UPL laws. Indeed, the Colorado Bar Association has noted the distinction between general-purpose and specialized legal AI tools and observed that “general AI tools may not raise UPL concerns.”
If you’re interested in more on this topic, listen to our Proof Over Precedent podcast episode.

