By Eric Krebs, J.D. candidate, Harvard Law School

The Fourth Amendment to the Constitution prescribes—in very broad strokes—a basic form of police procedure that governs searches and seizures in the United States. Searches and seizures by police (with exceptions) require warrants. Warrants require a demonstration of “probable cause, supported by Oath or affirmation.” Police make these oaths and affirmations to judges, who review them for their reasonableness and approve or deny such searches and seizures.
“The effect of the Fourth Amendment” and the procedure it prescribes, the Supreme Court confidently remarked over a century ago, “is to put … officials, in the exercise of their power and authority, under limitations and restraints,” thereby “forever secur[ing] the people, their persons, houses, papers, and effects, against all unreasonable searches and seizures under the guise of the law.”
But is that actually true? Does the procedure prescribed by the Fourth Amendment actually achieve its promised effect? In the latest episode of Proof over Precedent, host Jim Greiner sat down with three scholars—Miguel de Figueiredo, Brett Hashimoto, and Dane Thorley—to discuss how their research sheds new light on whether the submission and review of warrants is a meaningful check on police power or a system of rubber stamps.
Together, de Figueiredo, Hashimoto, and Thorley are the authors of “Unwarranted Warrants? An Empirical Analysis of Judicial Review in Search and Seizure,” published this past June in the Harvard Law Review. The study—the largest empirical analysis of how judicial review of warrants works in practice ever published—examined 33,465 search-warrant applications submitted through Utah’s electronic warrant (“e-Warrants”) system between 2017 and 2020. The idea for the study came after de Figueiredo, Professor of Law and Terry J. Tondro Research Scholar at the University of Connecticut School of Law, saw an article in the Salt Lake Tribune analyzing a month’s worth of warrants. He contacted Thorley, who recalls thinking, “There’s no way that data exists, and there’s no way that it’s publicly available. It turns out it [did] exist, and it was publicly available.”
The authors gained access to a trove of information, including the content of the warrant requests, whether they were accepted or rejected, the judge who reviewed the warrant, and how much time elapsed between the judge opening the request for the first time and their decision. Making sense of the data was no easy task. Co-author Brett Hashimoto, a legal linguist and Assistant Professor at the Brigham Young University College of Humanities, led the team in coding the data and employing Large Language Models to recognize patterns like boilerplate language, re-submissions and edits, and relationships between warrants—for instance, multiple warrants all related to the same investigation.
The authors noted the complexities inherent in analyzing such data: it’s not obvious what the approval rate should be, or how long judges should be spending reading each warrant. But the data suggested some troubling patterns.
All in all, the authors found that 98% of warrant requests were approved, with over 93% approved on their first submission. Review times were extremely short: the median was about three minutes, and for approved warrants it was even shorter, around 2 minutes and 50 seconds. Moreover, one in ten warrants was decided in one minute or less.
The median affidavit is just under 1,000 words—roughly three to four double-spaced pages. Many are far longer. Even using extremely generous assumptions about reading speed—up to 650 words per minute, well beyond normal reading or even skimming rates—a substantial fraction of warrants simply could not have been fully read in the time recorded. “For an adult reader, 650 words per minute would be flying through a document—skimming,” says Hashimoto. The authors plotted word counts against review times and showed that large numbers of warrants fall below even the most permissive “possible reading” thresholds. This still held true even after the authors made their estimates more conservative by removing boilerplate warrants like DUI searches from the analysis. “We still see huge portions that are going unread, even assuming these very conservative reading speeds,” says de Figueiredo.
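The authors’ threshold logic can be sketched numerically: even at a generous reading speed, a given word count implies a minimum possible review time, and any recorded time below that minimum means the affidavit could not have been read in full. A minimal illustration of that calculation (the 650-words-per-minute ceiling comes from the study; the sample word counts and review times below are hypothetical, not the authors’ data):

```python
# Sketch of the "possible reading" threshold described in the study.
# The 650 wpm ceiling is the article's generous upper bound on reading
# speed; the example warrants below are hypothetical, for illustration.

GENEROUS_WPM = 650  # words per minute, well beyond normal reading or skimming


def min_read_minutes(word_count: int, wpm: int = GENEROUS_WPM) -> float:
    """Fastest possible complete read of an affidavit, in minutes."""
    return word_count / wpm


def could_have_been_read(word_count: int, review_minutes: float) -> bool:
    """True if the recorded review time allows a full read at the ceiling."""
    return review_minutes >= min_read_minutes(word_count)


# A ~1,000-word median affidavit needs at least 1000/650 ≈ 1.54 minutes,
# so a three-minute review clears the threshold but a one-minute review
# cannot possibly have involved a complete read.
print(could_have_been_read(1000, 3.0))  # True
print(could_have_been_read(1000, 1.0))  # False
```

Plotting each warrant’s word count against its recorded review time, as the authors did, amounts to applying this check across the whole dataset and counting the points that fall below the line.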
Moreover, the pattern of review varied greatly from judge to judge. Some judges approved fewer than 80% of affidavits, while others approved nearly 100%. In analyzing these differences, the authors also noted other narrative-challenging patterns in the data. For instance, while the authors had predicted the opposite, judges who were former prosecutors tended to be slightly stricter and to spend more time reviewing warrant applications than those with prior experience as defense attorneys. Judges with criminal defense experience had median review times about two minutes shorter than those without.
The authors emphasized that their study is not experimental, and their observations are not conclusive on the efficacy of warrant review procedures as a whole. But their data-first approach sheds unprecedented light on how a system so integral to the functioning of American policing actually works. “There are few topics that have been written on more in law reviews than search and seizure,” says Thorley, Professor of Law at Brigham Young University Law School. “Given how much airtime this takes up in legal scholarship and in court cases, it’s really incredible that we have as little information on the process as we currently do.”
If you’re interested in more on this topic, listen to our Proof over Precedent podcast episode.

