
Charlotte Alexander Uses NSF Grants to Create an AI-Powered, Publicly Accessible Court Data Platform

Charlotte Alexander, professor of Law and Ethics, is working with a National Science Foundation grant to centralize U.S. federal and state court data for public access using AI and language models.

Charlotte Alexander, professor of Law and Ethics

Imagine accessing court documents and data, both civil and criminal, in the state of Georgia through a free central repository. Now imagine this access across the entire U.S. court system.

Charlotte Alexander, professor of Law and Ethics at the Georgia Tech Scheller College of Business, is working on a project that uses AI to mine the text of court records. Her work includes pulling key pieces of information out of court documents and making that information freely available to attorneys, judges, prosecutors, criminal defendants, civil litigants, journalists, policymakers, researchers, and any member of the public.

Currently, court records are stored in systems that are expensive, fragmented, outdated, and hard to navigate. Alexander sees the lack of good data as a key problem impeding court reform efforts. Better data, she says, "would shed light on questions around efficiency and time of action, how long things take, and why there are delays. But it also raises big, heavy, substantive questions about bias and who wins and who loses. Does our legal system actually deliver justice, and if so, to whom?"

Her work, funded primarily through National Science Foundation (NSF) grants, is multi-faceted. She and a team of researchers received an initial grant from the NSF’s Convergence Accelerator Project, which was designed to fund efforts to create new sources of data and then make that data publicly available.

Working on the Federal Level

This initial work with colleagues at Georgia State University, Northwestern University, the University of Richmond, and the University of Texas at Austin focused on the federal courts.

"When we started all of this on the federal level, we assembled court records from two full years of all federal cases filed, so everything filed in 2016 and 2017, we downloaded four years later. So, by 2020 and 2021, most of those cases had concluded. Now, we have this big snapshot of federal litigation, including comprehensive data on the progress, pathways, and outcomes of cases that we built using machine and deep learning tools on all those documents," said Alexander.

Alexander offered a small glimpse into how this system might improve court operations. When plaintiffs file a civil case in federal court, they are responsible for a $400 filing fee. The fee can be waived, but fee waiver decisions are made by individual judges, each of whom develops their own separate set of rules.

The data the research team extracted from court records showed that some judges granted more than eighty percent of waiver requests, whereas others granted fewer than twenty percent (https://www.science.org/doi/10.1126/science.aba6914).

In other words, whether a litigant received a fee waiver depended on the luck of the draw – on the judge to whom the case was randomly assigned. This analysis has prompted courts to reconsider their fee waiver procedures to ensure greater consistency.
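The aggregation behind that finding is conceptually simple once waiver requests and their outcomes have been pulled out of docket text. The Python sketch below illustrates only that counting step, not the team's actual pipeline; the record fields and judge names are hypothetical.

```python
from collections import defaultdict

# Hypothetical extracted records: one dict per fee waiver (in forma pauperis) request.
# Field names are illustrative, not the project's actual schema.
waiver_requests = [
    {"judge": "Judge A", "granted": True},
    {"judge": "Judge A", "granted": True},
    {"judge": "Judge B", "granted": False},
    {"judge": "Judge B", "granted": True},
]

# Tally grants and totals per judge.
counts = defaultdict(lambda: {"granted": 0, "total": 0})
for req in waiver_requests:
    counts[req["judge"]]["total"] += 1
    if req["granted"]:
        counts[req["judge"]]["granted"] += 1

# Report each judge's grant rate; wide spreads (e.g., 80% vs. 20%) signal inconsistency.
for judge, c in sorted(counts.items()):
    rate = c["granted"] / c["total"]
    print(f"{judge}: {rate:.0%} of {c['total']} requests granted")
```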

"We found in our conversations with judges that there's a lot of appetite for this type of system-level knowledge. And by that, I mean, 'I know how I manage the cases in my courtroom, but I don't really have a good way to know how other judges handle similar cases,'" she said.

Working on the State Level

Fast forward a few years, and Alexander is currently working to extend her work beyond the federal courts with funding from the NSF’s Prototype Open Knowledge Network (Proto-OKN) program, which supports the development of "an interconnected network of knowledge graphs supporting a very broad range of application domains."

"We've got all this data that we generated, and now we want to flesh it out further, and then feed it into this larger technical apparatus that the NSF is helping fund, which is the knowledge graph infrastructure," she said. "The NSF wants to map different pockets of knowledge so we might connect, for example, census tract level poverty data to different measures of economic development and economic activity to court data using the concept of a knowledge graph to organize all of these nodes."

Alexander and her collaborators received a $1.5 million grant to continue their work on court data access, but this time, on the state level. They are particularly interested in criminal case data from the state courts because, as she puts it, "most criminal prosecutions in the U.S. happen at the state level, not the federal level."

They're focusing on two initial sites: Georgia, beginning with Fulton and Clayton Counties, and Washington State. Using their experience in these two states, they hope to add data from other states and eventually build out a full picture of both criminal and civil litigation on both the state and federal levels.

AI and Machine Learning

With AI and machine learning, Alexander and her colleagues can extract and analyze results from their data far more quickly than would have been possible even five years ago.

"In any case, civil or criminal, in either state or federal court, the court generates a docket sheet, which is a chronological list of events in the case. Descriptions can be very different using very different language, even if they're talking about the same underlying event,” she explained. “This variation in how court events are recorded makes it difficult to get a system-level view. So, we've used AI, particularly deep learning using large language models to train a model or a set of models to recognize all the different ways litigation events show up.”

Because her research spans many disciplines, she plans to work with collaborators across Tech. She sees value in bringing in students from the Scheller College of Business and other schools, including the College of Computing and the Ivan Allen College of Liberal Arts, as well as through the Vertically Integrated Projects program.

"If we solve the data problem, we're better equipped to attack the procedural and substantive problems around how the courts actually operate. What's exciting is the methodological advances in computer science and natural language processing that have cracked wide open the types of questions that are now answerable, which then allows us to change society for the better," said Alexander.

During the Fall 2023 semester, Alexander is in Santo Domingo, Dominican Republic, on a Fulbright scholarship through December, studying the country's digital transformation efforts within the court system and exploring how data can be used to diagnose problems and create more efficiency and transparency.

"A court is an organization and systems-level, organizational thinking about courts is not confined to the U.S. We can start to draw connections and collaborations across international boundaries, which I think is pretty exciting," she said.

