Yale Law School Shapes the Future of Artificial Intelligence

In her Technology in the Practice of Law class, Professor Femi Cadmus has students experiment with AI-driven platforms as well as virtual reality headsets and other tools.

“At Yale Law School we don’t just teach students the law, we teach students how to teach artificial intelligence models the law,” said Scott Shapiro ’90, Charles F. Southmayd Professor of Law and Professor of Philosophy.

Shapiro’s students are building an AI model for use in media law with the DocProject, a program of the Media Freedom and Information Access (MFIA) clinic that provides pro bono legal representation for documentary filmmakers. 

Shapiro teaches courses on the philosophy of law, cybersecurity, and AI. With support from The Tsai Leadership Program, he plans to lead an AI lab in which students, programmers, and computer scientists will train “jurisprudentially responsible” AI models for use in legal clinics.

“One of the things people always say with AI is that data is sovereign and it’s hard to get good data. Our students produce incredibly high-quality data that gets thrown away. We’re trying to figure out how to recycle it and use it to train models,” said Shapiro. “What if we could take this data and use it to handle more documentaries — because each student is building on the work previous students have done?”
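
What might that recycling look like in practice? A minimal sketch, assuming clinic matters are stored as JSON records with hypothetical "facts" and "reviewed_memo" fields, would package each matter as a prompt/response pair for supervised fine-tuning:

```python
# Hypothetical sketch of recycling clinic work product as fine-tuning data.
# File layout, field names, and the JSONL format are assumptions for
# illustration, not the lab's actual pipeline.
import json
from pathlib import Path

def build_examples(matters_dir: str, out_path: str) -> int:
    """Convert saved clinic matters into prompt/response training pairs."""
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for matter in Path(matters_dir).glob("*.json"):
            record = json.loads(matter.read_text(encoding="utf-8"))
            # Assumed fields: the facts a student was given, and the memo
            # the student produced after clinical supervision.
            example = {
                "prompt": f"Facts: {record['facts']}\n\nAdvise the filmmaker:",
                "response": record["reviewed_memo"],
            }
            out.write(json.dumps(example) + "\n")
            count += 1
    return count

if __name__ == "__main__":
    n = build_examples("clinic_matters", "train.jsonl")
    print(f"wrote {n} fine-tuning examples")
```

The point of the sketch is the shape of the pipeline rather than its specifics: supervised work product goes in, training pairs come out, and each cohort's output becomes data for the next model.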

AI poses risks for lawyers and the legal profession — including “privacy and cybersecurity risks, the generation of inaccurate content, copyright infringement, and other intellectual property issues,” as noted by the office of the president of the American Bar Association.

But it also represents tremendous opportunity.

Long before ChatGPT became a household name, Yale Law faculty were immersed in learning about legal pathways to regulating AI — as well as the technology’s potential to introduce efficiencies in legal education and research and widen access to legal services.

Jack Balkin, Knight Professor of Constitutional Law and the First Amendment and founder and director of the Information Society Project (ISP), has been working on issues surrounding digital technology since the 1990s. He points out that the way people talk about AI now echoes the way they talked about the internet during its infancy. Balkin does not consider AI an “existential risk … [although] it’s not surprising that it’s being treated that way because of the great uncertainty surrounding it,” he said. When the internet was born, “nobody could clearly see all of its potentials and dangers.” This is true of AI, too, he said.

In his courses, Professor Scott Shapiro takes a hands-on approach to using AI in legal education to better comprehend how it intersects with law.

But under the leadership of Dean Heather K. Gerken, Yale Law School has created physical and virtual space to explore the possibilities of AI for the legal profession, said Shapiro. 

The Tsai Leadership Program is poised to take a leading role in AI at the Law School — hosting visits from leading AI experts, supporting faculty-led ventures, and enhancing the curriculum.

For Shapiro, it’s very good news.

“Everyone is focusing on the bad things. [But] being able to service low-income households and clinics so they could handle more clients — that’s intellectually exciting and challenging. That’s what motivates academics and scholars to solve problems people have always dreamed of solving,” he said.

Current Approaches to AI

As AI technology has continued to evolve, so have Yale Law School’s educational offerings. In a given week, students might attend a workshop on AI or seek library assistance with an AI product.

The discussions on AI are as interdisciplinary as the Law School itself. At the Solomon Center for Health Law and Policy at Yale Law School, a conversation that began with a groundbreaking conference in 2018 has continued to spotlight legal, ethical, and equity issues surrounding AI in healthcare through panel events and faculty research.

Technology and Research Librarian Nor Ortiz shows the excitement of trying out virtual reality headsets.

Several classes at the Law School dig into problems posed by AI in different legal contexts. “Liability and Regulation at the Frontier of AI Development,” taught by Associate Professor of Law Ketan Ramakrishnan ’21, considers regulatory licensing and tort liability rules for harms caused by AI. In “Artificial Intelligence, the Legal Profession, and Procedure,” a seminar led by Alexander M. Bickel Professor of Public Law William Eskridge ’78, students consider whether AI is on course to automate legal procedure.

In the MFIA clinic, which Balkin founded and co-directs, students work on matters related to technology accountability and competition, participate in impact litigation, shape policy, and contribute to conversations on safe technology and the health of digital markets.

In 2021, MFIA began hosting the Tech Accountability & Competition project at the Law School with faculty supervision from Visiting Clinical Lecturer in Law David Dinielli. The project is dedicated to reducing harms caused by excessive use of power in digital marketplaces. 

In Shapiro’s AI classes and clinics, cross-disciplinary partnerships add depth to the subject. In 2016, Shapiro partnered with Gerard C. and Bernice Latrobe Smith Professor of International Law Oona Hathaway ’97 and Professor Joan Feigenbaum, chair of the Computer Science Department at Yale, on a cross-disciplinary “Cyber Conflict” course. Shapiro later teamed up with Sean O’Brien, Lecturer in Law and the founder of the Privacy Lab initiative at ISP, to teach the first iteration of his Cybersecurity course (which is now available online, hosted by Lawfare) — in which students learned to hack, so as to understand how to approach cybercrime in their practice.

In 2022, Shapiro and his collaborator, Yale Associate Professor of Computer Science Ruzica Piskac, won an Amazon Research Award for their proposal, “Formalizing FISA: using automated reasoning to formalize legal reasoning.” The award became a Yale College course entitled “Law, Logic and Security,” offered in fall 2022. Shapiro audited Piskac’s course on software verification, and the two “learned each other’s languages,” he said. “We prize interdisciplinarity at the Law School a great deal, but this was truly interdisciplinary in a very deep sense.”
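
The proposal’s details aren’t described here, but the basic move of automated legal reasoning can be illustrated with an SMT solver. The sketch below uses the Z3 solver’s Python bindings (pip install z3-solver) to encode an invented, radically simplified FISA-style rule, then asks whether a scenario that breaks the rule is logically possible; the rule and variable names are assumptions for illustration, not the course’s actual formalization:

```python
from z3 import Bool, Implies, And, Not, Solver, sat

us_person   = Bool("target_is_us_person")
court_order = Bool("fisa_court_order")
surveil     = Bool("surveillance_authorized")

# Toy rule: surveillance of a U.S. person requires a court order.
rule = Implies(And(surveil, us_person), court_order)

s = Solver()
s.add(rule)
# Scenario to test: surveillance of a U.S. person with no court order.
s.add(surveil, us_person, Not(court_order))
# unsat means the rule makes the scenario logically impossible.
print("scenario permitted" if s.check() == sat else "rule forbids this scenario")
```

Encoding statutes this way is what lets a solver, rather than a human reader, check whether a given set of facts can coexist with a legal rule.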

Lillian Goldman Law Library leadership and research instruction librarians have also taken a proactive approach to AI.

Femi Cadmus, Law Librarian and Professor of Law at the Law School, teaches a course called “Technology in the Practice of Law,” in which students experiment with AI-driven platforms like Lexis+ AI, Kira, and Relativity, as well as virtual reality headsets and other tools.

“You can’t teach every possible technology, but you can teach approaches to critically evaluating and assessing technology, [and] you can give them a framework so that when they’re entering a situation using technology they’re asking the right questions,” said Cadmus. 

In one class, she said, a student asked how lawyers using AI can be sure they are safeguarding the client’s data, privacy, and confidentiality. That was the right question, Cadmus said. “You have to check — is it secure? Where is the data coming from? Is it clean? Has it been reviewed?

“What I want them to understand is that technology is great, but it’s prone to misuse by bad actors,” Cadmus said.

This is not Jason Eiseman, Director of Library Technology and Planning at the Lillian Goldman Law Library, but a still from an AI-generated video of Eiseman. Eiseman and Nor Ortiz often begin their Practical AI workshops with AI-generated video introductions to illustrate the ease with which it’s possible to obtain a convincing deepfake — and to show what the technology can do.

Jason Eiseman, Director of Library Technology and Planning, and Nor Ortiz, Technology and Research Librarian, offer regular workshops on “Practical AI” to Law School faculty, staff, and students. 

Eiseman says that regulatory frameworks can’t match the pace of technological development for AI. “The industry is moving so much faster and so much further that there is no getting your arms around it,” he said. “But just because you’re playing catchup doesn’t mean you can’t also take a leadership role. For me, that’s going to mean taking a 30,000-foot view of where things are headed and trying to build education, services, and outreach around these technologies and tools.”

At one faculty workshop this spring, Eiseman and Ortiz outlined the difference between “artificial narrow intelligence” and “artificial general intelligence,” suggested AI tools for use in empirical research and transcription, and answered faculty questions on prompts.

Concerns about the use of AI in legal research often center on the possibility that AI will “hallucinate,” or generate false information. Ortiz said that while AI programs can make hallucinations very unlikely, the risk “cannot be brought down to zero.” That’s one reason why oversight is always important in a legal context, Ortiz and Eiseman cautioned.
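
One concrete form that oversight can take is mechanically verifying whatever is checkable, starting with citations. The sketch below is a hedged illustration rather than a real citator: the regex covers only one reporter format, and the "trusted index" is a stand-in for a genuine citation database:

```python
import re

# Stand-in for a real citation database; entries invented for illustration.
TRUSTED_INDEX = {"403 U.S. 713", "376 U.S. 254"}

def unverified_citations(draft: str) -> list[str]:
    """Return U.S. Reports citations in the draft not found in the index."""
    cites = re.findall(r"\d{1,4} U\.S\. \d{1,4}", draft)
    return [c for c in cites if c not in TRUSTED_INDEX]

draft = ("See New York Times Co. v. United States, 403 U.S. 713 (1971), "
         "and Smith v. Jones, 999 U.S. 111 (2023).")
for cite in unverified_citations(draft):
    print("flag for human review:", cite)  # catches the invented citation
```

A filter like this cannot judge whether a real case is cited for the right proposition; that judgment remains the human reviewer’s job.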

Since 2016, the Law School has hosted dozens of speaking events, colloquiums, and conferences on AI. This spring, ISP will host the “Propaganda and Emerging Technologies” conference, with speakers presenting on the role of generative AI in shaping political discourse, AI election harms, and other questions of AI and democracy. A conference on “The Normative Philosophy of Computing,” to be held in fall 2024, is in the planning stages.

Shapiro and his AI lab recycle student data, like this prompt text, to train models for use in legal clinics.

Shaping the Future of AI

Robert C. Post ’77 is Sterling Professor of Law and an expert in constitutional law. During a conversation with Dean Gerken on the Inside Yale Law School podcast this spring, Post noted that AI poses critical implications for the First Amendment. 

“The internet is going to be governed by AI — and the issue will be how you politically legitimate the operation of an AI. And that seems to me the fundamental legal question/political question here,” Post said. “How do you begin to use AI in those fields of communication, which are now dominating the planet? We need to have the equivalent of governance. And right now, we don’t.”

In New Haven, faculty are thinking deeply about that question, and their research is shaping the future of AI, even as AI tools increasingly help them do the work.

Balkin’s notion of “digital information fiduciaries,” which he proposed in 2014, is often cited in discussions of AI governance. A fiduciary is a person or entity that has a relationship of trust with a beneficiary, and who manages something valuable on the beneficiary’s behalf. “The law should treat digital companies that collect and use end-user data according to fiduciary principles,” Balkin wrote in The Harvard Law Review in 2020. This would be one way to protect people rendered vulnerable to the asymmetries of power created by new digital technologies, he said.

The real Jason Eiseman, left, co-led a faculty workshop titled “Practical AI” this February, where he and Nor Ortiz presented an overview of AI tools designed for use in law and answered questions about the use of AI in legal research.

AI isn’t going anywhere, which makes proactive thinking about its implications critical at every level of legal education and the law writ large.

“A decade from now, there will be significant integration of AI into the scholarly agendas of our faculty and into the everyday life of the Law School,” Balkin predicted. “Conversely, what we do at YLS will affect AI as well. YLS scholars will likely be on the front lines of developing legal solutions for the regulation of AI and related technologies, as well as adapting older doctrines and legal structures to account for AI. They will pioneer new uses of AI in legal research, using AI to ask new kinds of questions.”

They’ll also create new ways of using AI in legal education, he added.

At ISP, fellows are devoted to many areas of technology. Some are focused on algorithmic decision-making; some are focused on social media and platforms. Over time, Balkin said, the two categories have merged, meaning that almost all ISP fellows are working on AI to a greater or lesser degree, and their scholarship is helping push the field forward.

In the summer of 2025, with support from The Tsai Leadership Program, ISP will welcome a cohort of postgraduate fellows with a special focus on AI.

For Shapiro, taking a hands-on approach to the technology is the best way to comprehend how it will intersect with law. Shapiro hopes that by next year he and Piskac will teach a new class called “Law and Large Language Models,” in which students will compete to build the best models — issue-spotting models, for example, or ones that write briefs and memos. 
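
What counts as an issue-spotting model can start very simply. As a purely hypothetical baseline, before any large language model enters the picture, a student team might begin with a keyword-to-issue lookup like this; the issues and trigger phrases are invented for illustration:

```python
# Hypothetical baseline issue spotter: maps trigger phrases in a fact
# pattern to candidate legal issues. Everything here is invented.
FACT_TRIGGERS = {
    "defamation": ["false statement", "reputation"],
    "copyright": ["copied", "footage", "license"],
    "privacy": ["recorded", "consent"],
}

def spot_issues(facts: str) -> list[str]:
    """Return candidate issues whose trigger phrases appear in the facts."""
    text = facts.lower()
    return [issue for issue, triggers in FACT_TRIGGERS.items()
            if any(trigger in text for trigger in triggers)]

print(spot_issues("The film copied drone footage recorded without consent."))
# -> ['copyright', 'privacy']
```

A competition like the one Shapiro describes would presumably measure how far a trained model can outperform a crude baseline of this kind.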

“Imagine if you went to law school and everyone said, ‘law is important,’ but didn’t teach you anything about law,” he said. “The situation feels similar for AI. We’re talking about it and not actually doing it. What we offer is the opportunity to actually build AI-based tools for legal reasoning. Our research and the new lab are designed to see how we can leverage this new and extremely exciting technology to help fill gaps in access to legal services and prepare students for the transformation of legal practice.”

As Shapiro goes deeper down the rabbit hole of what AI will mean for law, and vice versa, his research is literally keeping him up at night. Not because he’s worried about AI — but because he’s excited about what the future holds.

“It’s intoxicating,” he said. “My job is to teach, and I want to teach the models the best that I can, and teach the students how to do it. That’s about as good as it gets.”