By now, the story of two New York attorneys facing scrutiny for citing nonexistent cases generated by the artificial intelligence (“AI”) tool ChatGPT has made national (and international) headlines. Late last month, a federal judge in the Southern District of New York sanctioned the attorneys and their firm, ordering them to pay a $5,000 penalty. The court’s decision (Roberto Mata v. Avianca, Inc., No. 22-cv-1461-PKC (S.D.N.Y. June 22, 2023) (ECF No. 54)) is a humbling reminder of both an attorney’s responsibility to ensure the accuracy of his or her filings and the limits of certain technologies in the legal profession.

The primary focus of the court’s decision was the attorneys’ conduct, not an attack on the use of AI technology in general. As the court explained at the outset: “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” Id. at 1. The court explained that the attorneys’ citation of fake cases generated by ChatGPT was made worse because they “continued to stand by the fake opinions after judicial orders called their existence into question.” Id. Indeed, the court found that the attorneys acted in bad faith and violated their Rule 11 obligations by, among other things, continuing to advocate for the fake cases and legal arguments premised on those cases, even after being informed by their adversary and the court that the cases could not be located. Id. at 29.

The court’s decision not only reaffirms the duties attorneys owe to the court, their adversaries, and their clients (id. at 1-2); it also provides a real-world example of the current limits of AI in conducting legal research. The court described the fake decisions generated by ChatGPT in detail, stating that while they “have some traits that are superficially consistent with actual judicial decisions,” the legal analysis in one of them was akin to “gibberish,” its procedural history “borders on nonsensical,” and the decision “abruptly ends without a conclusion.” Id. at 10-11. When one of the attorneys who cited these fake cases questioned the AI tool about their legitimacy, ChatGPT maintained that the cases were all real and could be found in legal research databases such as Westlaw and LexisNexis. Id. at 41-43. The AI tool was simply wrong. This unfortunate episode confirms that when it comes to legal research, analysis, and advocacy (all key aspects of an attorney’s job), there is still no substitute for human participation and involvement.

This is not to discount the widespread enthusiasm for AI in recent years. In fact, routine use of certain AI technology is now commonplace for many lawyers (think predictive coding in eDiscovery). ChatGPT, however, was launched less than a year ago, and already there has been talk of it someday making lawyers “obsolete.” As the Avianca episode demonstrates, that day has not arrived. Indeed, during the March 2023 Legalweek conference in New York, ChatGPT was reported to suffer from “hallucinations,” meaning that “sometimes the technology ‘predicts’ facts that have no actual basis in reality.” Beyond these “hallucinations,” some law firms are concerned that use of ChatGPT risks exposing confidential client information, with one law firm banning in-office use of ChatGPT entirely. Given the serious concerns ChatGPT has raised in its short existence, the likely path forward for many law firms will involve cautiously balancing the benefits that ChatGPT (and other AI tools) can offer against the risks of relying on such tools.
