This article was written with the assistance of Artificial Intelligence. Sort of.
When I was asked to write an article about the potential benefits and pitfalls of the use of Artificial Intelligence (A.I.) by legal professionals in preparing legal documents, I wanted to give A.I. a shot at it first. I opened my browser to ChatGPT and at the prompt I typed “Write a short article about the use of artificial intelligence by lawyers in drafting legal documents.” Less than 30 seconds later I was presented with a grammatically flawless page-and-a-half article setting forth the wonderful efficiencies and time savings that attorneys will achieve through A.I. in the future, “empowering lawyers to focus on the strategic and nuanced aspects of their work.”
That all sounded great, but I was puzzled by the lack of any mention of the risks and real-world consequences of careless use of A.I. technology, especially given the ethical obligations we as attorneys owe to the courts and to our clients. Most attorneys are familiar with the sanctions imposed last year on New York attorneys Steven Schwartz and Peter LoDuca for submitting a legal brief drafted by ChatGPT. Unbeknownst to the attorneys, the brief contained citations to non-existent court opinions and fake quotes. Judge P. Kevin Castel sanctioned them each $5,000.00 and ordered them to notify each judge falsely identified as the author of the non-existent case rulings. Judge Castel noted that “technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
A less publicized but similar situation occurred in Colorado in 2023. Attorney Zachariah Crabill, facing a time crunch and an unfamiliar area of law, used ChatGPT to assist in drafting a motion to set aside a summary judgment ruling. Once again, ChatGPT came through with a seemingly well-written motion full of non-existent case citations. Crabill discovered the bogus citations the morning of his motion hearing and panicked, blaming the mistake on a non-existent legal intern. Crabill ultimately received a two-year suspension from the Colorado State Bar for his conduct, of which he had to serve only 90 days, provided he completed a probationary period.
ChatGPT is not the only A.I. platform generating false case citations. In December 2023, former President Trump’s disgraced “fixer,” Michael Cohen, found himself in hot water again after providing non-existent case citations to the attorney representing him in his petition for early termination of supervised release following his criminal convictions. Cohen, a disbarred former lawyer, found the case citations using Google’s “AI experiment,” Google Bard. Cohen’s attorney then incorporated those bogus citations directly into the petition that was filed with the court. In a sworn declaration explaining his actions to the court, Cohen stated that he “did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.”
What all of these attorneys had in common was that their predicaments could have been easily avoided with a few minutes of citation checking in the legal database of their choice. Using ChatGPT was not the problem. Where they went wrong was in failing to verify the accuracy of the citations they were submitting to the court. Every attorney has an absolute obligation to ensure that they are not submitting false or misleading citations to any court, authority, or opposing counsel, regardless of the underlying source.
So why didn’t ChatGPT mention these issues in the article it so helpfully drafted for me? Was it trying to save face after all the trouble it had caused? I decided to investigate further and went back to ChatGPT. This time I asked, “have lawyers gotten in trouble for using AI to draft legal briefs?” Again, in a shockingly short time, I had a response: “As of my last knowledge update in January 2022, there haven’t been widespread reports of lawyers getting into trouble specifically for using AI to draft legal briefs.” (emphasis added). Artificial Intelligence is only as good as the information it has been provided. Anyone using ChatGPT to draft briefs would not be made aware of any cases decided after the most recent “knowledge update.”
It is clear that Artificial Intelligence will be increasingly used in the legal profession due to its efficiency and potential time savings. However, it also presents pitfalls that every attorney must avoid to protect their professional reputation. Trusted legal services providers such as Westlaw and LexisNexis are developing their own A.I.-based legal research solutions, which may be a safer bet for attorneys at this time.1 At the end of the day, every attorney is responsible for ensuring the accuracy of all quotes and citations in the documents that they sign their name to.
1 See LexisNexis and Westlaw Will Launch AI Legal Research Tools: https://www.nbi-sems.com/Support/BlogDetail/159