US Lawyers Fined $5,000 for Using AI Tool to Invent Legal Precedents
by EFE
Two lawyers in the United States have been ordered to pay a $5,000 fine for filing a court brief that included non-existent legal precedents, which were generated by the popular artificial intelligence (AI) tool ChatGPT.
Judge Kevin Castel ruled that lawyers Steven Schwartz and Peter LoDuca “knowingly ignored” the signs that the cases cited by ChatGPT were false and provided “misleading” statements to the court, indicating that they acted in bad faith.
While Judge Castel acknowledged that there is nothing inherently wrong with using a reliable AI tool as an assistant, he emphasized that lawyers have a responsibility to ensure the accuracy of their statements.
The ruling highlighted that both lawyers “abdicated their responsibilities” by submitting non-existent court opinions accompanied by fabricated quotes generated by ChatGPT. They continued to maintain these false opinions even after court orders raised doubts about their existence.
Schwartz and LoDuca were working on a lawsuit against airline Avianca, filed by a passenger who claimed to have suffered an injury from a service cart during a flight.
Schwartz, representing the plaintiff, used ChatGPT to write a brief opposing a defense request to dismiss the case. In the ten-page document, he cited several judicial decisions to support his arguments, but it was later discovered that these cases were invented by the AI tool.
“The Court is facing an unprecedented situation. A filing submitted by plaintiff’s counsel in opposition to a motion to dismiss the case is replete with citations to non-existent cases,” wrote Judge Kevin Castel at the time.
Schwartz himself submitted an affidavit admitting to using ChatGPT to prepare the brief and acknowledged that his only verification was asking the application if the cited cases were real.
Schwartz defended himself by claiming that he had never used such a tool before and was unaware of the possibility that its content could be false.
The lawyer emphasized that he had no intention of misleading the court and took full responsibility, absolving the other lawyer from his firm, who may nonetheless also face sanctions.