A Stanford professor who served as an expert witness in a federal court case about fakes generated by artificial intelligence filed an affidavit containing false information that was likely made up by an AI chatbot, a legal filing indicates.
The statement, filed by Jeff Hancock, professor of communication and founding director of the Stanford Social Media Lab, “cites a study that does not exist,” the plaintiffs’ Nov. 16 filing says. “The study was likely a ‘hallucination’ generated by a large AI language model such as ChatGPT.”
Hancock and Stanford did not immediately respond to requests for comment.
The lawsuit was filed in federal district court in Minnesota by a state lawmaker and a satirical YouTuber seeking a court order declaring unconstitutional a state law that criminalizes election-related AI-generated “deepfake” photos, videos and audio.
According to court documents filed Saturday, Hancock was called as an expert witness by the Minnesota attorney general, the defendant in the case.
The lawmaker and the YouTuber questioned Hancock’s reliability as an expert and argued that his report should be thrown out because it could contain other undiscovered AI fabrications.
In his 12-page submission to the court, Hancock said he was examining “the impact of social media and artificial intelligence on misinformation and trust.”
Along with Hancock’s report was his list of “cited references,” court records show. One of those references – to a study by authors Huang, Zhang and Wang – caught the attention of lawyers for state Rep. Mary Franson and YouTuber Christopher Kohls, who is separately suing California Attorney General Rob Bonta over a law that allows damages lawsuits over election deepfakes.
Hancock cited the study, reportedly published in the Journal of Information Technology & Politics, to support a point he made in his court filing about the sophistication of deepfake technology. The journal is real, but the study is “imaginary,” says the filing from Franson and Kohls’ lawyers.
The journal volume and pages cited by Hancock do not address deepfakes; instead, they cover online discussions by presidential candidates about climate change and the impact of social media posts on election results, the filing says.
Such a citation, with a plausible title and a purported publication in a real journal, “is characteristic of an artificial intelligence ‘hallucination’ that academic researchers have warned their colleagues about,” the filing says.
Hancock stated under penalty of perjury that in his expert submission he “identified the academic, scientific and other materials that were referenced,” the document said.
The filing raised the possibility that the alleged AI fabrication was inserted by the defendants’ legal team, but added: “Hancock would nevertheless have filed a statement falsely claiming that he had reviewed the material cited.”
Last year, attorneys Steven A. Schwartz and Peter LoDuca were each fined $5,000 in federal court in New York after a filing in their client’s personal injury lawsuit cited fake prior court decisions that ChatGPT had fabricated to support their arguments.
“I didn’t understand that ChatGPT could fabricate cases,” Schwartz told the judge.
## AI in the Courtroom: An Interview
Welcome to World Today News. Today, we’re delving into a fascinating and timely issue: the use of artificial intelligence in legal proceedings. Joining us are two esteemed guests: Dr. Emily Carter, a legal scholar specializing in technology and law, and Robert Lee, a leading AI ethics researcher.
**Part 1: The Case at Hand**
* **Host:** Dr. Carter, let’s start with the specific case highlighted in the article. A Stanford professor, Jeff Hancock, allegedly used a non-existent study, potentially generated by an AI chatbot, in a court affidavit. What are your initial thoughts on this development?
* **Dr. Carter:** This case is truly troubling. It raises numerous concerns about the reliability of information presented in court and the ethical implications of using AI tools. Can you elaborate on the potential consequences of relying on AI-generated information in legal proceedings, Robert?
* **Robert Lee:** Absolutely. This situation highlights the dangers of “AI hallucinations,” instances where AI models produce seemingly plausible but completely fabricated information. In a legal context, these hallucinations can have serious repercussions, potentially leading to miscarriages of justice.
**Part 2: The Broader Implications**
* **Host:** This case is certainly alarming, but is it an isolated incident? Are we seeing a broader trend of AI being used in courtrooms?
* **Robert Lee:** While this specific case is making headlines, the use of AI in legal settings is indeed growing. Lawyers are using AI for legal research, document review, and even predicting case outcomes. While these applications can be useful, we need to be extremely cautious about their limitations and potential biases.
* **Dr. Carter:** I agree. The ethical implications are crucial. How can we ensure that AI tools are used responsibly and ethically in the legal system?
* **Host:** What safeguards should be put in place to prevent situations like the one involving Professor Hancock from happening again?
**Part 3: The Future of AI and Law**
* **Host:** Looking ahead, what do you think the future holds for the intersection of AI and law?
* **Robert Lee:** I believe AI will continue to play an increasingly prominent role in the legal field. However, we need a robust framework of regulations and ethical guidelines to govern its use.
* **Dr. Carter:** Transparency and accountability are paramount. We need to be able to understand how AI algorithms reach their conclusions and who is responsible for their decisions.
* **Host:** This is a complex and evolving landscape. Dr. Carter, Robert Lee, thank you both for sharing your insights on this important topic.
* **Host:** To our viewers, we encourage you to continue this critical conversation. Share your thoughts and concerns about the use of AI in the legal system.