Avoid fake citations: see how one ChatGPT session ended up in court
Just how much can we entrust to artificial intelligence? One New York-based lawyer learned how badly things can go wrong when he relied on ChatGPT, OpenAI’s language model, to prepare a court filing. In an unexpected twist, the AI turned out to be more creative than he bargained for.
Steven Schwartz, representing a client in a lawsuit against Colombian airline Avianca, is facing a sanctions hearing after the court discovered that the precedents cited in his filing were pure AI fabrications. It’s the sort of revelation that shakes trust and raises serious questions about the reliability of AI in crucial roles.
How much can you trust what your own ChatGPT session tells you? Let’s take a look.
Gone rogue?
The phony cases, such as “Varghese v. China Southern Airlines Ltd.”, “Miller v. United Airlines Inc.”, and “Petersen v. Iran Air”, were submitted in good faith by Schwartz, who was unaware of ChatGPT’s potential to generate false content. Following the revelation, Schwartz admitted to his misstep, pledging never to use generative AI again without thorough fact-checking.
As lawyers from Avianca and U.S. District Judge P. Kevin Castel digested this novel situation, Schwartz expressed deep regret. Although another attorney from his firm argued that there was no intent to deceive, the lawyer may still have to cover the opposing side’s legal fees incurred due to the misrepresentation.
Double-edged AI
It’s not the first time the AI has “hallucinated”, a term researchers use to describe the phenomenon of AI generating false information. The Washington Post reported a similar incident last month, when ChatGPT falsely implicated a professor in sexual harassment allegations, citing a non-existent article from the Post.
These incidents have not gone unnoticed. Sam Altman, CEO of OpenAI, voiced his worst fears about the potential harm caused by advanced technology at a Senate hearing on artificial intelligence. In a rare move for a Silicon Valley leader, Altman called for government regulation to mitigate the risks posed by increasingly powerful AI models.
Risk vs. Revolution
As Altman stands before lawmakers and the public, he highlights the need for government intervention to put safety standards and guardrails in place. Yet, he maintains that the benefits of the deployed tools significantly outweigh the risks, and OpenAI will continue to release such technology.
Despite Altman’s optimistic perspective, this incident underscores how harmful AI could be if left unchecked. It emphasizes the need for stringent measures to prevent misuse and to maintain trust. AI has undeniably transformed multiple industries, including education, entertainment, and now, the legal field.
But as we race forward, are we ready to grapple with the implications and possible consequences of AI’s creative prowess?
Do you think there is still a path toward a good and safe AI revolution? Or are we headed for a Morpheus and Neo moment? Let us know in the comments!