OpenAI is being sued by a Georgia radio host because ChatGPT accused him of a crime he did not commit.


In April, Australian politician Brian Hood threatened to sue OpenAI, the company behind ChatGPT, claiming that the chatbot had falsely identified him as a criminal. Now, in the U.S., the company is being sued on similar grounds: ChatGPT falsely claimed that radio host Mark Walters had embezzled more than $5 million from a nonprofit organization called the Second Amendment Foundation.

According to the lawsuit (via The Verge), a journalist named Fred Riehl asked ChatGPT to summarize the complaint in another case he was covering, The Second Amendment Foundation v. Robert Ferguson. ChatGPT responded that the complaint had been filed against Walters, alleging that he "diverted funds to personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to make accurate and timely financial reports and disclosures to SAF's leadership."

But none of that is true. No such charges were ever made, and Walters' name does not appear in the lawsuit at all. Yet when Riehl asked for the specific portions of the complaint concerning Walters, ChatGPT supplied them. According to Walters' complaint, the whole thing is a "complete fabrication" that bears no resemblance to the actual Second Amendment Foundation lawsuit.

The good news for Walters is that none of the material ChatGPT provided to Riehl was ever published. It is not clear whether this was some sort of test or whether Riehl simply sensed something was off, but he did contact one of the Second Amendment Foundation plaintiffs, who confirmed that Walters had nothing to do with the case. Even so, although Riehl never disclosed the false allegations (and it is unclear how Walters subsequently learned of them), Walters' lawsuit argues that by providing them to Riehl, "OAI published defamatory matters concerning Walters."


As The Verge explains, Section 230 of the Communications Decency Act generally protects internet companies from liability for third-party content hosted on their platforms; you cannot, for example, sue Reddit over messages someone else posts on Reddit. Chatbots are a different matter: they rely on external sources for information, but they use their own systems to generate "new" text that is ultimately delivered to users. ChatGPT and OpenAI may therefore fall outside Section 230's protection.

But it may not matter. UCLA law professor Eugene Volokh, writing at Reason.com, believes that defamation claims over AI output are "legally viable in principle," but that this particular case may be hard to win: Walters apparently never gave OpenAI an opportunity to correct the record and stop making false statements about him, and he has not alleged actual damages. In other words, while some AI company will likely someday take a beating in court because its chatbot spun a bullshit story that put a real person in a bind, this may not be that case. We have asked OpenAI for comment and will update if we hear back.
