ChatGPT is so "ridiculous" that an Australian whistleblower is suing for defamation.


We know ChatGPT gets things wrong. Very wrong, at times. That can be funny, but it is a lot less funny when ChatGPT misidentifies you as a criminal. And even less funny if you were actually the first person to uncover the crime in question.

In fact, it is so laughable that you might just decide to sue for libel. That is exactly what Brian Hood, a politician from Melbourne, Australia, is doing.

ChatGPT appears to have identified Hood, currently mayor of a local municipality northwest of Melbourne, as one of the central perpetrators in the so-called Securency bribery scandal. The chatbot claimed that Hood pleaded guilty in 2012 and was sentenced to two and a half years in prison.

And when asked about the scandal today, ChatGPT still makes the same claims.

In reality, it was Hood who alerted authorities to the bribery taking place at Securency, a banknote printing business that was then a subsidiary of the Reserve Bank of Australia. At the time, Justice Elizabeth Hollingworth of the Supreme Court of Victoria praised Hood's "tremendous courage" in coming forward.

Hood lost his job after blowing the whistle and apparently suffered years of anxiety as the case dragged on.

When he learned of the misinformation being spread by ChatGPT, Hood said, "I was a little numb. I was just stunned because the information was so incorrect and so ridiculous. And I was pretty angry about it."


Hood's lawyers have reportedly taken the first formal legal step toward a defamation suit against OpenAI, but have yet to hear back.

Given the errors that chatbots routinely make, it is easy to see how this could have happened. Hood's name really is associated with the crime, just on the other side of it. And ChatGPT, as anyone who has used a chatbot knows, all too easily scrambled the roles, turning the whistleblower into the perpetrator.

A chatbot that garbles the difference between USB 3.2 Gen 2 and USB 3.2 Gen 2x2 is a bit annoying; one that spreads blatantly defamatory falsehoods about real, living people is rather more of a problem.

This case will likely be the first to test the basic question of whether the creators of AI chatbots can be held liable for what their chatbots produce.

Hood's argument will presumably center on the fact that ChatGPT has been deployed for widespread public use even though its creators have explicitly acknowledged that the model is experimental and prone to error.

OpenAI, for its part, would probably point to the disclaimer in the ChatGPT interface.

In any case, add this to the long and growing list of new, interesting, and unintended problems these chatbots are creating. Cleaning it all up will be a huge amount of work, and it will probably take AI to do it.
