Google Employee Reportedly Begs 'Pathological Liar' Not to Release AI Chatbot 'Bird'

According to internal documents viewed by Bloomberg, several Google employees expressed concern that the company's AI chatbot Bard would not be ready for its March release. Two descriptions of Bard purportedly from employees: "pathological liar" and "chilling."

Bard is Google's answer to OpenAI's ChatGPT. CEO Sundar Pichai stated that it "combines the power, intelligence, and creativity of our large language model with our extensive knowledge from around the world." However, Bloomberg reported that Google rushed Bard out of the gate to compete with ChatGPT, a situation the company allegedly called a competitive "code red."

Bloomberg reports that one employee raised these concerns in a message to an internal messaging group, where it was viewed by 7,000 employees. Shortly before the launch, Jen Gennai, who heads Google's AI governance team, reportedly overruled a risk assessment by her own team that found Bard's responses could be harmful.

Bloomberg reported several examples: suggestions about landing a plane that could lead to a crash, and scuba diving instructions that could lead to "serious injury or death."

Meredith Whittaker, a former Google manager, told Bloomberg that "AI ethics are an afterthought" at the company.

ChatGPT has its own issues regarding truthfulness and the sources it scours for answers. Currently, Google calls Bard an "experiment" and claims that, in Bloomberg's words, "responsible AI is a top priority." As an experiment, we asked Bard whether its advice was potentially dangerous, and it replied: "My advice can be dangerous, especially when it comes to health and other sensitive topics. I am still developing and cannot always distinguish between good and bad advice." It also told us not to rely on its advice for "important decisions" and that it does "not have the same level of understanding or knowledge as a human being."