An open letter signed by Musk calling for a moratorium on AI development has come under fire from the researchers cited.

Earlier this week, we reported on an open letter from the Future of Life Institute (FLI) calling for a six-month pause in the training of AI systems "more powerful" than the recently released GPT-4. The letter, signed by Elon Musk, Steve Wozniak, and Stability AI founder Emad Mostaque, among others, has been met with harsh criticism from the researchers it cites, according to the Guardian.

"On the Dangers of Stochastic Parrots (opens in new tab)" is an influential paper that critiques the environmental costs and inherent biases of large-scale language models such as Chat GPT, which was cited in last week's open letter as a major source It is one of the Co-author Margaret Mitchell, formerly head of ethical AI research at Google, told Reuters that "by taking many questionable ideas for granted, the letter asserts priorities and narratives about AI that favor FLI advocates."

Mitchell continued, "Ignoring active harms right now is a privilege that some of us don't have."

University of Connecticut assistant professor Shiri Dori-Hacohen, whose work was also cited in the FLI letter, had similarly harsh words. Referring to existential challenges such as climate change, she told Reuters that "AI can exacerbate these risks without ever reaching human-level intelligence," adding, "Non-existential risks are really, really important, but they don't get Hollywood-level attention."

The Future of Life Institute received €3,531,696 ($4,177,996 at the time) from the Musk Foundation in 2021. Meanwhile, Elon Musk himself co-founded OpenAI, the creator of ChatGPT, but left the company on unfavorable terms in 2018, as reported by Forbes. And according to a report by Vice, some of the signatures on the FLI letter, including those of Meta's chief AI scientist Yann LeCun and Chinese President Xi Jinping, were found to be fake.

On March 31, Mitchell, linguistics professor Emily M. Bender, computer scientist Timnit Gebru, linguist Angelina McMillan-Major, and the other authors of "On the Dangers of Stochastic Parrots" published a formal response to FLI's open letter through the ethical AI research institute DAIR. The response states, "The harms caused by so-called AI are real and stem from the actions of the people and companies deploying automated systems. Regulatory efforts should focus on transparency, accountability, and the prevention of exploitative labor practices."

While the researchers agree with some of the measures proposed in the FLI letter, they write that "these measures are overshadowed by fear-mongering and AI hype, directing the debate toward the risks posed by 'powerful digital brains' and 'human-like intelligence.'"

The Stochastic Parrots authors point out that FLI is aligned with the "longtermist" school of philosophy that has become so popular among Silicon Valley luminaries in recent years, an ideology that prioritizes the well-being of hypothetical, far-future humans (said to number in the trillions) over humans alive today.

You may be familiar with these ideas from the ongoing saga of the collapsed crypto exchange FTX and its ousted leader, Sam Bankman-Fried. He was an outspoken advocate of "effective altruism" on behalf of a far-future humanity that will have to contend with the singularity and the like. Why worry about climate change and the global food supply when we must ensure that the Dyson spheres of 5402 AD don't face a nanobot "grey goo" apocalypse scenario?

The Stochastic Parrots authors effectively summarize their argument near the end of their letter: "We must deal with the dramatic economic and political disruption (especially to democracy) that AI will cause."

Instead, the letter's authors argue, "we should create machines that work for us, rather than 'adapting' society so that machines can read and write. The current race toward ever larger 'AI experiments' is not a predetermined path where our only choice is how fast to run, but rather a series of decisions driven by the profit motive."
