Musk, Wozniak, and other tech luminaries plead with AI pioneers to hit the brakes

Elon Musk, Apple co-founder Steve Wozniak, and the CEO of Stability AI have signed an open letter calling for a six-month "pause" in the development of artificial intelligence systems more advanced than GPT-4, the model behind the latest ChatGPT.

The letter was released by the nonprofit Future of Life Institute and, at the time of writing, has been signed by more than 1,100 people from the academic and technology communities. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," it warns. "Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

As a result, the letter argues, the industry should think twice before developing anything more powerful than GPT-4, and if it will not pause voluntarily, governments should intervene.

"We call on all AI laboratories to immediately suspend training on AI systems more powerful than GPT-4 for at least six months. This pause should be public and verifiable and should include all key stakeholders. If such a moratorium cannot be implemented quickly, the government should intervene and implement a moratorium.

Among the key signatories are Emad Mostaque, founder and CEO of Stability AI, the company behind the Stable Diffusion text-to-image model; Pinterest co-founder Evan Sharp; Chris Larsen, co-founder of the cryptocurrency company Ripple; deep learning pioneer Yoshua Bengio; and Connor Leahy, CEO of the AI lab Conjecture.

Of course, a cynic might point out that many of the signatories simply want time to catch up with the competition. But many others have nothing obvious to gain in the short term.

Moreover, there is no denying that recent months have seen an explosion in GPT-style large language models. To take OpenAI's lineup as an example: GPT-3, the model originally behind ChatGPT, was limited to text input and output, whereas GPT-4 is multimodal and accepts images as well as text.

Plug-ins were soon developed to give the models "eyes" and "ears," allowing them to send emails, execute code, and, with Internet access, perform real-world actions such as booking airline tickets.
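
The mechanism behind such plug-ins is essentially "tool calling": the model is shown a schema for an external function, decides when to invoke it, and the application executes the call and feeds the result back. Below is a minimal sketch of that loop using OpenAI's Python SDK and its chat-completions function-calling interface; the flight-price lookup is a hypothetical stand-in for a real service, and actual ChatGPT plug-ins are declared via an OpenAPI manifest rather than inline like this.

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_flight_prices(origin: str, destination: str) -> str:
    """Hypothetical stand-in for a real flight-booking service."""
    return json.dumps({"origin": origin, "destination": destination, "price_usd": 420})

# JSON schema advertising the tool to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "get_flight_prices",
        "description": "Look up ticket prices for a flight route.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string", "description": "IATA airport code"},
                "destination": {"type": "string", "description": "IATA airport code"},
            },
            "required": ["origin", "destination"],
        },
    },
}]

messages = [{"role": "user", "content": "How much is a flight from SFO to JFK?"}]
response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
msg = response.choices[0].message

# If the model chose to call the tool, run it locally and return the result
# so the model can compose its final answer.
if msg.tool_calls:
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_flight_prices(**args)
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)

print(response.choices[0].message.content)
```

The key design point is that the model never executes anything itself: it only emits a structured request, and the surrounding application decides whether and how to act on it, which is exactly where the letter's safety concerns come in.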

GPT-4's reasoning capabilities have also advanced significantly over ChatGPT and GPT-3. On a simulated U.S. bar exam, for example, the model behind ChatGPT scores around the bottom 10% of test takers, whereas GPT-4 scores around the top 10%.

GPT-4 has already been used to build a Google Chrome extension and an iPhone app, the latter written from scratch and now available in the official App Store. GPT-4 has also successfully coded a basic 3D game reminiscent of the original Doom engine, and some users have even had the model devise and execute an investment strategy.

Thus, even before asking what happens if GPT models become sentient, it is not hard to see how quickly they could have a significant real-world impact. On that point, a paper written by the creators of GPT-4 raises the concern that the model itself could develop and pursue undesirable or even dangerous goals. "Agentic in this context does not intend to humanize language models or refer to sentience, but rather refers to systems characterized by the ability to, for example, accomplish goals which may not have been concretely specified and which have not appeared in training, focus on achieving specific and quantifiable objectives, and do long-term planning. Some evidence already exists of such emergent behavior in models," the paper states.

Microsoft, for its part, says the latest GPT-4 models are showing "sparks" of artificial general intelligence. Either way, the central issue is that these models are being unleashed on the general public at massive scale, from Bing search to email and office tools.

That may all turn out fine. But equally, the potential for unintended consequences seems almost limitless right now. And that is a little frightening, with the usual tongue-in-cheek proviso that we, for the record, welcome whatever new overlords may emerge.
