63% of Americans surveyed want government legislation to prevent ultra-intelligent AI from ever being achieved

Generative AI may be in vogue right now, but opinion is far less favorable when it comes to artificial intelligence systems that are far more capable than humans. A survey of American voters found that 63% of respondents believe government regulation should actively prevent superintelligent AI from ever being achieved, not merely limit it in some way.

The survey, conducted by YouGov for the Artificial Intelligence Policy Institute (reported via Vox), was carried out on September 9 last year. Although it sampled only a small number of US voters — 1,118 in total — the demographics covered were weighted to fairly represent the wider voting population.

One of the specific questions in the survey focused on whether "regulation should have a goal of delaying superintelligence." Specifically, that means artificial general intelligence (AGI), which companies like OpenAI and Google are actively trying to achieve. In the former's case, its mission statement says as much, with the goal of "ensuring that artificial general intelligence benefits all of humanity" — a view shared by many who work in the field, even one of OpenAI's co-founders on his way out the door.

Regardless of how honorable or otherwise OpenAI's intentions may be, that message is currently lost on US voters. Of those surveyed, 63% agreed with the statement that regulation should aim to actively prevent AI superintelligence, 21% said they were not sure, and 16% disagreed.

The survey's overall findings suggest that voters are significantly more worried about "keeping dangerous [AI] models out of the hands of bad actors" than about the technology benefiting all of us. According to 67% of surveyed voters, the development of new, more powerful AI models should be regulated and limited in what it can do. Almost 70% of respondents felt that AI should be regulated as a "dangerous powerful technology."

That's not to say those surveyed are against AI altogether. When asked about a Congressional proposal to expand access to AI education, research, and training, 55% agreed with the idea and 24% opposed it; the rest chose the "not sure" response.

I suspect some of the negative views of AGI come down to the average person immediately thinking of "Skynet" when asked about artificial intelligence smarter than humans. Even with today's far more basic systems, concerns over deepfakes and job losses overshadow whatever positives AI could potentially bring.

The results of this survey are undoubtedly pleasing to the Artificial Intelligence Policy Institute, and I'm not suggesting it influenced the outcome in any way (not least because my own, very unscientific polling of immediate friends and family has yielded similar results), but its stated position is that "positive government regulation significantly reduces the destabilizing effects of AI." Regardless, OpenAI, Google, and others clearly have a lot of work ahead of them in convincing voters that AGI really is beneficial to humanity. Because at the moment, the majority view of AI becoming ever more powerful appears to be an entirely negative one, despite arguments to the contrary.