In an interview on Leading, the podcast from The Rest Is Politics co-hosts Alastair Campbell and Rory Stewart, Bill Gates, co-founder of Microsoft and the Gates Foundation, discusses his perception of the dangers of AI.
"The key thing is that the good guys have better AI than the bad guys," Gates says.
"The problem is not that AI gets out of control, but that AI by people with bad intentions gets more powerful."
"Take cyber defense as an example. If the cyber defense AI of the good guys is as good or better than the cyber attack AI of the bad guys, that is a good situation. And you are not going to stop the development of AI worldwide. Some might argue that we should build a global army to invade computer labs, but not many would push for that. And so an increasingly powerful AI will be created.
Gates does not use this framing to name and shame who is good or bad, though he does point out that Russia instigated the attack on Ukraine. Beyond that, Gates hopes most countries will cooperate to handle AI wisely.
"Hopefully, most countries will see this stuff properly shaped," Gates says.
However, Gates also states in this episode that individual countries are rarely involved in shaping AI. This is because the government market for AI is much, much smaller than the corporate or consumer market, Gates says.
Previous watershed technologies, including the creation of the microprocessor, which was a major driver of Gates' early success at Microsoft, were heavily funded by governments in their infancy.
"The challenge is that the government is not the early people who funded it [AI]. In fact, if you go back 20 or 30 years ago, government research funding was used, but now it's being used by Google, Microsoft, and others. The research and development costs are enormous."
Gates still works closely with Microsoft, which has partnered with OpenAI, the market leader in AI software, and he remains in contact with that group, led by Sam Altman. As these companies rush toward the future of AI, it is perhaps not surprising that Gates comes across as a wary admirer of artificial intelligence's potential: he understands the impact AI will have on the market, and in some ways already has, but he is keen to point out that it must be shaped to avert danger.
"Whenever innovation happens, it is in a sense neutral, and can end up empowering only the rich, or it can have unforeseen negative side effects... In the case of AI, the bad guys could use it as a tool for cyber attacks or to design bioterror weapons... In the case of AI, the bad guys could use it as a tool for cyber attacks or to design bioterror weapons. So we need to shape this to do good things, such as tutoring for children who need to learn, better medical advice, etc., and have a good AI that performs cyber defense.
Gates concludes, "I share all the concerns, but I also think there are very large positive aspects."
When talking with people about the future of AI, one topic frequently comes up: the fear that AI systems will run amok and destroy the human race. Perhaps popular media, which has played on this idea for decades, perpetuates it, and we simply see something inherently dangerous in artificial intelligence in general. But is it the AI that is to blame, or the humans who wield it?
I believe there is still a degree of uncertainty about the safety of what we call artificial intelligence; that is, there is a possibility that AI systems will not behave in accordance with the intentions of the humans who create or use them. However, an AI with human-level general intelligence has not yet been created.
The full interview with Gates can be heard on the podcast "Leading," available on Spotify, Audible, Apple Podcasts, and elsewhere. Gates talks not only about AI but also about being accused of tracking people by conspiracy theorists and getting yelled at in the street; it's worth a listen.