AI will be a "transformative force for good," notes open letter signed by experts

By signing an open letter titled "AI open letter to UK Government and Industry" (via BBC), more than 1,300 signatories so far have made it clear that they believe AI is "a force for good, not a threat to humanity." While its main intent is to counter the current AI doomsday narrative, the letter also has other motivations tied to the UK's reputation in particular.

Organized by BCS, The Chartered Institute for IT, the letter hopes to improve acceptance of the ever more pervasive technology and "unite the professional community of technologists behind a shared standard of technical and ethical practice in AI." It also pushes for a "Coded in Britain" mark, which it hopes will one day be recognized as "the global synonym for high-quality, ethical, and inclusive AI." "AI is not an existential crisis for humanity, but a transformative force for good if we get the key decisions about its development and use right," the letter states.

"The UK can play a leading role in setting professional and technical standards in the role of AI, supported by a strong code of conduct, international cooperation, and well-resourced regulation."

And that's pretty much it. The open letter is surprisingly, almost ridiculously short, and offers nothing concrete to support its claims about the benefits of artificial intelligence. Frankly, it reads like a memo.

The debate around AI is in full swing, with arguments raging for and against, and letters like this are popping up all over the place. One calling for a halt to AI development, signed by Elon Musk, was soon followed by an about-face in the form of his own AI venture.

There is so much back and forth that wading through it all is a painstaking task, but at least it means there is serious debate over AI ethics, and ethics in general.

Signatories currently supporting the open letter include James H. Davenport, global AI ethics and regulation leader at EY Global Public Policy, and Luciano Floridi, professor of philosophy and ethics of information at Oxford University. Floridi has a longstanding interest in the ethics of AI and co-authored a 2018 paper titled "How AI can be a force for good."

It is important to remember that there are positive aspects to AI advances; not all AI voice cloning is bad, for example.

Personally, I believe it is the hands that code and create artificial intelligence programs that should be held responsible for any horrors that come from them. At the very least, a letter like this, signed by big AI eggheads, might make it easier for policymakers to focus AI regulation on covering exactly that, rather than reacting simply out of disgust.
