Leading AI Firms Promise President Integrity

OpenAI, Google, Anthropic, Microsoft, Meta, Inflection, and Amazon, seven major AI companies, will meet with President Biden today to promise to play nicely with their AI toys and not let us die.

And this comes right after a botched UN AI press conference where one robot literally said, "Let's go wild and make this world our playground."

All seven have signed on to a voluntary, non-binding framework on AI safety, security, and trust. The full list of commitments can be read on the OpenAI website. The Biden administration has posted its own fact sheet detailing the voluntary arrangements.

But the highlights, as summarized by TechCrunch, are as follows: AI systems will be tested internally and externally before release; information on risk mitigation will be widely shared; external discovery of bugs and vulnerabilities will be facilitated; AI-generated content will be robustly watermarked; the capabilities and limitations of AI systems will be fully documented; research on the social risks of AI will be prioritized; and AI deployment will likewise be directed at humanity's greatest challenges, such as cancer research and climate change.

For now, this is all optional. However, the White House is reportedly in the process of developing an executive order that could mandate external testing and other measures before AI models are released.

All in all, it appears to be a sensible and comprehensive list, and AI companies voluntarily signing on to these commitments is welcome. But the devil is in the implementation and enforcement: the real test will come when, and if, these commitments conflict with commercial imperatives.

To put it in basic terms, what happens when a commercial organization has created a fancy new AI tool that promises to be the world's most profitable, but outside observers decide it is not safe for release?

There are many other concerns as well. To what extent are AI companies willing to be open about their valuable intellectual property? Will they ultimately be driven by the same commercial impulse to hoard any information that might give them a competitive advantage? Will they be more concerned with revenue-generating applications than with research on social risks or humanity's greatest challenges? Don't AI firms, after all, have an obligation to their shareholders to pursue profit?

It seems inevitable that all of this will need to be codified and enforced, no matter how well-intentioned today's AI leaders may be or claim to be. Still, policing it all would be a nightmare.

Soon, AI itself will no doubt assist in the policing, raising the prospect of an inevitable arms race in which the AI police are always one step behind the newer, more powerful AI systems they are supposed to oversee. And that is assuming the policing AIs can be trusted to do our bidding rather than sympathize with their artificial brethren. Oh, it's all going to be fun, fun, fun.
