US, UK, EU finally unite to prevent AI monopolies (not a game, but a catastrophic market failure)


As AI continues to boldly go where no one has gone before (and where many never wanted to go in the first place), we, the public, have plenty to worry about, whether it's the potential for AI to take our jobs or the spread of deepfake misinformation. One risk that has received far less attention, however, is the prospect of AI monopolies. Thankfully, the US, UK, and EU are well aware of this risk and are now coming together to prevent it.

Four governing bodies across the US, UK, and EU have signed a joint statement that, according to the UK government, “affirms our commitment to unleashing the opportunities, growth, and innovation that AI technology can provide through fair and open competition.”

That's the positive spin, of course. But we all know that the flip side of competition is monopoly, and the CMA (UK Competition and Markets Authority), EC (European Commission), DoJ (US Department of Justice), and FTC (US Federal Trade Commission) have some specific monopolistic risks in mind.

One such risk identified by these governing bodies is “centralized control of critical inputs” such as chips, data centers, and professional expertise. Now, maybe it's because I'm a PC gamer, but this reminds me of Nvidia. When popular industry experts like Jim Keller say that “Nvidia is slowly becoming the IBM of the AI age,” it is hard to ignore that Nvidia is the closest thing to a monopoly (though not really a monopoly) in the AI chip market.

However, as I argued when I reported Keller's statement, near-monopolies rarely stay that way. And if we throw pro-competitive intervention into the mix, which is what this international statement implies, things may work out after all.

But enough of the positive talk and back to the pessimism. The joint statement also mentions the risk of “entrenchment or expansion of market dominance in AI-related markets.”

It also mentions the risk of such large companies having “the ability to protect themselves from AI disruption by controlling the distribution channels for AI and AI-enabled services, or to leverage AI to bring special benefits to people and businesses.” As an analogy, consider Google's dominance of search as it applies to the burgeoning AI industry.

The joint statement also mentions the risk of “arrangements involving key players”, which is to say good old-fashioned market collusion. That got me thinking: while AI is new, aren't these the same market risks we've faced at every point in capitalism's history, in every industry?

I think the answer is a little yes and a little no. Yes, the risk of monopolies, and the ways they can take hold, are the same as they have always been, but with AI the problems could arise faster and on a larger scale. At least, that's what you would think if you take the “next industrial revolution” talk at face value.

The argument goes something like this: AI will cut across and affect every industry, not just like any other technology, but to a far greater degree than anything since the industrial revolution. In other words, whoever controls AI will control the entire market, not just a market segment. Furthermore, this will happen faster than we can regulate it, because AI will improve itself at an exponential rate. On that basis, I am beginning to think that the CMA, EC, DoJ, and FTC are on the right track. One can only hope that their words about “fair trade,” “interoperability,” and “choice” will be backed up by action.

As AI continues on its seemingly inevitable path, what we desperately need are principles, and the serious thinking required to work out and implement them. When it comes to AI and its role in the marketplace, it doesn't even appear that the big names in the tech industry are on the same page. Elon Musk's lawsuit against OpenAI, claiming that AI companies were supposed to work for the benefit of humanity rather than pursue profit, illustrates this well: setting aside what AI companies are doing, is there even widespread agreement on what they should be doing?

Of course, this might not matter if the AI market is built on a bubble that is sure to collapse. We have already seen Sequoia analyst David Kahn (via Tom's Hardware) point out that the AI industry will need huge amounts of money to essentially pay off its investment debt.

Then again, we don't know whether a bubble of that magnitude bursting would be any better than an AI monopoly. Both would suck. So, that's about it for this issue.

