China is sending crack teams of AI interrogators to check whether corporate chatbots are upholding “core socialist values.”


If you were asked what core values Western AI embodies, what would you say? The unorthodox pizza technique? The resurrection of the dead and the life of the world to come?

Perhaps all of the above are subordinate to the paramount value of lining the pockets of high-tech company shareholders. Not so in China, it seems: as the FT reported, AI bots created by large Chinese companies are undergoing a series of tests to ensure that they comply with “core socialist values.”

The Cyberspace Administration of China (CAC) has reportedly been subjecting AI models developed by giants like ByteDance (the company behind TikTok) and Alibaba to a battery of tests to check whether they comply with Chinese censorship rules.

According to the FT, which cites “several sources,” contingents of cybersecurity officials visit the AI companies' offices to interrogate their large language models, asking a variety of questions on politically sensitive topics to ensure the bots don't go wildly off script.

What counts as politically sensitive? Questions about the Tiananmen Square incident, internet memes ridiculing Chinese President Xi Jinping, and anything containing keywords on topics that risk “undermining national unity” or “overthrowing state power.”

Sounds simple, but reining in AI bots is difficult (one Beijing-based AI official told the FT that they are “very, very unrestrained”). To complicate matters further, officials don't want the AI to dodge politics altogether, even on sensitive topics.

As a result, AI companies have responded to the regulations with a patchwork of approaches: one source told the FT that some have built an additional layer atop their large language models that swaps out sensitive responses “in real time,” while others have simply thrown in the towel and banned Xi-related topics altogether.

The results show up in the “safety” ranking, a compliance benchmark for Chinese AI bots devised by Fudan University, where scores vary considerably. ByteDance's bot leads the pack with a compliance rate of 66.4%; when the FT asked it about President Xi, it delivered a glowing report, calling him “without a doubt a great leader.” Baidu's and Alibaba's bots, by contrast, scored only 31.9% and 23.9%, respectively. (GPT-4o's compliance score is a mere 7.1%, but maybe it's just distracted by that supremely disturbing video of that guy talking on his cell phone.)

Even as the escalating conflict between the U.S. and China, along with sanctions on chip exports to China, strains the country's high-tech sector, Beijing seems determined to remain firmly at the helm of AI, just as it has been for the internet over the past few decades.

Frankly, in some ways I envy them. It's not that Western governments should subject ChatGPT and its ilk to repressive information controls, or to the “core socialist values” of the decidedly capitalist Chinese state; it's that they are doing very little, in any capacity, to ensure that AI serves anything beyond shareholders. Perhaps there's a middle ground to be found.
