China wants to require a security review of AI services before they’re released

Chinese AI bots looking to rival OpenAI's ChatGPT will need to study up on "socialist values"

People stand by a Baidu sign during the World Artificial Intelligence Conference in Shanghai. China is looking to regulate the burgeoning generative AI industry.
Photo: Aly Song (Reuters)

Amid a flurry of AI product announcements from Baidu and Alibaba, China has been quick to propose regulation of the burgeoning generative AI industry.

AI products developed in China must undergo a “security assessment” before being released to the public, according to the Cyberspace Administration of China (CAC), which has drafted new rules governing the development of generative AI services. The goal is to ensure the “healthy development and standardized application” of generative AI technology, according to the proposal, which is open for public comment.

Content generated by AI bots should “reflect the core values of socialism, and must not contain subversion of state power,” and must not promote terrorism, discrimination, or violence, among other things. The guidelines, released on Apr. 11, note that companies must ensure AI-generated content is accurate and take measures to prevent their models from producing false information.

As for the data used to train AI models, it must not contain information that infringes intellectual property rights. If the data contains personal information, companies are expected to obtain the consent of the data subject or otherwise meet the conditions required by law, the CAC writes.

The rules come as China’s tech giants have rushed in recent weeks to unveil their generative AI products, which are trained on large datasets to produce new content. Baidu is testing its Ernie bot. This week, the AI company SenseTime released its bot SenseNova, while e-commerce giant Alibaba introduced Tongyi Qianwen, which it plans to integrate across its products.

Those bots, though, are still in test mode and not yet available to the public, and it’s not clear when they will be. As analysts noted to Bloomberg, the CAC rules will likely shape how AI models in China are trained in the future.

The popularity of AI bots skyrocketed after San Francisco-based OpenAI launched ChatGPT just five months ago. AI chatbots have been used to draft emails and write essays, but there are growing concerns about generative AI models spitting out false and inaccurate information.

How will AI be regulated?

Countries around the world are looking to regulate the development of AI bots. Just last week, Italy temporarily banned ChatGPT, citing the processing of personal data as well as the bot’s tendency to generate inaccurate information. Meanwhile, in the US, the Department of Commerce this week put out a formal request for public comment on whether AI models should undergo a certification process.

Companies like Google and Microsoft have been quick to say that their AI bots are not perfect, highlighting the ambiguous nature of generative AI. Some companies are open to regulation. “We believe that powerful AI systems should be subject to rigorous safety evaluations,” OpenAI’s website reads. “Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”

If companies fail to comply with the guidelines, China’s CAC writes, their AI services will be stopped. The company responsible for the technology could receive a fine of at least 10,000 yuan ($1,450) and may even face criminal investigation.