China is narrowing the artificial intelligence (AI) gap with the US through rapid progress in deploying applications and state-backed adoption of the technology, despite the lack of access to advanced chips, according to industry experts and analysts.
Chinese tech firms have rushed to create their own large language models (LLMs) – the underlying technology behind generative AI services such as ChatGPT – with many claiming to match or even exceed their US counterparts, all amid tighter US restrictions on the advanced chips considered critical to training AI systems.
“It’s an emerging trend that the lack of [advanced] graphics processing units in China, amid US restrictions on exports, results in the drive and push for efficiency in AI in China,” according to Winston Ma, author of the book Digital War – How China’s Tech Power Shapes the Future of AI, Blockchain and Cyberspace.
In one example, Shengshu AI, a little-known start-up based in Beijing, launched its text-to-video tool this week, becoming the latest local firm after Kuaishou and Zhipu AI to offer a Sora-style service for unlimited public use. The tool, called Vidu, can generate clips from Chinese and English text prompts.
While OpenAI’s Sora pioneered text-to-video generation, the three Chinese firms have put their AI video tools in the hands of global users. The San Francisco-based start-up, which was the first to demonstrate the capability, has yet to make Sora widely available.
Chinese firms are also contributing to global AI development by releasing open-source LLMs that anyone can use to build their own AI systems.
Alibaba Group Holding launched its Qwen2 open-source LLM family in June, which was ranked No 1 at the time by Hugging Face, a developer community for open-source AI models. Alibaba owns the South China Morning Post.
“Qwen 72B is the king and Chinese open models are dominating overall,” Hugging Face co-founder and CEO Clement Delangue said in a post on social media platform X.
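For a sense of what “open source” means in practice here, the sketch below shows how a developer might download and prompt an open-weight Qwen2 model through Hugging Face’s transformers library; the specific model checkpoint and generation settings are illustrative assumptions, not details from the article.

```python
# Minimal sketch: running an open-weight Qwen2 model via Hugging Face transformers.
# The model ID and generation parameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-7B-Instruct"  # one of the open-weight Qwen2 checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt and generate a short reply.
messages = [{"role": "user", "content": "Summarise the benefits of open-source LLMs."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights are freely downloadable, developers can fine-tune or self-host such models rather than relying on a closed API, which is the point analysts make about China’s open-source contributions.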
Analysts partly attribute China’s rapid progress in AI to its ability to work around the chip restrictions to develop the intelligent computing power required to train local LLMs.
Since the US restricted exports of Nvidia’s A100 and H100 chips – considered the gold standard for training sophisticated AI systems – in mid-2022, and later extended the curbs to the less powerful A800 and H800, Beijing and some of the nation’s tech champions have managed to build a large reservoir of intelligent computing power, thanks in part to locally developed solutions.
“Looking at the numbers, domestic computing power has been racking up rapidly, as many state-owned enterprises and regional [governments] are tasked with [developing] intelligent computing power,” said Li Yangwei, a Beijing-based technical consultant working in the field.
Li added that chips developed by local firms such as Huawei Technologies have gained popularity, and that Huawei’s Ascend solution is China’s best shot at developing home-grown AI infrastructure.
Huawei’s Ascend 910B AI chip has been found in some tests to deliver between 80 per cent and 120 per cent of the performance of Nvidia’s A100 when training LLMs, Wang Tao, chief operating officer of Jiangsu Kunpeng Ecosystem Innovation Centre, said on the sidelines of the Nanjing World Semiconductor Conference in June.
Zhang Yi, founder and chief analyst at technology consultancy iiMedia, said the state’s build-out of computing infrastructure has also helped alleviate anxiety over the lack of advanced chips.
Compared with other nations, China’s vast market size and strong public-sector demand for AI applications give the country an advantage in furthering AI progress, according to Zhang.
“China has a comprehensive and complete manufacturing industrial production system … [and] many entities are in dire need of improving their efficiency, so that’s where AI could be tapped,” he said.