
Talk of working on AI together questioned

By HENG WEILI in New York | chinadaily.com.cn | Updated: 2024-01-29 10:56

While a White House official recently said that the US was willing to cooperate with China on artificial intelligence, an American expert is skeptical that US technology policies are conducive to such cooperation.

Arati Prabhakar, director of the White House Office of Science and Technology Policy, told the Financial Times of London in an interview published Thursday that despite the two nations' trade tensions, particularly over sensitive technology, they could work together to "lessen [the] risks and assess [the] capabilities" of AI.

"Steps have been taken to engage in that process," Prabhakar said of collaborating with China on AI. "We have to try to work [with Beijing]."

"We are at a moment where everyone understands that AI is the most powerful technology … every country is bracing to use it to build a future that reflects their values," said Prabhakar. "But I think the one place we can all really agree is we want to have a technology base that is safe and effective."

Sourabh Gupta, a senior fellow at the Institute for China-America Studies, is skeptical about how such cooperation on AI would unfold.

"The US' desire to work on AI safety policy with China and compete vigorously on AI hardware, including chips, against China, are proceeding on entirely separate tracks," he said.

"The scope for trade-offs is minimal and probably non-existent. As such, the policy conversation between the two will gravitate towards a lowest common denominator approach on preventing fundamental AI-related harms, especially in the military sphere," he said.

"On the other hand, the AI hardware and software innovation and development side will see bitter competition between the two sides, with the US using its technology controls repeatedly to undercut China's progress in this area," Gupta predicted.

The White House issued an executive order in August 2023 that restricted US investments in Chinese technologies or products, stating that "countries of concern are engaged in comprehensive, long-term strategies that direct, facilitate, or otherwise support advancements in sensitive technologies and products that are critical to such countries' military, intelligence, surveillance, or cyber-enabled capabilities".

China, along with the US and more than two dozen other countries, signed the Bletchley Declaration on standards for AI at the world's first AI Safety Summit in the UK in November.

At the conclusion of the Nov 1-2 summit, Elon Musk thanked British Prime Minister Rishi Sunak for inviting China, saying, "If they're not participants, it's pointless."

Prabhakar told the FT that while the US may disagree with China on how to approach AI regulation, "there will also be places where we can agree", including on global technical and safety standards for software.

Gupta said that he was "afraid there will not be complementary cooperation. As the two sides roll out their respective governing and regulatory frameworks, though, both will have the opportunity to learn from the other side's successes and mistakes.

"I would also submit that China's guidance on the development of AI is more encompassing than just content control," he said in reference to the FT article, which suggested that China was more concerned about regulation of domestic AI information while the US was focused on national security and consumer privacy.

Still, he said, "there is much for each side to learn by observing the development of the industry and its regulation on the counterpart's soil".

China's AI industry is expected to accelerate over the next decade, with its market value reaching 1.73 trillion yuan ($241.3 billion) by 2035, according to research firm CCID Consulting.

Prabhakar said that the US "did not intend to slow down AI development, but to maintain oversight of the technology".

"We are starting to have a global understanding that the tools to assess AI models — to understand how effective, how safe and trustworthy they are — are very weak today," she told the FT.

On Jan 15, at an Axios forum on the sidelines of the recent World Economic Forum in Davos, Switzerland, Prabhakar discussed the social influences of AI technology.

"When we talk about artificial intelligence, we tend to talk about it as a technology. But the first thing to realize is that people choose what AI models to build," she said.

"Often it's data that's about or created by human beings, and then they choose what applications to build, and then other people choose how to use those AI models and what to do with them," Prabhakar said.

"So I think if we're going to get to this future which we have to get to with better AI, we have to start by understanding that it's a socio-technical system; it's not just a technology by itself," she said.
