
Amid tech change, world must maximize AI's benefits, minimize its risks

By Stephen A. Schwarzman | China Daily | Updated: 2020-07-20 09:21

A visitor experiences artificial intelligence equipment at an exhibition in Yangzhou, Jiangsu province, on April 28. [Photo by Meng Delong/For China Daily]

I'm not a technologist. My education in artificial intelligence has come from speaking with the world's tech experts and entrepreneurs who, as I've come to learn, hold widely divergent views on whether AI will be a force for good, or whether it will disrupt society and life as we know it.

But this either/or debate of good versus evil misses the point by focusing on "what if?" scenarios rather than the steps we must take today to ensure that we responsibly harness the near limitless potential of AI. To maximize AI's benefits while minimizing its risks, we need to establish a global compact for the research, introduction and deployment of AI.

Today, countries are primarily pursuing an independent course in the development of AI without sufficient effort to align their activities. More worrisome is the idea that AI is a zero-sum game among nations, a posture that introduces substantial barriers to cooperation, potentially slowing the most beneficial technological advances. Although several multilateral organizations are working on frameworks for international cooperation, which is encouraging, these initiatives need to be prioritized and significantly fast-tracked.

There are many areas that lend themselves to global cooperation, including healthcare, education, and manufacturing. Areas that impact geopolitical stability, such as national security and autonomous weapons, warrant international collaboration and mutual regulation as well.

The benefits from cooperation in these areas are enormous. In healthcare, AI has the potential to address historical and global challenges such as rapidly increasing healthcare costs and inequities in access and treatment. In education, it can be used to improve classroom outcomes and help countries re-skill their workforces to mitigate the job dislocation that AI may cause in some industries. And in manufacturing, it can create safer, less wasteful working environments and make global supply chains more resilient to unplanned global disruptions.

The best practices of each country in these areas, and others, should be shared for the benefit of all. Sharing lessons learned in innovation can help inform more responsible AI development and adoption, thereby facilitating better international relations and a more peaceful, prosperous and equitable society.

Leading companies, universities and technologists must work with their government, and ultimately other governments, to develop standards, policy guidelines and regulations for this powerful technology. Similar approaches have been taken in the past with the development of protocols on nuclear and biological weapons, where agreement on global standards has made the world a safer place.

The good news is that the foundation for a model of international cooperation is already in place.

In recent years, organizations around the world have issued principles and guidelines for ethical AI development, which serve as a good starting point. Although there are complicated regional and cultural differences in how certain issues are interpreted, prioritized and implemented, there does seem to be global convergence around a few core principles: transparency, fairness, safety, responsibility and privacy.

The first principle, transparency, is about creating AI systems whose decisions are interpretable, explainable and auditable, and whose implementation allows for oversight.

Second, fairness. Technology should not exacerbate inequality or reinforce bias and discrimination, but should promote inclusivity and be designed for the greater good.

Third, safety and security. AI technology should never cause foreseeable or unintentional harm. It should be reliable and resistant to compromise.

Fourth, responsibility. If something goes wrong as the result of a decision taken by an AI system, there needs to be clear accountability and, when applicable, mandatory remediation.

And last, privacy. There need to be mechanisms for the protection of people's rights, interests, and personal and private information. AI systems need to disclose how data is being used and allow anyone to readily revoke consent to that use.

The earlier these principles can be harmonized and applied via common governance structures, the more likely we are to avoid the negative consequences of AI.

This is no small feat, and it will not be easy. But it is imperative that we move beyond principles to explicit global commitments, agreements and, eventually, international laws with consequences for violation.

Only through global cooperation can we ensure that nations and institutions everywhere choose peace and stability, inclusivity and safety, even when, and especially when, the tremendous pressures of rapid and transformational technological change push them to do otherwise.

The views don't necessarily reflect those of China Daily.

The writer is chairman, CEO and co-founder of Blackstone, a global investment firm that invests capital on behalf of pension funds, large institutions and individuals.
