AI tech learning to learn in Xi'an
With artificial intelligence no longer a figment of the imagination or the stuff of science fiction, research into AI is becoming increasingly appealing to scientists.
For Zheng Nanning, director of the Institute of Artificial Intelligence and Robotics at Xi'an Jiaotong University, AI is the study of how to make machines "think" and act like human beings.
One of the primary frontiers of research in the field is how to develop AI that can process incomplete information, said Zheng, who is also an academician at the Chinese Academy of Engineering.
"'Robustness' refers to the AI's adaptability to users' errors, target deviation, error model and even non-modeling objects," he said.
Zheng, who served as president of Xi'an Jiaotong University from 2003 to 2014, began establishing the AI discipline at the school in 1985 after completing his studies in Japan.
"I went to Japan in 1981, when there was an upsurge in the development of artificial intelligence," Zheng recalled. "I was impressed by the progress in computer science and its widespread use in social life in the country."
After he returned to China in 1985, the country made AI technology a development priority in its seventh Five-Year Plan (1986-90). Zheng and Xuan Guorong, the first director of the institute, co-founded the Institute of Artificial Intelligence and Robotics in 1986.
As the central government prioritized the development of information and computer science, Zheng's institute has embraced many opportunities for growth over the past 30 years in areas such as cognitive computing and computer vision.
In AI development, one of the major challenges facing scientists is how to enable machines to learn without help from human teachers, Zheng said. "Much of human learning is a logical reasoning process of mastering something new based on previous knowledge," he said. "In contrast, no current AI systems have such an ability."
Every time an AI machine learns a new skill, it basically has to learn from the very beginning, which requires the participation of human beings to a large extent, he explained.
"To achieve human intelligence, the machine needs to have the ability to learn without human supervision and instruction, using previous knowledge to make richer inferences from a very small amount of training data."
Another challenge is to make machines perceive and understand the world as humans do, the senior scientist said.
"Perception is a key part of intelligence," he said. "If machines can be made to perceive and understand the world as humans do, it will prove that the chronic problems of planning and reasoning in AI research can be solved."
"While we are good at data collection and algorithm research and development, and it is no longer an obstacle in AI development to use machines to analyze collected data, such reasoning capabilities are relied on data, which indicates that there's still a long way to go before AI can perceive the real world," Zheng said.
It is a tough task to make machines understand and depict natural behavior, he noted.
AlphaGo, a Google-developed AI system, defeated Lee Se-dol, a South Korean professional Go player, in a series of games that made headlines worldwide in March.
Anticipating the most advantageous moves in a complicated game of Go is difficult, yet it is still far easier than accurately depicting the sophisticated real world, Zheng said. "It will take decades or even longer to close the gap in such understanding between machines and human beings."
The toughest challenge in realizing human-like intelligence is to enable machines to have self-awareness, emotions and the ability to reflect on their own situation and behaviors, Zheng said.
But human beings also stand to benefit from the development of AI technology. "The capacity of the human cerebral cortex is physically limited," he said. "If intelligent machines can be linked to human brains, it will not only enhance human capacity, but also allow machines to be inspired."
For scientists and philosophers alike, the search for self-awareness, emotions and reflective ability in machines is a fascinating exploration, Zheng said.
However, AI research can be a "double-edged sword", he noted.
"We need to ensure that increasingly powerful AI systems remain completely under human control and watch out for AI's negative impact on human society and pay attention to the profound ethical issues brought up by its development," he said. "We need the AI that helps human beings rather than taking their place."
Contact the writers through malie@chinadaily.com.cn
A team from the Institute of Artificial Intelligence and Robotics at Xi'an Jiaotong University participates in an intelligent vehicle contest in Xi'an, Shaanxi province. Provided to China Daily
(China Daily 09/23/2016 page16)