The rise of ChatGPT prompts call for AI regulation
By Calvin Tang | chinadaily.com.cn | Updated: 2023-02-17 14:45
In recent decades, science fiction movies such as The Terminator (1984), The Matrix (1999) and I, Robot (2004) have explored the potential dangers posed by artificial intelligence (AI), reaching broadly the same conclusion: without proper control and oversight, advanced technologies could threaten humankind by seeking to control humans or even eliminate them.
This theme of science fiction seems closer to reality with the recent launch of ChatGPT, an AI language model developed by OpenAI.
With its remarkable ability to generate human-like responses to a wide range of questions and prompts, ChatGPT has left the world in awe of the boundless potential of AI. At the same time, however, it highlights the crucial need to analyze the ethical and societal impacts of AI, because without strict regulation of such practices, the following challenges could emerge in the near future.
Plagiarism: A recent Forbes survey showed an overwhelming 89 percent of student respondents confessed to using OpenAI's platform for assistance in completing their assignments. With machine learning algorithms, students and professionals are able to generate texts in various styles and to suit different requirements. The problem lies in the fact that AI-written output can easily be mistaken for original work, leading to a rise in plagiarism and concerns about academic integrity.
Misrepresentation: The use of AI language models such as ChatGPT carries the risk of identity deception through the generation of fake identities. Their sophisticated language abilities allow these chatbots to produce text that resembles a real person's writing, facilitating impersonation.
This poses a number of serious problems, such as the dissemination of false information, the tarnishing of reputations, and even criminal activities. For instance, a person with malicious intent could use ChatGPT to create fraudulent news articles, social media posts, or emails that appear to originate from a trustworthy source. This underscores the importance of having in place regulations and technologies that can detect and prevent the misuse of AI for identity deception.
Privacy violations: Chatbots can also gather and retain huge amounts of personal information, including addresses, dates of birth, phone numbers and financial details, which can be used for malicious purposes such as phishing scams, identity theft or other types of cybercrime. The information can also be sold to third-party companies for targeted advertising or other purposes. And the increasing use of chatbots could compromise database security, with consequences similar to, if not more serious than, the leak of data from 3 billion Yahoo accounts in 2013 and the Equifax data breach in 2017.
Undetermined accountability: The lack of clarity surrounding legal responsibility for the actions and outcomes of chatbots is a rising concern. These AI systems, used to answer questions, are often operated by multiple parties, including the developer, the platform provider and even the user. Although their purpose is to provide accurate information and correct instructions, mistakes can still occur, and in such cases it can be difficult to determine who should be held responsible for the errors and accountable for the consequences.
Bias and discrimination: In 2016, Tay, a chatbot developed by Microsoft to learn from interactions with users on Twitter, was found exhibiting sexist and racist biases within 24 hours of its launch. It began to tweet offensive and inflammatory messages, after being repeatedly fed with biased and hateful information.
Another study found that the widely used language model GPT-2 was biased in favour of gender stereotypes. For example, it was more likely to associate male pronouns with professions such as engineer and doctor. These incidents highlight the dangers of biased data and algorithms in chatbots, and the potential for these technologies to perpetuate harmful stereotypes and deepen discrimination.
Given these risks, regulators need to take measures to reduce, if not fully prevent, the problems associated with the rise of chatbots. Whether it’s through legislative action, industry self-regulation, or a combination of the two, the time to act is now, before these risks become irreversible.
Transparency is key to reducing the risks associated with chatbots. As these systems become more sophisticated and widespread, it’s critical that users have a clear understanding of how they work, what data they collect, and who is responsible for their actions. By increasing transparency and providing clear and accessible information about chatbots, the authorities can help users make informed decisions about the use of such technologies and minimize the risks associated with them.
Watermarking AI-generated text is another important step toward reducing the risks associated with chatbots. By adding a unique identifier or watermark to the text generated by these systems, it becomes easier to track and attribute the source of the content, reducing the risk of plagiarism and intellectual property theft. This also helps increase accountability and reduces the risk of misinformation and misrepresentation.
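To make the idea concrete, one family of watermarking proposals in AI research biases a model toward a pseudo-random "green list" of next words, so that watermarked text contains far more green words than chance would predict. The sketch below is a toy illustration of that statistical principle only, not any deployed scheme: the names `is_green`, `GREEN_FRACTION` and the word-level hashing are all invented for this example (real systems operate on token IDs and model logits).

```python
import hashlib

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green" at each step


def is_green(prev_word: str, word: str) -> bool:
    # Deterministically decide whether `word` is on the "green list" seeded
    # by the previous word. Hashing makes the list look random to a reader
    # but reproducible for a detector that knows the scheme.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def watermark_extend(prev_word: str) -> str:
    # Stand-in for a watermarking generator: among synthetic candidate
    # words, pick the first one that falls on the current green list.
    i = 0
    while True:
        candidate = f"tok{i}"
        if is_green(prev_word, candidate):
            return candidate
        i += 1


def green_count(words: list[str]) -> int:
    # Count how many words (after the first) land on their green list.
    return sum(is_green(a, b) for a, b in zip(words, words[1:]))


def detect(words: list[str], threshold: float = 0.75) -> bool:
    # Flag text whose green fraction is far above the ~50% expected by
    # chance; real detectors use a proper statistical test (z-score).
    n = len(words) - 1
    return n > 0 and green_count(words) / n >= threshold
```

Because the detector only needs the hashing rule, not the original model, anyone with the key could check whether an essay or news article was machine-generated, which is what makes the approach attractive for the plagiarism and misinformation concerns discussed above.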
Finally, creating a comprehensive legal framework to regulate the development of chatbots is a complex task, but it is essential to addressing the potential risks associated with these technologies. The framework must set standards for data privacy and security, ensure transparency and accountability, and offer a solution for resolving disputes. By establishing a robust legal framework, coupled with proper user education, regulators can help to mitigate the risks of chatbots and foster their responsible use.
The rise of AI language models such as ChatGPT has brought us closer to a world once imagined only in science fiction. The potential of these technologies is immense and inspiring, but it is important to consider the risks associated with their use. As Immanuel Kant said, "Enlightenment is man's emergence from his self-imposed nonage." Embracing advanced technologies is important, but it is equally important to exercise caution and wisdom in shaping their impact on our society and the future.
The author is a member of China Retold, and Beta Gamma Sigma, an international business honor society.
The views don’t necessarily reflect those of China Daily.