
How can academia combat ChatGPT?

By Barry He | China Daily Global | Updated: 2023-02-17 21:55

OpenAI and ChatGPT logos are seen in this illustration taken on February 3, 2023. [Photo/Agencies]

Ask the artificial intelligence tool ChatGPT to write about nuclear fusion or the history of the French Revolution, and it will produce a succinct, polished essay in a matter of seconds. Arguably the most disruptive tool to hit the education sector in decades, the chatbot, which is built on large language models, has set alarm bells ringing in schools and universities around the world. With such a powerful tool freely accessible to any student, the race is on to assess students fairly without them relying on the technology.

AI-based plagiarism, or so-called "AIgiarism", is fast taking off across student populations internationally. The text-based generator was launched to the general public at the end of November 2022, and can offer everything from answers to basic general-knowledge questions to coherent poems in natural English.

The San Francisco-based developer OpenAI has since vowed to clamp down on abuse of its technology. The company is developing a system for identifying those who cheat by submitting essays written by ChatGPT. By using watermarks that subtly tweak the statistics of certain word choices into a recognisable pattern, machine-generated text could be detected by anyone who knows what signs to look for.

These subtle changes would not be noticeable to a reader, but the researchers are confident that, with training, a reliable system for detecting cheating is possible. It would not, however, stop ChatGPT being used as a research tool, with students obtaining information from it and then writing up their work in their own words. Such a safeguard would not only help hold academic work accountable; it could also prevent the technology being abused to impersonate someone else's writing style, say in a novel or a journalism article.
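To make the idea concrete, here is a minimal sketch of how a statistical watermark detector might work, loosely following the "greenlist" approach described in public research rather than OpenAI's own, unpublished scheme; the secret key, word splitting and 50-50 split are illustrative assumptions only.

```python
# Minimal sketch of statistical text watermark detection.
# Illustrates the general "greenlist" idea from public research;
# it is NOT OpenAI's actual, unpublished watermarking scheme.
import hashlib


def in_greenlist(prev_word: str, word: str, secret_key: str = "demo-key") -> bool:
    """Deterministically assign roughly half of all words to a 'green' list,
    keyed on the previous word and a secret key known only to the detector."""
    digest = hashlib.sha256(f"{secret_key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0


def greenlist_fraction(text: str, secret_key: str = "demo-key") -> float:
    """Fraction of words that fall in the green list. Ordinary human text
    hovers near 0.5; a watermarked generator is nudged well above it."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.5
    hits = sum(
        in_greenlist(prev, cur, secret_key)
        for prev, cur in zip(words, words[1:])
    )
    return hits / (len(words) - 1)


if __name__ == "__main__":
    essay = "The French Revolution began in 1789 and reshaped European politics."
    score = greenlist_fraction(essay)
    # A real detector would convert this into a significance score over many tokens.
    print(f"green-list fraction: {score:.2f} (about 0.5 expected for unwatermarked text)")
```

In practice, a detector would turn the green-list fraction into a statistical significance score computed over hundreds of words, so that a single unlucky sentence could not trigger a false accusation of cheating.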

Such measures would also help prevent over-reliance on the bot, which, despite being able to write convincing prose, still has some way to go on factual accuracy. Because it draws its information from across the internet, it is still capable of making mistakes. The confidence and speed with which ChatGPT produces information could ironically breed false confidence in, and reliance on, its output, with OpenAI CEO Sam Altman saying in a statement that it would be a mistake to rely on it for anything important at this stage.

Schools in turn have taken matters into their own hands, with many across Europe and the US updating their policies to warn of penalties for those caught using the program. Others have deliberately blocked access to ChatGPT on campus networks, although this does not stop students accessing the technology at home on their own devices. Increasingly, however, examiners are looking for telltale signs of machine-written prose. By watching for a lack of emotion or personal experience in the writing, human markers can develop a sense of whether an essay is genuine. This ability to pick out genuine, non-AI work through gut feeling is a human advantage AI will struggle to imitate, for now anyway.

Barry He is a London-based columnist for China Daily.
