
Should AI write year-end summaries?

By ZHANG XI | China Daily | Updated: 2023-12-23 08:57


It is December again, that time of year when people write their year-end work summaries, a mandatory exercise at many Chinese companies. It is standard practice for human resources departments to ask workers to assess their own performance over the past 12 months. These reports help employers evaluate employees and decide whether, and how large, a pay raise or bonus they will receive. Although a lot hinges on the summary, some employees might find it easier to write this year, as many are turning to artificial intelligence for a helping hand.

Reports show that online searches for "year-end work summary and AI" have surged in recent weeks. Indeed, over the past year, a large number of people around the world have been using AI to write essays, create resumes and presentations, and produce many other kinds of documents.

The development of AI technology is to be welcomed, because it is a crystallization of human creativity. However, using AI to write a year-end summary may not be a good idea. No doubt it saves time, but it turns a highly personal assessment into a formulaic one. That is not a good thing, considering the summary's bearing on one's career development.

Writing a year-end work summary is supposed to be an honest review of one's gains and losses over the period. This is an opportunity for one to sum up one's experience, identify shortcomings and plan for the future.

During the writing process, AI can be used to enrich one's vocabulary and thus improve the quality of the content. However, it is only a tool and cannot become the author itself; otherwise, it is akin to putting the cart before the horse.

It is also worth noting that some enterprises focus only on the paperwork rather than on employees' actual abilities. This is a loophole that employees exploit when they turn to AI to generate a flowery year-end summary. Both employers and employees should understand that year-end summaries, presentations and other work documents should reflect the writers' original ideas.

In fact, whether it is acceptable to use AI to write summaries or essays is just the tip of the iceberg in discussions about the technology. More important is defining the rules and boundaries governing AI products. In other words, we have to clarify where and when the technology should be applied, because the wave of AI cannot be stopped.

Since the early days of AI development, there have been arguments over whether computers should be employed for certain tasks just because they can do them, given the differences between computers and humans, and between quantitative calculation and qualitative, value-based judgment.

For example, the risk of workers being made redundant by AI has long been highlighted. Technology has always been both a giver and a taker of jobs; in the past, it has helped create more jobs than it has destroyed. Many businesses now plan to train new and existing staff members using AI and other automated systems. The fear that AI could replace workers may still hold merit, but what we are seeing at the moment is a deepening partnership between humans and digital technology.

In a word, AI may kill some jobs, but it will also create more in the future. The question is whether society will have enough talent with the right skills to meet the new requirements. Hence, it is imperative for the authorities and the public to spread awareness and continuously educate themselves.

Another concern with AI is security, including the misuse of data and algorithmic discrimination against specific groups or individuals during the development and use of generative AI. As AI technology becomes more prevalent, it is essential to strengthen the study of the legal and ethical issues it raises.

China has released several laws and regulations to keep cyberspace and data safe. The country is also working to establish relevant guidelines, norms and accountability mechanisms to ensure that AI technology is developed within ethical boundaries.

Moreover, in October, China launched the Global Artificial Intelligence Governance Initiative to address universal concerns over AI development and governance, and drew up blueprints for international discussions and rule-making on the subject.

There is widespread consensus that AI development should be human-centric and should progress in a way that benefits human civilization. The only choice before the world is to consider how to use AI to empower people and better serve society, rather than shut the door on it because of the uncertainties.

After all, the future of AI is bright, and with the right approach, people can benefit from advances in AI technology while also tackling its challenges.

The author is a writer with China Daily.

If you have a specific expertise, or would like to share your thought about our stories, then send us your writings at opinion@chinadaily.com.cn, and comment@chinadaily.com.cn.
