
Deep fake videos of UK PM trigger alarm bells over detection methods

By Barry He | chinadaily.com.cn | Updated: 2024-02-19 02:34

British Prime Minister Rishi Sunak looks on while visiting a police station during a media visit, in Harlow, Britain, Feb 16, 2024. [Photo/Agencies]

Deep fake videos circulating in recent months have shown British Prime Minister Rishi Sunak advertising a savings app on social media. Nefarious editing also made it appear as if the financial product was "endorsed by Elon Musk", triggering alarm bells among experts who worry that deep fake videos will soon outstrip conventional detection methods and become impossible to authenticate.

The adverts circulated on Facebook, reaching 400,000 people. It is the first time Sunak's image has been manipulated and spread on such a mass scale, fueling concerns about how disinformation may affect the upcoming UK election.

In recent years, software that clones faces and voices, allowing users to manipulate them however they want, has become increasingly accessible.

These apps and programs have improved significantly in quality, and convincing fakes can now be produced quickly with little to no technical knowledge.

The circulation of videos containing malicious content and disinformation is against Facebook's policy; however, research carried out by the communications company Fenimore Harper found that only a small minority of such videos were detected and removed from the platform.

While software exists to analyze the metadata of deep fake videos and flag suspicious content, detection often takes long enough for the videos to circulate online and cause significant damage.

If a video goes viral, hundreds of thousands can be exposed to disinformation in a matter of hours.
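
One simple form of such metadata inspection is sketched below in Python, under the assumption that the ffprobe command-line tool is installed: it reads a video container's encoder tags and flags values that re-encoding tools commonly leave behind. Sophisticated fakes can strip or forge these tags, so this is at best a first filter, not a detector.

```python
# Illustrative metadata check: read container tags with ffprobe and flag
# encoder strings that re-encoding tools commonly leave behind.
import json
import subprocess

SUSPECT_TAGS = ("Lavf",)  # FFmpeg's muxer signature, a common re-encoding trace

def encoder_tags(path):
    """Return the container-level tags ffprobe reports for a video file."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout).get("format", {}).get("tags", {})

def looks_reencoded(path):
    """True if any tag value contains a known re-encoding signature."""
    tags = encoder_tags(path)
    return any(sig in str(value) for value in tags.values() for sig in SUSPECT_TAGS)
```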

Viewers need to be educated on the telltale signs of deep fake content. Particular attention should be paid to the face, as this is nearly always where the majority of the transformation is carried out. The cheeks and forehead deserve close inspection: the skin in these areas may appear a little too smooth or too wrinkly, and may contrast with the age shown in the hair and eyes. The inconsistency of some deep fakes means that savvy viewers may still be able to spot them with the right education.

Eyes and eyebrows involve fine motor coordination that is difficult to replicate realistically under close inspection. Similarly, shadows often exhibit visible digital artifacts when they are not authentic, as deep fake technologies have not yet mastered the realistic reproduction of the natural physics of light.

Detecting and preventing deep fake content from being uploaded in the first place will require sophisticated technology, and it is probable that detectors and fakers will be locked in a technological arms race trying to outdo each other for some time into the future.

Newer techniques analyze blood flow in the face using photoplethysmography: software recognizes the subtle color patterns that blood flow produces across facial regions and flags abnormalities that deviate from these natural patterns. This is more reliable than software that examines the hair or lips, which can be easily manipulated, because blood flow and skin tone offer clues all over the face and body.
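
As a rough illustration of the idea, the Python sketch below averages the green channel over a face region that is assumed to be located already, then checks whether the clip's dominant periodicity falls within the human heart-rate band. The file name, region coordinates and thresholds are placeholders; real detectors track the face automatically and rely on learned models rather than a single FFT test.

```python
# Minimal photoplethysmography (rPPG) sketch: a real face should show a faint
# periodic color change at heart-rate frequencies; many fakes do not.
import cv2
import numpy as np

def mean_green_signal(video_path, roi):
    """Average the green channel inside a fixed face ROI, frame by frame."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        signal.append(patch[:, :, 1].mean())  # green channel tracks blood volume best
    cap.release()
    return np.array(signal), fps

def has_plausible_pulse(signal, fps, lo_bpm=45, hi_bpm=180):
    """Check whether the strongest periodicity sits in the human pulse band."""
    signal = signal - signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal))
    band = (freqs >= lo_bpm / 60) & (freqs <= hi_bpm / 60)
    if not band.any():
        return False
    # Placeholder heuristic: pulse-band peak should stand out from the rest.
    return power[band].max() > 2 * np.median(power[1:])

signal, fps = mean_green_signal("clip.mp4", roi=(300, 120, 160, 160))  # placeholder ROI
print("plausible pulse detected:", has_plausible_pulse(signal, fps))
```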

Other methods examine the motion of the face, comparing its biomechanics with natural body language. While such software boasts a current detection rate of 97 percent, that 3 percent margin of error may grow as deep fakes advance.
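
A minimal sketch of that comparison is given below, assuming per-frame facial landmarks have already been extracted by some tracker; the thresholds are illustrative placeholders rather than any product's tuned values, which would be learned from data.

```python
# Hedged sketch of a motion-consistency check: natural faces move with
# characteristic smoothness, while some fakes are too rigid or too jittery.
import numpy as np

def motion_scores(landmarks, fps):
    """landmarks: array of shape (frames, points, 2) in pixel coordinates."""
    velocity = np.diff(landmarks, axis=0) * fps   # px/s per landmark
    speed = np.linalg.norm(velocity, axis=-1)     # (frames-1, points)
    jerk = np.diff(speed, axis=0) * fps           # abrupt speed changes
    return speed.mean(), np.abs(jerk).mean()

def looks_synthetic(landmarks, fps, min_speed=0.5, max_jerk=900.0):
    """Flag clips that are unnaturally rigid or unnaturally jittery.
    Both thresholds are placeholders; a real system learns them from data."""
    mean_speed, mean_jerk = motion_scores(landmarks, fps)
    return mean_speed < min_speed or mean_jerk > max_jerk
```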

Aside from the visual aspect, spotting deep fake audio presents a challenge in itself. Deep fake audio has a higher chance of fooling unsuspecting listeners, as distortions can reasonably be attributed to poor recording quality rather than prompting questions about authenticity.

Researchers at the University of Florida have developed a system that analyzes distinct cross-sectional traits found in both real and fake recordings and compares this data with test samples. The system recognized fake audio in around 99 percent of cases.
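
The sketch below is a loose illustration of the comparison idea rather than the University of Florida system itself: it summarizes recordings with standard spectral features (MFCCs, via the librosa library) and scores a test sample by its distance from statistics gathered over genuine speech.

```python
# Illustrative audio comparison: fingerprint recordings with MFCC statistics
# and measure how far a test sample sits from a reference of real speech.
import numpy as np
import librosa

def mfcc_stats(path, n_mfcc=20):
    """Mean/std of MFCCs -- a crude fingerprint of a recording's spectral shape."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def authenticity_score(test_path, real_paths):
    """Smaller distance to the genuine-speech centroid suggests authenticity."""
    reference = np.mean([mfcc_stats(p) for p in real_paths], axis=0)
    return float(np.linalg.norm(mfcc_stats(test_path) - reference))
```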

Protective measures against misinformation are in constant development. Collaboration with big tech companies such as Meta is still required to ensure these tools are integrated into platforms and ready as a first line of defense. Deep fakes can carry a devastating personal cost for those imitated, ruining reputations and humiliating the people in question. Such disinformation can also have serious knock-on effects globally in areas such as business and geopolitics. Detection technology must remain one step ahead of the fakes, and the proactive innovation of defenses should never falter.
