UK hopes to tame cyberspace wilderness
China Daily Global | Updated: 2019-04-16 09:32
Regulation of online content in the United Kingdom is a hot topic, with pressure building for several years to impose greater control over social media corporations and to make them more accountable.
The recent release of the UK government's white paper on online harms may be a welcome change for Britons who have spent the past few months stuck in a Brexit-dominated rut.
The issue has since exploded into mainstream politics. Among other tragic incidents, the death of 14-year-old schoolgirl Molly Russell shocked the nation and put social media giant Facebook in the spotlight. Her suicide was linked to online material that encouraged self-harm and depressive behavior, material that was apparently readily available on Facebook subsidiary Instagram, a photo- and video-sharing platform on which Molly spent hours.
The Online Harms White Paper outlines government plans to place the UK at the forefront of cyber-safety regulation, while leaving enough freedom for innovation in the digital economy to continue without unnecessary constraints.
The paper, a combination of hard legislation and soft-power measures, aims to ensure that social media and internet companies take more responsibility for their users' safety online, especially that of vulnerable groups such as young children. Social network platforms such as Facebook and Twitter operating in the UK will now be held to a new legal "duty of care", making them responsible for content submitted by their users. Fines will be imposed on companies that are too slow to take down questionable photos and videos.
This has long been a difficult problem for technology corporations: the rate at which human staff can recognize and remove dangerous content is vastly outstripped by the ocean of videos uploaded around the world every hour. Facebook, for example, used to rely on a handful of overwhelmed staff members to wade through videos and manually censor them, supplemented by features that made it easier for users to report suspicious activity.
Partly due to pressure from authorities, such companies are now investing more in artificial intelligence solutions integrated into chatbots and messenger apps to recognize worrisome behavior and content. These tools have been developed alongside suicide prevention and mental health organizations within the UK.
Britain's attitude toward online regulation has become tougher. With Brexit looming on the horizon, the UK's online safety policy is now broader than European Union regulation (which covers online mental health issues less extensively and enforces liability less forcefully), and stricter than that of the United States.
Challenges remain for the UK. Classifying what counts as harmful in an increasingly complex world is a difficult task. Some cases, such as terrorist content, will meet little resistance. Mental health forums and pages intended as safe spaces for open, well-intentioned discussion, however, may be harder to judge: there is a thin line between mutual support and vulnerable individuals being given questionable advice or encouragement.
However, the strong approach that the UK government has taken is in line with the level of action needed to tame the growing cyber wilderness. The UK government has stated: "The regulator will not compel companies to undertake general monitoring of all communications on their online services, as this would be a disproportionate burden on companies and would raise concerns about user privacy."
However, it says, there is "a strong case for mandating specific monitoring that targets where there is a threat to national security or the physical safety of children".
In UK cyberspace, balance is key.