Google plans to make it harder for terrorists to exploit its platform by introducing new measures designed to help the search giant remove extremist content more quickly and efficiently.
The measures — announced in The Financial Times (FT) on Sunday — come after the UK government raised questions about whether social media platforms have become breeding grounds and safe havens for terrorists.
Google general counsel Kent Walker wrote in the FT that Google already has thousands of people around the world reviewing content, in addition to image-matching technology that prevents videos from being re-uploaded once they have been removed.
But Walker admitted that Google and others in the tech industry need to do more to combat the rise of extremism. “While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done,” he wrote. “Now.”
In a bid to address the problem, Walker said that Google intends to:
- put more engineering resources into training software that uses artificial intelligence to identify videos promoting extremism — “We have used video analysis models to find and assess more than 50% of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove such content.”
- hire more people to flag inappropriate YouTube videos — “Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech.”
- take a tougher stance on videos that do not clearly violate YouTube’s policies, such as videos containing inflammatory religious or supremacist content — “in future these will appear behind a warning and will not be monetised, recommended or eligible for comments or user endorsements.”
- increase the number of independent experts in YouTube’s Trusted Flagger programme — “YouTube will expand its role in counter-radicalisation efforts.”
The Google column in the FT comes days after Facebook published a blog post detailing the various efforts it was making to try to tackle terrorism.
Google and Facebook have come under increasing levels of scrutiny in the wake of recent terror attacks in the UK and Europe.
UK Prime Minister Theresa May and French President Emmanuel Macron have taken a particularly hard line, launching a joint campaign last week that could see them create new laws to punish tech firms if they fail to remove certain types of content.
“We cannot allow this ideology the safe space it needs to breed,” May said after the London Bridge terror attack. “Yet that is precisely what the internet — and the big companies that provide internet-based services — provide.”
Digital campaigners are concerned that governments will end up stifling free speech and freedom of expression as they look to crack down on large tech platforms.
Jim Killock, executive director of Open Rights Group, said in a statement on the organisation’s website: “Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe.”
Killock added: “But we should not be distracted: the Internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused. While governments and companies should take sensible measures to stop abuse, attempts to control the Internet is not the simple solution that Theresa May is claiming.”