Tech Companies Pressured to Do More to Combat Extremism
The recent terrorist attack in Westminster has led to renewed calls from UK politicians for technology companies such as Facebook, Google and Twitter to do more to stop extremists from using their services as online platforms.
UK Home Secretary Amber Rudd specifically cited the example of WhatsApp, as this was the technology that the Westminster attacker used just minutes before launching his attack, which left five dead and 50 injured.
The minister’s criticism is just the latest example of senior politicians and others expressing their frustration, both at the apparent inability of tech companies to prevent their products from being used by extremists, and at the companies’ perceived reluctance to allow law enforcement officials access to their software as part of ongoing investigations.
Having previously criticised BlackBerry and Twitter for the way in which their products were used during the London riots of 2011, last year the UK Parliament passed the Investigatory Powers Act, which contains provisions that effectively demand privileged access to encrypted data for law enforcement agencies. The EU is also expected to push for similar police access from Google, Facebook and Twitter when it proposes new rules in June.
Despite their size, technology giants such as Facebook, Google and Twitter are not immune to political pressure, particularly when that pressure is applied simultaneously in several countries, and they are attempting to address governments’ concerns. They have already agreed with the European Union to set up a shared database to help one another identify illegal, inflammatory or extremist content, rather than relying on users to flag it.
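The shared database works by exchanging digital fingerprints (hashes) of content that one platform has already flagged, so that the others can recognise re-uploads without each having to review the material independently. A minimal sketch of the idea follows; the function names and the use of SHA-256 are illustrative assumptions, as real systems rely on perceptual hashes that survive re-encoding and minor edits:

```python
import hashlib

# Hypothetical shared database of fingerprints of flagged content.
# SHA-256 is a simplification: production systems use perceptual
# hashing so that re-encoded or slightly altered copies still match.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()

def register_flagged(content: bytes) -> None:
    """One platform flags content; only the hash is shared with partners."""
    shared_hash_db.add(fingerprint(content))

def is_known_extremist(content: bytes) -> bool:
    """Any partner checks an upload against the shared fingerprints."""
    return fingerprint(content) in shared_hash_db

register_flagged(b"bytes of a flagged video")
print(is_known_extremist(b"bytes of a flagged video"))  # True
print(is_known_extremist(b"an unrelated upload"))       # False
```

Note that only hashes are exchanged, not the content itself, which limits what the companies reveal to one another about their users' uploads.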
In the wake of the UK attack, Google, Twitter, Facebook and Microsoft put out a statement promising to create tools to remove such material, and while there is unlikely to be an easy technical solution, there are some steps the companies can take. The most obvious is a drastic expansion of the in-house teams dedicated to reviewing and removing suspicious content. That has already happened at Twitter, which was reported to have suspended 235,000 accounts in the first eight months of 2016.
Some companies are also working with other organisations to fight extremist content. One example of this collaboration is Jigsaw’s YouTube-based counter-messaging programme targeting potential ISIS sympathisers and recruits; another is Facebook’s co-operation with a US State Department anti-ISIS messaging campaign.
The question of police access to encrypted systems is a more difficult one. This requires effectively creating back doors, which could eventually be exploited by cyber criminals, potentially making the system less safe for all users. That is why, although they have shown a willingness to co-operate on measures to target and take down extremist content, tech companies are likely to continue to drag their feet when it comes to such access.