Tech giants pressured to auto-flag “illegal” content in Europe
Social media giants have again been put on notice that they need to do more to speed up removals of hate speech and other illegal content from their platforms in the European Union.
The bloc’s executive body, the European Commission, today announced a set of “guidelines and principles” aimed at pushing tech platforms to be more pro-active about takedowns of content deemed a problem. Specifically, it’s urging them to build tools that automate the flagging of such content and prevent its re-upload.
“The increasing availability and spreading of terrorist material and content that incites violence and hatred online is a serious threat to the security and safety of EU citizens,” it said in a press release, arguing that illegal hate speech also “undermines citizens’ trust and confidence in the digital environment” and can thus have a knock on impact on “innovation, growth and jobs”.
“Given their increasingly important role in providing access to information, the Commission expects online platforms to take swift action over the coming months, in particular in the area of terrorism and illegal hate speech — which is already illegal under EU law, both online and offline,” it added.
In a statement on the guidance, VP for the EU’s Digital Single Market, Andrus Ansip, described the plan as “a sound EU answer to the challenge of illegal content online”, and added: “We make it easier for platforms to fulfil their duty, in close cooperation with law enforcement and civil society. Our guidance includes safeguards to avoid over-removal and ensure transparency and the protection of fundamental rights such as freedom of speech.”
The move follows a voluntary Code of Conduct, unveiled by the Commission last year, under which Facebook, Twitter, Google’s YouTube and Microsoft agreed to remove illegal hate speech that breaches their community principles in less than 24 hours.
In a recent assessment of how that code is operating, the Commission said there had been some progress on hate speech takedowns. But it remains unhappy that a large portion of takedowns (now around 28 per cent, it says) take as long as a week.
It said it will monitor progress over the next six months before deciding whether to take additional measures, including possibly proposing legislation if it feels not enough is being done.
Its assessment (and any legislative proposals) will be completed by May 2018. After that, it would need to put any proposed new rules to the European Parliament for MEPs to vote on, as well as to the European Council, so there would likely be challenges and amendments before a consensus could be reached.
Some individual EU member states have been pushing to go further than the EC’s voluntary code of conduct on illegal hate speech on online platforms. In April, for example, the German cabinet backed proposals to hit social media firms with fines of up to €50 million if they fail to promptly remove illegal content.
A committee of UK MPs also called for the government to consider similar moves earlier this year, while the UK prime minister has led a push by G7 nations to ramp up pressure on social media firms to expedite takedowns, especially of extremist content, in a bid to check the spread of terrorist propaganda online.
That drive goes even further than the current EC Code of Conduct — with a call for takedowns of extremist material to take place within two hours.
However the EC’s proposals today on tackling illegal content online appear to be attempting to pass guidance across a rather more expansive bundle of content, saying the aim is to “mainstream good procedural practices across different forms of illegal content” — so apparently seeking to roll hate speech, terrorist propaganda and child exploitation into the same “illegal” bundle as copyrighted content. Which makes for a far more controversial mix.
(The EC does explicitly state the measures are not intended to be applied in respect of “fake news”, noting this is “not necessarily illegal”, so that’s one more online “problem” it’s not seeking to stuff into this bundle, adding: “The problem of fake news will be addressed separately.”)
It has divided its set of illegal content “guidelines and principles” into three areas: detection and notification; effective removal; and prevention of re-appearance of removed content.
Ergo, that’s a whole lot of “automatic tools” the Commission is proposing that commercial tech giants build to block the uploading of a poorly defined bundle of “illegal content”.
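To make concrete what an “automatic tool” to prevent re-uploads typically means in practice, here is a minimal, purely illustrative sketch of a hash-blocklist filter — the simplest form of the approach behind the industry hash-sharing databases platforms already use for known terrorist imagery. The class and method names are hypothetical, and a real system would use perceptual rather than exact hashing:

```python
import hashlib

class ReuploadFilter:
    """Illustrative sketch: block exact re-uploads of previously removed files."""

    def __init__(self):
        self._blocked_hashes = set()

    def register_removed(self, content: bytes) -> str:
        """Record the hash of taken-down content so exact copies are caught."""
        digest = hashlib.sha256(content).hexdigest()
        self._blocked_hashes.add(digest)
        return digest

    def is_blocked(self, content: bytes) -> bool:
        """Check an incoming upload against the blocklist of removed content."""
        return hashlib.sha256(content).hexdigest() in self._blocked_hashes

f = ReuploadFilter()
f.register_removed(b"previously removed video bytes")
print(f.is_blocked(b"previously removed video bytes"))  # exact copy is caught
print(f.is_blocked(b"slightly edited copy"))            # evades an exact-hash filter
```

Note the limitation the example surfaces: exact hashes are trivially evaded by re-encoding or minor edits, which is why production systems rely on perceptual hashing and machine-learned classifiers — precisely the “algorithms making judgement calls” that critics of the guidance object to below.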
Given the mix of vague guidance and expansive aims — to apparently apply the same and/or similar measures to tackle issues as different as terrorist propaganda and copyright — the guidelines have unsurprisingly drawn swift criticism.
MEP Jan Philipp Albrecht, for example, couched them as “vague requests”, and described the approach as “neither effective” in its aim of regulating tech platforms nor “in line with rule of law principles”.
He’s not the only European politician with that criticism, either. Other MEPs have warned the guidance is a “step backwards” for the rule of law online — seizing specifically on the Commission’s call for “automatic tools” to prevent illegal content being re-uploaded as a move towards upload-filters (which is something the executive has been pushing for as part of its controversial plan to reform the bloc’s digital copyright rules).
“Installing censorship infrastructure that surveils everything people upload and letting algorithms make judgement calls about what we all can and cannot say online is an attack on our fundamental rights,” writes MEP Julia Reda in another response condemning the Commission’s plan. She goes on to list a series of examples where algorithmic filtering has failed…
MEP Marietje Schaake, meanwhile, blogged a warning against making companies “the arbiters of limitations of our fundamental rights”. “Unfortunately the good parts on enhancing transparency and accountability for the removal of illegal content are completely overshadowed by the parts that encourage automated measures by online platforms,” she added.
European digital rights group EDRi, which campaigns for free speech across the region, is also scathing in its response to the guidance, arguing: “The document puts virtually all its focus on Internet companies monitoring online communications, in order to remove content that they decide might be illegal. It presents few safeguards for free speech, and little concern for dealing with content that is actually criminal.”
“The Commission makes no effort at all to reflect on whether the content being deleted is actually illegal, nor if the impact is counterproductive. The speed and proportion of removals is praised simply due to the number of takedowns,” it adds, condemning what it characterises as the Commission’s approach of “fully privatising freedom of expression online” and its “almost complete indifference” to diligent assessment of the impacts of that privatisation.
Source: TechCrunch (https://techcrunch.com), September 28, 2017 at 10:40AM