Facebook, Google and Twitter told to do more to fight fake news ahead of European elections
A first batch of monthly progress reports from tech giants and advertising companies on what they’re doing to help fight online disinformation have been published by the European Commission.
Platforms including Facebook, Google and Twitter signed up to a voluntary EU code of practice on the issue last year.
The first reports cover measures taken by platforms up to December 31, 2018.
The implementation reports are intended to detail progress towards the goal of putting the squeeze on disinformation — such as by proactively identifying and removing fake accounts — but the European Commission has today called for tech firms to intensify their efforts, warning that more needs to be done in the run up to the 2019 European Parliament elections, which take place in May.
The Commission announced a multi-pronged action plan on disinformation two months ago, urging greater co-ordination on the issue between EU Member States and pushing for efforts to raise awareness and encourage critical thinking among the region’s people.
But it also heaped pressure on tech companies, especially, warning it wanted to see rapid action and progress.
A month on, the Commission sounds less than impressed with the tech giants’ ‘progress’ on the issue.
Mozilla also signed up to the voluntary Code of Practice, and all the signatories committed to take broad-brush action to try to combat disinformation.
Although, as we reported at the time, the code suffered from a failure to nail down terms and requirements — suggesting not only that measuring progress would be tricky but that progress itself might prove an elusive and slippery animal.
The first responses certainly look to be a mixed bag, which is perhaps to be expected given the overarching difficulty of attacking a complex and multi-faceted problem like disinformation quickly.
Though there’s also little doubt that opaque platforms used to getting their own way with data and content will have to be dragged kicking and screaming towards greater transparency. Hence it suits their purpose to produce multi-page chronicles of ‘steps taken’, which lets them project an aura of action while continuing to indulge in their preferred foot-dragging.
The Guardian reports especially critical comments made by the Commission vis-a-vis Facebook’s response, for example — with Julian King saying at today’s press conference that the company still hasn’t given independent researchers access to its data.
“We need to do something about that,” he added.
Here’s the Commission’s brief rundown of what’s been done by tech firms but with emphasis firmly placed on what’s yet to be done:
Commenting in a statement, Mariya Gabriel, commissioner for digital economy and society, said: “Today’s reports rightly focus on urgent actions, such as taking down fake accounts. It is a good start. Now I expect the signatories to intensify their monitoring and reporting and increase their cooperation with fact-checkers and research community. We need to ensure our citizens’ access to quality and objective information allowing them to make informed choices.”
Strip out the diplomatic fillip and the message boils down to: Must do better, fast.
All of which explains why Facebook got out ahead of the Commission’s publication of the reports by putting its fresh-in-post European politician turned head of global comms, Nick Clegg, on a podium in Brussels yesterday — in an attempt to control the PR message about what it’s doing (or rather not doing, as the EC sees it) to boot fake activity into touch.
Clegg (re)announced more controls around the placement of political ads, and said Facebook would set up new human-staffed operations centers — in Dublin and Singapore — to monitor how localised political news is distributed on its network.
Although the centers won’t launch until March. So, again, this is not something Facebook has actually done yet.
The staged press event with Clegg making his maiden public speech for his new employer may have backfired a bit because he managed to be incredibly boring. Although making a hot button political issue as tedious as possible is probably a key Facebook strategy.
Anything to drain public outrage to make the real policymakers go away.
(The Commission’s brandished stick remains the same: if it doesn’t see enough voluntary progress from platforms via the Code, it could move towards regulation to tackle disinformation.)
Advertising groups are also signed up to the voluntary code. And the World Federation of Advertisers (WFA), European Association of Communication Agencies and Interactive Advertising Bureau Europe have also submitted reports today.
In its report, the WFA writes that the issue of disinformation has been incorporated into its Global Media Charter, which it says identifies “key issues within the digital advertising ecosystem”, as its members see it. It adds that the charter makes the following two obligation statements:
While the Code of Practice doesn’t contain a great deal of quantifiable substance, some have read its tea-leaves as a sign that signatories are committing to bot detection and identification — by promising to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.
But while Twitter has previously suggested it’s working on a system for badging bots on its platform (i.e. to help distinguish them from human users) nothing of the kind has yet seen the light of day as an actual Twitter feature. (The company is busy experimenting with other kinds of stuff.) So it looks like it also needs to provide more info on that front.
We reached out to the tech companies for comment on the Commission’s response to their implementation reports.
Google emailed us the following statement, attributed to Lie Junius, its director of public policy:
A Twitter spokesperson also told us:
At the time of writing Facebook had not responded to a request for comment.
via Twitter – TechCrunch https://techcrunch.com
January 29, 2019 at 05:13PM