UK parliament calls for antitrust, data abuse probe of Facebook
A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.
In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.
In an evidence session before the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model, warning the company risks burning user trust for good.
Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.
Interrogating the distribution of ‘fake news’
The UK parliamentary enquiry examined both Facebook’s own use of personal data to further its business interests, such as providing developers and advertisers with access to users’ data in order to increase revenue and/or usage of its own platform, and what Facebook claimed as ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica, which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles aimed at influencing elections.
The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model predicated on selling abusive access to people’s data.
“Far from Facebook acting against “sketchy” or “abusive” apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into “PR crisis mode”, when its real business model was exposed.
“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”
“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.
We’ve reached out to Facebook for comment on the committee’s report. Update: Facebook said it rejects all claims it breached data protection and competition laws.
In a statement attributed to its UK public policy manager, Karim Palant, the company told us:
We share the Committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.
We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for 7 years. No other channel for political advertising is as transparent and offers the tools that we do.
We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.
While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.
Last fall Facebook was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga, though it is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data was misused.
During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all, the committee says it posed more than 4,350 questions.
Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.
Among the report’s main recommendations are:
- clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
- privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
- a levy on tech companies operating in the UK to support enhanced regulation of such platforms
- a call for the ICO to investigate Facebook’s platform practices and use of user data
- a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
- changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
- a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
- a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users
Among the areas the committee’s report covers in detailed commentary are data use and targeting; advertising and political campaigning, including foreign influence; and digital literacy.
It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.
“Protecting our data helps us secure the past, but protecting inferences and uses of Artificial Intelligence (AI) is what we will need to protect our future,” the committee warns.
Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.
“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” says the committee. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”
The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, one that legally tightens their liability for harmful content published on their platforms.
Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18. The government said then that it has not ruled out doing so.
We’ve reached out to the DCMS for a response to the latest committee report. Update: A department spokesperson told us:
The Government’s forthcoming White Paper on Online Harms will set out a new framework for ensuring disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.
This week the Culture Secretary will travel to the United States to meet with tech giants including Google, Facebook, Twitter and Apple to discuss many of these issues.
We welcome this report’s contribution towards our work to tackle the increasing threat of disinformation and to make the UK the safest place to be online. We will respond in due course.
Source: https://techcrunch.com