In a report examining the production of “fake news”, members of the digital, culture, media and sport committee concluded that regulation needs tightening, with the introduction of a compulsory code of ethics for tech companies and their use of data.
They also recommended the creation of a new independent regulator to rule on what may be considered harmful content and reforms that would ensure tech companies assume legal liability for such content.
The report looked at disinformation and “fake news” online, the privacy of social media users, and the use of their data to influence their political choices online.
The investigation followed revelations that the data of millions of Facebook users and their “friends” had been passed to Cambridge Analytica by a third party, GSR, for use in political campaign management.
Facebook denied it knew about the passing of data—a breach of its third-party user terms—until it broke as a media story in March 2018.
The report said the breach should have been referred to chief executive Mark Zuckerberg immediately.
“The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited,” said the report. “It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018.
“The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.”
An ethical issue
Data management, security and privacy have become increasingly fraught ethical issues for companies building business models on vast quantities of user data, with frequent media stories about data security breaches by hackers or the misuse of data by companies.
In figures published last month, the Institute of Business Ethics revealed a sharp increase in the number of “ethical lapses” involving information technology.
Fuelled by the introduction of the EU’s General Data Protection Regulation (GDPR), stories of problematic data breaches took IT from third to second place in the list of sectors with the greatest number of ethical lapses.
Data protection and privacy was the third most-reported ethical issue in news reporting. It was not among the top nine in 2017.
The IBE said in its report: “Such media headlines also made individuals question how their data is stored and used, which forced some companies to defend the way they manage the personal information of their customers and other stakeholders.”
Damian Collins, chair of the culture committee, said: “The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights.
“Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.
“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission.
“We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”
The IBE’s findings, along with the committee report, will resonate with the boards of other tech and social media companies concerned about the prospect of formal regulation.
Earlier this month Instagram pledged to remove all images of “graphic self-harm” from the site following the suicide of teenager Molly Russell.
France and Germany have already introduced some legislation in relation to harmful content. Germany has the Network Enforcement Act, while French law allows judges to order the removal of content considered “disinformation”. Earlier this month the German information commissioner rebuked Facebook for its use of data.