As regulators around the world demand more information on companies' human rights impact, Meta, the home of Facebook and WhatsApp, has published its first annual human rights report.
As you might expect, the report attracted its share of criticism but, as an exercise in self-exposure, it has to be one of the more interesting corporate actions of recent times.
The business, controversial even at the best of times, acknowledges what many detractors already think. “We release this report with humility,” says Iain Levine, product policy manager for human rights at Meta, in a tweet, “and acknowledge there is much more work to be done. But we are committed to fulfilling our human rights responsibilities.”
Miranda Sissons, Meta’s director of human rights, tweeted a thanks, adding: “Doing new things is hard.”
She is undoubtedly right. Across 83 pages, Meta outlines the key human rights risks that could be present in its work; the action it is taking to mitigate them; and, creditably, some of the problems it has already found.
Clear and present dangers
The main rights that Meta's products could put at risk, and the ones acknowledged by the company, are, as observers might expect: freedom of opinion and expression; privacy; life, liberty and security of person; equality and non-discrimination; the best interests of the child; and participation in elections. Meta's platforms could be used to undermine any one of them.
And Meta comes clean about the way these rights can be undermined. A special report, looking at Meta platforms in India, found it could be helping third parties to advocate “hatred that incites hostility, discrimination or violence”.
The report, by law firm Foley Hoag, concludes that Meta faces “criticism and potential reputation risks related to risks of hateful or discriminatory speech by end users”.
This is nothing any user of Meta's platforms in those jurisdictions could not have told the company. But it is important that experts examined the issue and that Meta acknowledged the problem openly.
Meta did likewise with a review of operations in the Philippines, finding that the company could do good in offering a “voice to people” and in being “essential” for monitoring and defending human rights during the pandemic.
Dark web
But the review also produced an unsettling list of problems. The report “also highlighted salient human rights risks, including concerns about the misuse of our technologies for misinformation and disinformation, online harassment, incitement of violence, surveillance, online sexual exploitation, human organs trafficking and extremist activities.”
Of course, the reviews also contained recommendations for both India and the Philippines. Meta appears to know what it should be doing, even if observers worry the company has failed to publicly commit itself to full implementation.
With that in mind, it is unsurprising to find Meta’s report has not been universally welcomed. Critics point out that Meta published only a summary of its India report and, as mentioned, did not make a public commitment to Foley Hoag’s recommendations.
Elsewhere, the self-appointed watchdog the Real Facebook Oversight Board called the India report “inadequate” and a “whitewashing” of religious violence across India that it alleges is “fomented” on Meta platforms.
“Meta/Facebook always puts PR first,” the group says, calling the human rights report a “masterclass in spin and obfuscation”.
These concerns are echoed elsewhere. Gayatri Khandhadai, of the Business & Human Rights Resource Centre, tells Board Agenda the report failed to provide “tangible information or commitments”. Meta’s report is a “welcome step” but there is much more to be done on transparency and accountability in the sector, she adds.
“As governments, courts and the wider public have increasingly been taking these companies to task through regulations, penalties, pointed observations and campaigns, it is only a matter of time before they are legally required to produce public due diligence reports that meet the standards prescribed. Facebook and Meta must engage with civil society and affected communities on the shortcomings of the report and publish one that meets their international obligations.”
Above the parapet
That highlights a consequence companies must face when taking the same route as Meta: issuing a report invites scrutiny, and likely calls for further disclosure. The regulatory environment for business and human rights is ratcheting up. The EU’s Corporate Sustainability Due Diligence Directive has now been given the go-ahead and will force European companies to check on human rights in their supply chains.
This month Subodh Mishra, global head of communications at the proxy adviser ISS, writes for the Harvard governance blog that the human rights landscape is “changing rapidly” from soft to hard law, with “momentum toward mandatory due diligence”.
“Companies and also increasingly investors,” he writes, “are subject to regulation that is expanding in its scope and enforcement and that requires identification, mitigation, remediation, and disclosure of adverse human rights impacts.”
In May, the World Economic Forum in Davos offered boards guidelines for managing human rights. Launching the guide, co-authors John Morrison, chief executive of the Institute for Human Rights and Business, and Julia Olofsson, head of human rights at Ingka (IKEA) Group, said: “Whatever the reason, it is critical that companies have the right mechanisms in place to ensure that affected stakeholders are at the centre of every analysis and decision and that the voices of the most vulnerable are heard all the way up into the boardroom itself.”
While Meta is a US company and therefore reporting on human rights voluntarily (though clearly under public pressure), the global agenda is intensifying, with reporting demands about to tighten in many jurisdictions, including some of Meta’s biggest markets. Meta’s report is welcome but, as key figures freely admit, the company can expect to have to do more.