In Meta’s latest gaslighting attempt, the company has released its first annual human rights report, which states that it aims to cover “insights and actions from our human rights due diligence on products, countries and responses to emerging crises.” Let’s examine a few things in the report (find all 83 pages here).
The report states, “Meta joined the GNI [Global Network Initiative] in 2013, recognizing how ‘advancing human rights, including freedom of expression and the right to communicate freely, is core to our mission’ and that by joining, we hoped to ‘shed a spotlight on government practices that threaten the economic, social and political benefits the internet provides’.”
If we look at Meta through the lens of their own basic survival, one can easily recognize that their survival depends on the internet. Without it there would be no Meta. Many other tech companies would agree that the internet is vital to their sustainability. However, it’s sheer folly to expect their views on “government practices that threaten… benefits the internet provides” to be without bias.
They dedicate a huge chunk of their report to a section about reforming government surveillance. The first of the RGS Principles listed is “1. Limiting Governments’ Authority to Collect Users’ Information”. While people would generally agree that this and the other principles listed are worth pursuing worldwide, Meta successfully points attention away from themselves here, when the question lurking in the shadows of this principle is: what is Meta doing with the user data they have collected?
Other holes in the report indicate their unwillingness to be fully transparent. For the sake of future clarity, the acronym HRIA in the report is short for, “human rights impact assessment.”
At least three different firms have completed human rights impact assessments to date – Article One, BSR, and Foley Hoag LLP. Meta links to HRIAs for several countries in various places on their website, but goes into more detail on the Philippines and India. However, each of these is displayed differently, creating some confusion.
In the footnotes on page 57 under the page for the India Human Rights Impact Assessment, they state: Meta’s publication of this summary, and its response thereto, cannot be construed as admission, agreement with, or acceptance of any of the findings, conclusions, opinions or viewpoints identified by Foley Hoag, or the methodology that was employed to reach such findings, conclusions, opinions or viewpoints. Likewise, while Meta in its response references steps it has taken, or plans to take, which may correlate to points Foley Hoag raised or recommendations it made, these also cannot be deemed an admission, agreement with, or acceptance of any findings, conclusions, opinions or viewpoints.
In other words, Meta will not admit any fault.
Further, they note that steps they’ve taken as a result of the HRIA that appear to connect with Foley Hoag’s recommendations are also not “deemed an admission, agreement with, or acceptance of any findings, conclusions, opinions, or viewpoints.”
As further proof that the information in this report is misleading, on page 59 of the report they write, “The HRIA developed recommendations covering implementation and oversight; content moderation; and product interventions; and other areas.” The vague “other areas” are not shared beyond this mention.
Meta closes out this report with a quote, “Isaac Asimov once wrote, ‘The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom’.”
This suggests that Meta believes any blame for the role their platform plays in human rights violations lies not with the company, but with its users.
This report reads like a propagandized employee handbook that they can point to in order to say that they have taken action.
However, without fully knowing what the HRIA recommends and seeing the response Meta has taken from such an assessment, it is difficult to trust that they are taking steps to improve beyond what they themselves deem necessary, which may not be what the public would consider sufficient.
A for-profit, publicly traded company with the ability to choose which human rights impact assessments to act on (while insisting that acting on them is not an admission of guilt) invites questions about its motives. What is the motive of a for-profit company? Money.
Should we trust a multibillion-dollar tech company to accurately self-evaluate their worldwide impact on human rights without bias? Maybe some company, just not this one, and not via this self-congratulatory, liability-releasing press stunt of a report.