Guide: how to protect human rights in content governance

Governments and companies are under increasing pressure to address illegal or undesirable content and expression online, but hasty or poorly crafted solutions can threaten human rights.

Laws, policies, and content moderation practices that hurt free expression have a disparate impact on those most at risk of human rights violations, including journalists, activists, human rights defenders, and members of oppressed or marginalized groups, such as women, religious or ethnic minority groups, people of color, and the LGBTQ community.

Companies decide whether content is removed or amplified on their platforms, following their own rules, often in ways that are not transparent. Their actions, or lack of action, regarding content can cause or contribute to societal harm. Their role in spreading hate speech, disinformation, and illegal content, as well as in facilitating discrimination, all under a profit motive, is concerning.

At the same time, governments seeking to control the flow of information online have cited these very issues as a rationale for inherently blunt and disproportionate measures such as internet shutdowns. In some cases, state actors have also sought to deputize private companies to police expression using methods that are automated and opaque, provide no remedy, and otherwise fail to align with international human rights principles. These “solutions” can be leveraged to silence entire communities, and can lead to or hide atrocities.

Notably, Access Now’s Digital Security Helpline, which works with individuals and organizations globally to keep them safe online, has seen an increase in cases related to content moderation decisions that affect users at risk. In 2019, approximately 20% of cases (~311) were related to content moderation.

When actors in this space make decisions about content moderation — that is, when they engage in “content governance” — they have the duty to consider human rights. Governments are obligated to protect these rights, while companies are responsible for respecting them.

To assist in this process, we have published 26 recommendations on content governance: a guide for lawmakers, regulators, and company policy makers. These human rights- and user-centric recommendations are summarized below and elaborated in full in the paper. Since the context is different for each country and region, our recommendations are not one-size-fits-all prescriptions. Instead, they are meant to serve as a baseline for content governance that safeguards human rights.

We divide content governance into three main categories: state regulation, enforced by governments; self-regulation, exercised by platforms via content moderation or curation; and co-regulation, undertaken by governments and platforms together through mandatory or voluntary agreements.

Note that the following recommendations have been summarized; for detailed context and guidance, please refer to the full paper.

State regulation: 13 content governance recommendations

1. Abide by strict democratic principles
Formal legal instruments must contain protective safeguards, be established through a democratic process that respects the principles of multistakeholderism and transparency, and be proportionate to their legitimate aim.

2. Enact safe harbors and liability exemptions
A safe harbor regime should protect intermediaries from liability for third-party content; however, we do not support full immunity. Rules that protect intermediaries must still enable ways to address the spread of illegal content.

3. Do not impose a general monitoring obligation
A general monitoring obligation is a mandate that state actors directly or indirectly impose on intermediaries to undertake active monitoring of the content and information that users share. This violates the right to freedom of expression and access to information.

4. Define adequate response mechanisms
In order for the response to illegal content to be adequate and protect human rights, response mechanisms should be defined in national legislation, be tailored to specific categories of content, and include clear procedures and notification provisions, including notification to users acting as content providers.

5. Establish clear rules for when liability exemptions drop
A legal framework should establish when and how online platforms are understood to have “actual knowledge” of illegal content on their platform (such as upon receipt of a court order).

6. Evaluate manifestly illegal content carefully and in a limited manner
Content is manifestly illegal when it is easily recognizable as such without any further analysis, such as child sexual abuse material. Only a small percentage of content falls into this category, and illegal content generally requires independent adjudication to be treated as such; governments must be careful not to broaden the definition and thereby expand the scope of censorship. The only situation in which platforms should be held liable for failing to remove such content without an order from an independent adjudicator is after they have received a private notification from a third party.

7. Build rights-respecting notice-and-action procedures
Notice-and-action procedures are mechanisms online platforms follow for the purpose of combating illegal content upon receipt of notification. To avoid broad censorship of context-dependent user-generated content, we suggest different types of notice-and-action mechanisms depending on the type of content being evaluated.

8. Limit temporary measures and include safeguards
The temporary blocking of content must be used only in time-sensitive matters, and blocking must be strictly limited in duration and constrained to specific types of illegal content. These requirements should be clearly defined by law, so that states cannot abuse this tool to restrict access to information without an appropriate procedure for determining its illegality.

9. Make sanctions for non-compliance proportionate
If sanctions become disproportionate – such as the blocking of services or imprisonment of platform representatives – it is very likely that they will lead to over-compliance, which could harm free expression and access to information shared on online platforms.

10. Use automated measures only in limited cases
Given the enormous volume of content shared on platforms, the use of automated measures to detect illegal content is often necessary. However, the technology behind these measures is not capable of interpreting context before flagging content for blocking or takedown. Consequently, the use of automated measures should be strictly limited in scope, be based on clear and transparent regulation, and include appropriate safeguards to mitigate their possible negative impact on users’ human rights.

11. Legislate safeguards for due process
To provide legal certainty, predictability, and proportionality in content takedown measures, it is essential to ensure a process for well-founded decision-making, notifications, and counter-notifications before any action is taken.

12. Create meaningful transparency and accountability obligations
Regulators cannot properly monitor the implementation and impact of content governance laws if states and intermediaries do not issue transparency reports. These reports should focus on the quality of adopted measures, instead of the quantity of content removed from platforms.

13. Guarantee users’ rights to appeal and effective remedy
Errors are inevitable in content governance decisions; therefore, appeal mechanisms, including the option of counter-notices for content providers, are the principal guarantee of procedural fairness.

Self-regulation: 10 content governance recommendations

1. Prevent human rights harms
Platforms must build human rights protections into any new policies from the beginning and regularly consult third-party human rights experts.

2. Evaluate impact
Platforms should perform participatory and periodic public evaluations of content moderation and curation decisions, including proactively sharing information with researchers and civil society.

3. Be transparent
All content moderation and curation criteria, rules, sanctions, and exceptions should be clear, specific, predictable, and communicated to users in advance. This includes obtaining valid and informed consent from users regarding the rules that govern their activities on the platform.

4. Apply the principles of necessity and proportionality
The sanctions platforms impose on users for violating content moderation rules should take effectiveness and the impact on user rights into consideration (necessity) and be proportional to the harm being addressed. Banning a user should be a measure of last resort.

5. Consider context
Platforms should not apply content moderation rules in a “one-size-fits-all” fashion. In addition to human rights principles, it is essential to take social, cultural, and linguistic nuances into account when making content moderation decisions.

6. Don’t engage in arbitrariness or unfair discrimination
The application of context-based, nuanced content moderation decisions should be as coherent, systematic, and predictable as possible in order to avoid arbitrariness that may unfairly target marginalized communities.

7. Foster human decision-making
Platforms should minimize automated decision-making, since humans are best suited to evaluating context, and should give users the right to request a human review of their case.

8. Create notice-and-review mechanisms
Online platforms should notify users when a moderation decision has been made about their content or speech, and the notification should contain the requisite information to ask for a review of the decision.

9. Provide remedies
Platforms should provide effective remediation to users, such as by restoring removed content, when content moderation rules have been applied erroneously or excessively.

10. Engage in open governance
To improve the assessment of human rights risks, platforms should invite the participation of users and other interested parties in the governance of their applications and services.

Co-regulation: 3 content governance recommendations

1. Adopt participatory, clear, and transparent legal frameworks
In order to enable the necessary accountability mechanisms, co-regulatory models should be grounded in a binding legal framework that state actors adopt to provide safeguards for users.

2. Don’t shift or blur the responsibilities of actors
Governments should not permit or encourage private actors to decide upon the legality or restriction of user-generated content.

3. Prevent abuse
States should avoid any action that may lead to the abuse of co-regulatory measures, such as using these measures to impose general content monitoring obligations or to encourage intermediaries’ over-compliance and over-removal of user-generated content.