The Facebook Oversight Board Flexes Its Muscles
A private body within the IHRL system?
Several decisions from Facebook, Twitter, Twitch, Snapchat and others to suspend or ban President Trump from their platforms prompted a discussion on the limits of free speech online and the role that social media companies should play in moderating content.
However, Facebook’s decision stands out due to the possibility of review by an independent board. As Evelyn Douek noted, the decision to ban Trump can (and, in her opinion, should) be reviewed by the Oversight Board, an “international body” with the capacity to refer to international instruments (international human rights law) to justify its decisions.
This blog post highlights the road to the creation of the Oversight Board and how IHRL will be used as a core component of the body’s decision-making process. Finally, it questions whether it would be appropriate to see it as a human rights body. In other words, what is the role that a private body may play in implementing and construing international law, in particular IHRL?
Move Fast and Break Things: Facebook and Content Moderation
Originally announced by CEO and founder Mark Zuckerberg as the "Facebook Supreme Court", the Oversight Board is the latest and most ambitious attempt by the social media company to regain users' trust and put a halt to the techlash movement that has engulfed the industry. The body was created to have a global reach and deal with “content moderation”.
During its first years of operation in the early 2000s, Facebook did not have content moderation as a primary concern. Instead, Zuckerberg had other plans for the company: to "move fast and break things". The disruption of democratic discourse was just a predictable side-effect of the revolution Facebook was bringing to the realm of human communication. The motto was to launch new features as soon as they were coded and fix potential issues later.
Fast forward to 2016 when Donald J. Trump was elected POTUS and Facebook (alongside other social media platforms) was bashed for hosting false content that disproportionately favored the GOP and for allowing a Russian disinformation campaign - headed by the Kremlin-sponsored Internet Research Agency (IRA) - to thrive on the platform.
Zuckerberg's initial response to the accusation that Facebook had a decisive impact on the result of the election was to call it "a crazy idea". One year later, in 2017, the CEO backtracked and said that he regretted calling it "crazy" and that the issue is "too important [...] to be dismissive". Facebook's reckoning with its social and democratic responsibility has been a real rollercoaster ride.
The company always had a vision for its product: to connect people and give them a voice. The Arab Spring in 2011 offered a glimpse of what the social media platform was capable of when employed as a force for good. But, as with other new technologies, it all comes down to who is behind the wheel and what their intentions are.
Facebook was conscious of the extent of its powers, but, surprisingly, it turned a blind eye to the fact that, being an open platform, it could be used by third parties to hurt (and not only promote) democracy. After the election of Donald J. Trump, this became the prevailing narrative about social media amongst academics and journalists. Facebook, Twitter, and YouTube were (and still are) heavily criticized for not doing enough to keep their platforms safe for democracy. Critics argue that these companies should do a better job at moderating content. Furthermore, they should strive to remove false and hateful materials and curb the spread of borderline content.
Facebook claims that its algorithm now removes 95% of hate speech before users even see it, but how exactly this is done remains a mystery to people outside the company, for whom content moderation may resemble a "black box". What is more, there is a significant cleavage between Facebook's written community standards and the actual implementation of the company's rules for content moderation. In 2020, for example, Sophie Zhang, a former data scientist at Facebook, quit her position and accused the company of ignoring cases of "Coordinated Inauthentic Behavior" in places like Honduras and Azerbaijan even though they checked all the boxes put forth by the company's own rules.
Enter the Oversight Board
It is against this turbulent backdrop that the Facebook Oversight Board started its operations and will hand down its first content moderation decisions in 2021. The Board will work as an "appeals court", reviewing Facebook's internal decisions and having the final say on selected cases. The company signed a commitment to the Board acknowledging that all decisions will be final and binding. Furthermore, Facebook guarantees that the Board, although established by the company in the first place, will have absolute independence throughout its decision-making process. The Board will also have control over its docket. Individual users may file appeals to the Board and Facebook itself will have the prerogative of referring cases for review.
Recently, the Board announced the first six cases that it will hear at the beginning of 2021. After a user deleted a comment that was the object of one of the six original cases, the Board took up an additional case to replenish its docket. Two cases deal with Facebook's "hate speech" policy, two with "violence and incitement", one with "adult nudity and sexual activity", and a final one with "dangerous individuals and organizations". The small number of cases should allow the Board to undertake a careful analysis of each individual case, justifying the decision and making content moderation more transparent. In other words, the Board will open Facebook's "black box" for everyone to peek inside.
Decisions based on international human rights law: what does that mean?
And here is where IHRL will make an appearance. The Facebook Oversight Board Charter acknowledges in its introduction that "freedom of expression is a fundamental human right". But it does not stop there. Article 2, Section 2 of the Charter sets out the Board's "basis of decision-making" and declares that "when reviewing decisions, the board will pay particular attention to the impact of removing content in light of human rights norms protecting free expression". To put it differently, one of the cornerstones of the Board's decision-making process will be the norms of IHRL, particularly those surrounding freedom of expression.
To some, this may seem like another instance of the long international discussion, especially in the UN, on the relevance of IHRL for business or the role that businesses (particularly multinational corporations) should have in fostering and protecting human rights. To others, it may be perceived as a new example of private entities playing roles that are typically reserved to states.
A first question is whether IHRL can be appropriately applied by a private body created by a private company. Human rights, at least under most international human rights treaties, were created by nation-states to be applied vis-à-vis government actions. They were conceived as a framework of obligations that nation-states would have towards their populations. They were not thought of as directly applicable to companies.
True, corporations should respect people's rights. Yet, traditionally this meant complying with national laws, which would themselves embody and implement human and fundamental rights. This had the benefit of clarifying the very broad statements of rights in IHRL, with domestic bodies issuing guidelines on how to properly comply with such international standards.
Despite these conceptual challenges, some have strongly argued that IHRL is an "authoritative global standard" and should be the current benchmark for content moderation on social media (see Kaye, for instance). The argument is that companies have responsibilities concerning the implementation of human rights and that IHRL is better suited to deal with the global environment that internet companies helped create and now inhabit.
Furthermore, the UN Rapporteur on Minority Rights urged the Oversight Board to take minority rights into consideration as well. In a statement issued by the Rapporteur, the Board is depicted as “an innovative and ambitious initiative to regulate online expression, in particular hate speech, which is essential for the effective protection of vulnerable minorities worldwide.”
This not only places the Oversight Board in a position to actually interpret and implement IHRL but, most significantly, encourages it to do so. The members of the Board will, however, have to face the challenges that this represents. The first is to find a conceptual framework that can address the complexity of interpreting IHRL in this novel context. Second, the Charter of the Board focuses on freedom of expression and, as argued elsewhere, this may potentially create a hierarchy of rights, which would run counter to more traditional interpretations of IHRL.
There are also more practical challenges. The concept of freedom of expression in international instruments tends to be very different from the one enshrined in the First Amendment to the US Constitution. Finally, a related issue is that the Board will base its decisions on Facebook's community standards, which are not necessarily compatible with the standards of IHRL.
A private human rights body?
That being said, can the Oversight Board be understood as an international human rights body? These international institutions have traditionally been established by governments to interpret IHRL and to provide international mechanisms of redress to individuals who have experienced potential violations.
The Oversight Board shares some similarities with IHR bodies. It was established in order to act as an independent organization and designed to have the necessary leeway to hand down decisions even if that means acting against the express wishes or interests of its creator. Because it is positioned within the global context where social media platforms operate, the board's decisions will have an international reach.
However, there are important differences as well. Probably the easiest one to spot is that the Board was established by an international company and, as a consequence, has a private nature. The most significant, however, probably relates to the source of its authority. IHR bodies ground their authority on an agreement between states (which grants them some political backing and clout). The Board, in turn, finds its authority grounded mostly on a pledge from Facebook. Whether this is enough to grant it a status akin to a human rights body remains to be seen.
What is to come
It is up to the members of the Board to develop a robust framework that may overcome the conceptual (and likely practical) challenges that it may have in implementing its decisions based on IHRL. The framework should not only showcase IHRL as a legitimacy tool. It should go beyond, exploring avenues to turn IHRL into an operational blueprint for the Board going forward.
Although we may not have all the answers at this point, discussing whether the Facebook Oversight Board is indeed a human rights body is relevant not only because it may impact the decision-making process of other (maybe not so private) institutions, but also because it may serve as a template for similar experimentations by other relevant internet companies in the future.
Rights and Technology Coordinator at Institute for Technology and Society (ITS Rio)
PhD Candidate in International Law at Rio de Janeiro State University
Master’s in International Law, University of Cambridge
João Victor Archegas
Researcher at Institute for Technology and Society (ITS Rio)
Master in Law, Harvard University
It should be highlighted that under some constitutional arrangements, fundamental constitutional rights (including internalized IHRL) may have direct horizontal effect. This is particularly the case in Germany with the doctrine of “Drittwirkung”, which holds that such rights have effect even in private relations. (See: Kettemann, Mathias. Setting Rules for 2.7 Billion: A (First) Look into Facebook’s Norm-Making System: Results of a Pilot Study. Working Papers of the Hans-Bredow-Institut, 2020. Available at: https://leibniz-hbi.de/uploads/media/Publikationen/cms/media/5pz9hwo_AP_WiP001InsideFacebook.pdf)