Facebook is now allowing its oversight board to rule on moderation decisions about content that remains up on its Facebook and Instagram platforms. Previously, the board could rule only on whether to restore content that moderators had removed.
The Facebook Oversight Board was established last year in response to concerns that the social media company wielded too much unchecked power over what content appeared on its site. Current board members include several law professors, executives from think tanks and nongovernmental organizations, a former US federal circuit judge, and the former prime minister of Denmark. The board is managed by an independent organization that the company seeded with $130 million.
To appeal a post, a person must have an active Facebook account and must have exhausted the company’s appeals process. At that point, the user can take their petition to the oversight board.
In the six months since it was founded, the board has made eight decisions, basing its rulings on Facebook’s policies and overruling the company in five of them. The board’s charter says that its rulings are binding “unless implementation of a resolution could violate the law.” Since Facebook is available in every country except those that officially ban it, including China, it’s unclear which countries’ laws could override the board’s rulings.
CEO Mark Zuckerberg initially floated the board idea back in November 2018, writing in a blog post that the board would be in place by the end of 2019. The board heard its first case on October 22, 2020, two years after being announced. Board members were selected by Facebook, confirmed by a board of trustees (who were also selected by Facebook), and serve for a maximum of two three-year terms. Many board members appear to have other employment, though they reportedly receive six-figure salaries for around 15 hours of work per week, according to The New Yorker.
Today, the board has 19 members, with the potential to expand to 40. The size of the board, even at full capacity, stands in sharp contrast to the volume of potential appeals it could face. In the second quarter of 2020, Facebook removed 22.5 million posts for violating the company’s hate-speech policy. Now that the company is opening the process to all content on its two biggest platforms—Facebook alone has over 1.8 billion daily active users—the caseload will certainly grow.
Neither Facebook nor the oversight board has disclosed how many removals have been appealed. But based on reported figures, the board has ruled on around 0.00004 percent of posts moderated for violating hate-speech policies. (That percentage would almost certainly be lower still if it included posts moderated for reasons other than hate speech.)
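The back-of-the-envelope math behind that percentage can be sketched as follows, using only the figures reported in the article (eight rulings, 22.5 million hate-speech removals in Q2 2020); the variable names are illustrative:

```python
# Rough share of hate-speech-moderated posts the board has ruled on.
# Figures from the article: 8 rulings, 22.5 million Q2 2020 removals.
rulings = 8
hate_speech_removals = 22_500_000

share_pct = rulings / hate_speech_removals * 100
print(f"{share_pct:.5f}%")  # prints "0.00004%"
```

Note that this compares rulings against one quarter of removals under a single policy, so it overstates rather than understates the board's reach.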
Comparing those numbers with the federal appeals courts, after which the board is reportedly modeled, reveals the scope of the challenge. The US Courts of Appeals, which have 179 congressionally authorized judgeships across 13 circuits, heard 50,258 cases last year, equal to about 12 percent of the caseload at the district level. (Those numbers aren’t a perfect comparison, since appeals of district decisions don’t always occur in the same year as the original ruling.)
Facebook appears to be hoping that automation can extend the oversight board’s decisions to other posts the company believes they apply to. When that happens, Facebook says it will “take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well.” It’s unclear whether moderation that results from automated application of a board decision can itself be appealed to the oversight board.