
Flagged: Facebook Oversight Board Will Not Be Operational Until After The Election

Here is a snapshot of the September 30 copy of Flagged, our email newsletter looking at disinformation. Want to receive Flagged in your inbox every Wednesday? Sign up below!

This Week in Disinformation:

  • Facebook Oversight Board will not be operational until after the election.
  • Facebook removed Chinese and Russian disinformation networks.
  • DOJ pitched legislation to Congress to reform internet platforms’ intermediary liability.

Election Policy Updates

As the election draws nearer, social media platforms are updating their policies in preparation for disinformation and potential controversy.

  • Facebook announced that it will reject ads that prematurely declare the winner of the 2020 election. As record numbers of voters cast mail-in ballots due to COVID-19, the final tallies may not be known until after November 3rd. Though it will block ads that prematurely declare a winner, Facebook still will not block election-related ads after the polls close, so long as they were purchased more than a week prior to the election.
  • Google will “broadly block election ads after polls close” on Election Day, the company informed advertisers on Friday. This policy will remove ads running through Google’s main ad platforms, including Google Ads and YouTube, which reference “state or federal elections, candidates, ballot measures, political parties or elected officials, as well as ads placed against election-related searches.”
  • Twitter announced that it’s rolling out a “read before you retweet” prompt, which will (hopefully) encourage users to read an article before they retweet a link to it. While not an election-specific policy, Twitter’s testing of this feature showed that 40% more users open articles after seeing the prompt, potentially reducing the spread of low-quality information.

Deep Dives

A closer look at important news about disinformation and efforts to counter its spread.

While Facebook promised its Oversight Board--an independent, Supreme Court-like body that would rule on content moderation disputes--would be operational this summer, it will not be up and running until mid- to late October. Critics complain that the board's 90-day adjudication process ensures it will be useless until after the election.

The backlash to the Board's slow rollout has been considerable. Twenty-five outside experts from civil rights, academia, journalism, and politics formed their own alternative board, which they dubbed the Real Facebook Oversight Board. The group will hold public meetings to draw attention to Facebook's content moderation decisions and policies. Other critics have speculated that the board's slow rollout, and even the board itself, are merely intended to provide Facebook with political cover for harsh decisions. Daphne Keller of Stanford University's Cyber Policy Center said, "It's hard not to look at it, especially the way it has been crafted and slow-walked and sort of handcuffed in a sense, as something of a play to stave off harsher government regulation or oversight."

The Oversight Board has some severe limitations: it only decides the fate of posts that moderators have taken down, not those that moderators permit to remain up or that evade moderation entirely. Given that Facebook has routinely failed to identify disinformation on its platform--let alone remove it--the Board's PR seems to outpace its potential benefit in fighting harmful content.

Last week, Facebook removed several Russian and Chinese disinformation networks promoting their national interests both inside and outside of the US.

The platform removed three Russian networks: two tied to Russian intelligence agencies and one to the notorious Internet Research Agency (IRA). While the networks did not directly target the US election and reached primarily European and Asian audiences, they were associated with actors who have interfered before--namely those involved in the 2016 DC Leaks. One of the groups focused primarily on Syria and Ukraine, where Russia has military interests, driving users off-platform to junk news websites that dump purported leaks. Another group aimed to sway Black American voters and criticize Democratic candidate Joe Biden, according to Graphika.

Facebook also removed a Chinese influence network composed of fake accounts with AI-generated profile pictures. It pushed propaganda aimed at Taiwanese and Filipino audiences about naval activities in the South China Sea. Only a small portion of the activity targeted the US election: in 2019, the network ran just three groups related to the Biden and Trump campaigns, which had fewer than 2,000 followers collectively. This is the first time Facebook has removed Chinese coordinated inauthentic activity targeting US politics, and it showcases China's budding interest in US election interference, though its current efforts remain small-scale compared to Russia's.

These takedowns show how state-sponsored actors use social media not only to interfere in US elections, but also to shape public opinion beyond their borders in ways that advance their foreign policy interests and objectives.

In Case You Missed It

A snapshot of headlines and story updates from this week.

The FBI and Cybersecurity and Infrastructure Security Agency (CISA) issued another public service announcement (PSA) on Monday. The agencies alerted the public that foreign actors and cyber criminals are spreading disinformation across multiple internet platforms about purported cyberattacks on voter registration databases and voting systems. The FBI and CISA clarified that there have been no successful cyberattacks on election infrastructure, and that cyber actors have acquired US voter information before without impacting the electoral process, as that information is a matter of public record and readily accessible for purchase or by request.

UPDATE: A federal judge in the District of Columbia blocked the Trump administration's ban on new downloads of TikTok on Apple and Google app stores hours before it was set to start. Department of Justice lawyers argued that the app poses a significant national security risk, while attorneys for TikTok contended that a ban would restrict a form of First Amendment speech. The DOJ argued in court documents that blocking the ban would "infringe on the President's authority to block business-to-business economic transactions with a foreign entity in the midst of a declared national-security emergency." It also accused ByteDance's CEO of being "a mouthpiece" for the Chinese Communist Party. Separately, a federal judge in northern California struck down the Trump administration's ban on WeChat, the popular Chinese messaging app, on the grounds that it violates free speech protections.

The Department of Justice pitched legislation to Congress that would reform the safe harbor provided to internet platforms under Section 230 of the 1996 Communications Decency Act. DOJ's proposal would narrow the categories of content for which platforms can claim legal immunity in their moderation decisions, and revoke their immunity when platforms fail to act in good faith. DOJ has been considering reforms to Section 230 for almost a year now, and has revised previous drafts in response to feedback from stakeholders. The legislation is unlikely to pass during an election year, but drafts resembling it may surface in Congress next year, given that reconsideration of Section 230 is now an agenda item for both Republicans and Democrats, as well as the Trump and Biden campaigns.

  • Disinformation Fellow Nina Jankowicz joined the Checks and Balances Podcast with former CIA analyst Cindy Otis to discuss disinformation threats to the upcoming 2020 election. 

Science and Technology Innovation Program

The Science and Technology Innovation Program (STIP) serves as the bridge between technologists, policymakers, industry, and global stakeholders.