Facebook and Twitter tried to stop voter intimidation… and they failed miserably
Voter intimidation
On Oct. 20, registered Democrats in Florida, a crucial swing state, and in Alaska began receiving emails purportedly from the far-right group Proud Boys. The messages were filled with threats up to and including violent reprisals if the receiver did not vote for President Trump and change their party affiliation to Republican.
Less than 24 hours later, on Oct. 21, U.S. Director of National Intelligence John Ratcliffe and FBI Director Christopher Wray gave a briefing in which they publicly attributed this attempt at voter intimidation to Iran. This verdict was later corroborated by Google, which has also claimed that more than 90% of these messages were blocked by spam filters.
The rapid timing of the attribution was reportedly the result of the foreign nature of the threat and the fact that it was coming so close to Election Day. But it is important to note that this is just the latest example of such voter intimidation. Other recent incidents include a robo-call scheme targeting largely African American cities such as Detroit and Cleveland.
It remains unclear how many of these messages actually reached voters and how, in turn, these threats changed voter behavior. There is some evidence that such tactics can backfire and lead to higher turnout rates in the targeted population.
Disinformation on social media
Effective disinformation campaigns typically have three components:
- A state-sponsored news outlet to originate the fabrication
- Alternative media sources willing to spread the disinformation without adequately checking the underlying facts
- Witting or unwitting "agents of influence," that is, people who advance the story in other outlets
The advent of cyberspace has put the disinformation process into overdrive, both speeding the viral spread of stories across national boundaries and platforms and causing a proliferation in the types of traditional and social media willing to run with fake stories.
To date, the major social media firms have taken a largely piecemeal and fractured approach to managing this complex issue. Twitter announced a ban on political ads during the 2020 U.S. election season, in part over concerns about enabling the spread of misinformation. Facebook opted for a more limited ban on new political ads one week before the election.
The U.S. has no equivalent of the French law barring any influencing speech on the day before an election.
Effects and constraints
The impacts of these efforts have been muted, in part due to the prevalence of social bots that spread low-credibility information virally across these platforms. No comprehensive data exists on the total amount of disinformation or how it is affecting users.
Some recent studies do shed light, though. For example, one 2019 study found that a very small number of Twitter users accounted for the vast majority of exposure to disinformation.
Tech platforms are constrained from doing more by several forces. These include fear of perceived political bias and a strong belief among many, including Mark Zuckerberg, in a robust interpretation of free speech. A related concern of the platform companies is that the more they’re perceived as media gatekeepers, the more likely they will be to face new regulation.
The platform companies are also limited by the technologies and procedures they use to combat disinformation and voter intimidation. For example, Facebook staff reportedly had to manually intervene to limit the spread of a New York Post article about Hunter Biden’s laptop computer that could be part of a disinformation campaign. This highlights how the platform companies are playing catch-up in countering disinformation and need to devote more resources to the effort.
Regulatory options
There is a growing bipartisan consensus that more must be done to rein in social media excesses and to better manage the dual issues of voter intimidation and disinformation. In recent weeks, we have already seen the U.S. Department of Justice open a new antitrust case against Google, which, although it is unrelated to disinformation, can be understood as part of a larger campaign to regulate these behemoths.
Another tool at the U.S. government’s disposal is revising, or even revoking, Section 230 of the 1990s-era Communications Decency Act. This law was designed to shield tech firms, as they developed, from liability for the content that users post to their sites. Many, including former Vice President Joe Biden, argue that it has outlived its usefulness.
Another option to consider is learning from the EU’s approach. In 2018, the European Commission succeeded in getting tech firms to adopt the “Code of Practice on Disinformation,” which committed these companies to boost “transparency around political and issue-based advertising.” However, these measures to fight disinformation, along with the EU’s related Rapid Alert System, have so far not been able to stem the tide of these threats.
Instead, there are growing calls to pass a host of reforms to ensure that the platforms publicize accurate information, protect sources of accurate information through enhanced cybersecurity requirements, and monitor disinformation more effectively. Tech firms, in particular, could be doing more to make it easier to report disinformation, to warn users who have interacted with such content, and to take down false information about voting, as Facebook and Twitter have begun to do.
Such steps are just the beginning. Everyone has a role in making democracy harder to hack, but the tech platforms that have done so much to contribute to this problem have an outsized duty to address it.
This article is republished from The Conversation by Scott Shackelford, Associate Professor of Business Law and Ethics; Executive Director, Ostrom Workshop; Cybersecurity Program Chair, IU-Bloomington, Indiana University, under a Creative Commons license. Read the original article.
Story by The Conversation
An independent news and commentary website produced by academics and journalists.