Tuesday, June 21, 2022

Technology and the 2022 elections



In 2018, the Cambridge Analytica scandal shook the world as the public learned that data from up to 87 million Facebook profiles had been collected without user consent and used for ad targeting in the U.S. presidential campaigns of Ted Cruz and Donald Trump, the Brexit referendum, and more than 200 elections around the world. The scandal brought unprecedented public awareness to a long-brewing trend: the unchecked collection and use of data, which has been intruding on Americans’ privacy and undermining democracy by enabling ever-more-sophisticated voter disinformation and suppression.

Digital platforms, massive data collection, and increasingly sophisticated software give bad actors new ways to generate and spread convincing disinformation and misinformation at potentially massive scale, disproportionately hurting marginalized communities. With the 2022 midterm elections around the corner, it is important to revisit how emerging technologies are used to suppress voting rights, and how the U.S. is going about protecting these democratic ideals.

How emerging technologies boost disinformation and misinformation

Several factors enable the easy spread of disinformation and misinformation on social media platforms. The information overload of social media creates an overwhelming, chaotic environment that makes it difficult for people to tell fact from fiction, opening avenues for bad actors to spread disinformation that disproportionately hurts marginalized groups. Historically, such actors have intentionally circulated false voting dates and polling locations; threats of intimidation by law enforcement or armed individuals at polling places; and messages exploiting common doubts among Black and Latino voters about the efficacy of political processes.

Social media algorithms, meanwhile, are engineered to serve users the content they are most likely to engage with. These algorithms leverage large-scale collection of users’ online activity, including their browsing behavior, purchase history, location data, and more. As users repeatedly encounter content that aligns with their political affiliation and personal beliefs, confirmation bias is reinforced. This allows misinformation to spread and harden within given circles, culminating in the tensions that fueled both the Stop the Steal movement after the 2020 U.S. presidential election and the January 6 insurrection.
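To make that feedback loop concrete, below is a minimal, hypothetical sketch of engagement-based ranking in Python. The data fields and scoring rule are illustrative assumptions for this article, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topic: str

@dataclass
class UserProfile:
    # Illustrative profile: maps a topic to the fraction of past posts
    # on that topic the user engaged with (clicked, shared, liked).
    topic_affinity: dict

def engagement_score(user: UserProfile, post: Post) -> float:
    """Predict engagement from past behavior alone (a toy stand-in for an ML model)."""
    return user.topic_affinity.get(post.topic, 0.01)

def rank_feed(user: UserProfile, candidates: list[Post]) -> list[Post]:
    # Sorting purely by predicted engagement keeps surfacing content the
    # user already agrees with, reinforcing the confirmation bias above.
    return sorted(candidates, key=lambda p: engagement_score(user, p), reverse=True)
```

Because the ranking optimizes only for predicted engagement, content that confirms a user’s existing beliefs rises to the top regardless of its accuracy.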

Microtargeting has also facilitated the spread of disinformation, enabling both political entities and individuals to disseminate ads to narrowly defined groups with great precision, using data collected by social media platforms. In commercial settings, microtargeting has come under fire for enabling discriminatory advertising, depriving historically marginalized communities of opportunities in jobs, housing, banking, and more. Political microtargeting has faced similar scrutiny, especially given the limited monitoring of political ad purchases.
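As a rough illustration of that precision, the hypothetical audience filter below selects voters by inferred attributes. The attribute names are invented for this example and do not correspond to any real ad platform’s API.

```python
from dataclasses import dataclass

@dataclass
class VoterRecord:
    zip_code: str
    inferred_party: str     # e.g., inferred from browsing and purchase history
    inferred_religion: str  # e.g., inferred from location data

def select_audience(voters, party=None, religion=None, zip_codes=None):
    """Return only the voters matching every attribute that was specified."""
    audience = []
    for v in voters:
        if party and v.inferred_party != party:
            continue
        if religion and v.inferred_religion != religion:
            continue
        if zip_codes and v.zip_code not in zip_codes:
            continue
        audience.append(v)
    return audience

# A campaign could then send a different tailored message to each narrow slice.
```

The narrower the slice, the less likely anyone outside it ever sees, or can fact-check, the message it receives.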

Geofencing, another data collection method that enables further microtargeting, has also been used by political campaigns to capture when individuals enter or exit certain geographically prescribed areas. In 2020, CatholicVote used the technology to target pro-Trump messaging at churchgoers, collecting voters’ religious affiliations without notification or consent. This opens up a new avenue of data collection that can also feed the algorithms and microtargeting technologies described above.
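The fence check itself is simple geometry: a device reporting coordinates inside a chosen radius is logged as having visited the location. Below is a minimal sketch; the haversine distance formula is standard, but the radius and scenario are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(device_lat, device_lon, fence_lat, fence_lon, radius_m=150.0):
    # Any device pinging from within radius_m of the fence center is
    # recorded as present -- e.g., as having attended a service.
    return haversine_m(device_lat, device_lon, fence_lat, fence_lon) <= radius_m
```

Once a device ID is logged inside the fence, it can be matched to an ad profile, which is how attendance at a place of worship becomes a targetable attribute.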

Automation and machine learning (ML) technologies also exacerbate disinformation threats. Relevant technologies include everything from very simple forms of automation, like computer programs (“bots”) that operate fake social media accounts by repeating human-written text, to sophisticated programs that draw on ML methods to generate realistic-looking profile pictures for fake accounts or fake videos (“deepfakes”) of politicians.

None of this is new, so what makes it worse?

It is important to recognize that many of these technologies are simply modernized, digital versions of political tactics that candidates have long used to gain strategic advantage over one another. It is not uncommon, for example, for politicians to tailor the rhetoric of their television advertisements or campaign speeches to attract different demographics. First Amendment protections also allow politicians to lie about their opponents, putting the onus on voters to evaluate what they hear on their own. The disenfranchisement of minority voters likewise long predates the internet, stretching from Jim Crow laws, through changes to the Voting Rights Act of 1965, to modern-day felony disenfranchisement, voter purges, gerrymandering, and the inequitable distribution of polling stations.

However, several factors make emerging campaign technologies additionally effective and harmful. The first is that these tools are universally accessible at low or no cost, meaning they can be employed and manipulated by anyone within or outside the U.S. to target protected groups and undermine the sanctity of American democracy. For example, during the 2016 presidential election, Russian propagandists used social media to suppress Black votes for Hillary Clinton in order to aid Donald Trump.

A second factor is the unfettered data collection that is essential to microtargeting. Voters are often unaware of, and have little control over, the kinds of data collected about them, be it their purchase history, web searches, or the links they have clicked. Voters likewise have very little control over how social media platforms have profiled them, how that profile shapes the content in their feeds, or how what they see compares with what other users see. Meanwhile, microtargeting technologies give political actors and other agents extensive access to voter data on race, political affiliation, religion, and more, letting them hone their messages for maximum effectiveness.

How to proceed

In response to growing concern over electoral disinformation, the U.S. government has worked to establish systems to protect election security. The U.S. Department of State’s Global Engagement Center seeks to proactively address foreign adversaries’ disinformation attempts, and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency works collaboratively with frontline election workers to protect America’s election infrastructure. More recently, the Department of Homeland Security created the short-lived Disinformation Governance Board, whose work was put on hold after public backlash.

Meanwhile, Congress has made multiple attempts to combat social media’s algorithmic amplification of fake news and political microtargeting, including the Banning Microtargeted Political Ads Act, the Social Media NUDGE Act, and various calls to reform Section 230. While partisan disagreements over the definitions of disinformation and misinformation have continually hindered progress, it is essential that Congress, technology companies, and civil rights activists work together to combat these challenges to our democracy. Below are some actions that could be taken:

1. Voter protections should be extended to the online space.

Under federal law, in-person voter intimidation is illegal. Section 11 of the Voting Rights Act makes it illegal to “intimidate, threaten, or coerce” another person seeking to vote. Section 2 of the Ku Klux Klan Act of 1871, meanwhile, makes it illegal for “two or more persons to conspire to prevent by force, intimidation, or threat” someone from voting for a given candidate. The definition of voter intimidation extends to the spread of false information and threats of violence.

Such protections should be extended to the online space. H.R. 1, the For the People Act of 2021, which was blocked in the Senate, proposed among its legislative reforms an expansion of platform liability that would criminalize voter suppression. Passing such a reform would make it a federal crime to intimidate voters online or to distribute disinformation about the time, place, and other details of voting.

2. A federal privacy framework can curb unfettered access to consumer data.

The lack of federal privacy legislation enables the unmitigated data collection that allows microtargeting and algorithms to discriminate based on protected characteristics. With the recent unveiling of the American Data Privacy and Protection Act, Congress has taken a step toward instituting much-needed privacy legislation. Most importantly, the bill prohibits the collection and use of data for discriminatory purposes. More generally, it also establishes organizational requirements for data minimization, enhanced privacy protections for children, and a limited private right of action. Passage of this bill would be integral to improving online protections for voters.

3. Better accountability mechanisms are needed for big tech companies.

There has been little oversight of how tech companies have handled the many problems of disinformation and privacy infringement. Over the years, scholars and civil rights organizations have repeatedly flagged instances in which tech companies failed to remove misinformation or incitements to violence that violated the companies’ own policies.

Going into the 2022 elections, platforms continue to set and enforce their own policies on misinformation, microtargeting, and more. As of now, Twitter has banned political ads from its platform entirely. Facebook, meanwhile, suspended political advertising after the 2020 presidential election but has since resumed it, while maintaining bans on ads that target sensitive attributes. Spotify recently brought back political ads after a two-year ban.

Disinformation and misinformation are cross-platform problems, and coordinated approaches are necessary to address them comprehensively. Brookings scholar Tom Wheeler has proposed creating a focused federal agency that complements the ongoing work of the Department of Justice and the Federal Trade Commission, with the ultimate aim of holding technology companies accountable for protecting the public interest. Such a digital agency would spearhead standard-setting activities that define the steps social media companies should take to mitigate platform misinformation, prevent privacy abuses, and more. This would establish a means of external oversight and increase public accountability among social media companies.

Conclusion

With the 2022 elections around the corner, the same issues of algorithmic amplification of disinformation and misinformation and of microtargeted political ads will resurface. Much work remains to be done for the U.S. to rise to the challenge of protecting the integrity of our elections.


Meta is a general, unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and not influenced by any donation.

Thanks to Mauricio Baker for his research assistance.


