
Emilio Ferrara fuses communications with computer science to track down social media manipulators

Emilio Ferrara has become a groundbreaking scholar at the intersection of data science, communications and politics. During the 2016 presidential election cycle, Ferrara’s analysis showed how automated bots were shaping political debates on social media — research that was later cited in federal investigations into Russian interference in that election.

During last year’s elections, Ferrara — who has been on the faculty of the department of computer science at USC Viterbi since 2015 and joined USC Annenberg in 2020 as an associate professor, now holding a dual appointment — deepened his examination of political discourse on social media, this time focusing on Twitter and how bots there were amplifying right-wing conspiracy theories.

Examining such intersections between technology and communications is what drew Ferrara to USC Annenberg. “Communication offers a lens to understand the fundamental phenomena that govern our society and our behaviors,” he said. “Engineering, computer science, and AI offer a wealth of methodologies, tools, and computational frameworks to study these issues. Joining Annenberg allows me to pursue these new avenues.”

Since Fall 2020, Ferrara has taught two graduate-level USC Annenberg courses: COMM 557, “Data Science for Communication and Social Networks,” and COMM 647, “Network Society.” He noted that students in those classes “have very insightful takes on timely topics and are able to draw new connections that enrich my own views.”

A fascination with how new communication technologies shape human lives is a consistent theme in Ferrara’s academic career, which began in his native Italy. “I was an early adopter of the internet,” he notes. “I’ve always been attracted by anything around emerging technologies, in learning how things work. Not only the artifacts, the computer systems, but also the humans. How do these technologies connect us and affect the fabric of society?”

Ferrara spoke with USC Annenberg about how his own journey through computer science and communication has informed his research into how these disciplines can be used to manipulate the public — and how the public can fight back.

Where did your fascination with human-computer interaction begin?

My master’s degree dissertation was about creating an online game where people built their own cities and then interacted with each other. I became fascinated not so much with the coding and programming challenges, but with really looking at the behavior of people online. How did players use this platform, and how did it change the social contract?

You’ve also focused on how nonhuman actors — bots — can distort and manipulate debate in the public sphere. Where did your interest in social media manipulation begin?

My interest in developing computational tools to understand network manipulation started with criminal networks. I was born and raised in Sicily, so a problem near and dear to my heart was studying Mafia syndicates and criminal gangs — finding ways to model their activity. I realized that many of the techniques we developed in the space of criminal network analysis could be repurposed and further developed to identify the signatures of other malicious actors online. As we were getting closer to the 2016 [U.S.] election, we had seen with the Brexit vote in the United Kingdom that there was a suspicious effort to distort social media. Other researchers had posted findings in regard to Brexit, but it didn’t seem like people were paying attention to the 2016 election in the United States.

After researching the ways in which social media bots were influencing the political debate before the election, what conclusions were you able to draw?

There was clearly a lot of problematic activity. We estimated that around 15% of the accounts engaged in the political discussion about the election were probably automated, contributing a few million tweets to political discussions on social media. One thing we highlighted was the fact that human users were repeating bot-generated content almost at the exact same rate that they retweeted human-generated content — and this might have contributed to phenomena like the spread of false news or conspiracies.

Building on that work, in the run-up to the 2020 election you focused your research on political activity on Twitter. What role did bots play that time around?

We saw that bots were used to amplify certain stories — particularly, stories that were associated with conspiracy theories, specifically the QAnon conspiracy theory. I think the most important finding here is that bots exacerbate the consumption of content within political echo chambers, so they increase the effect of pre-existing political polarization.

What can individuals do to avoid being manipulated by social media bots?

The vast majority of misinformation, conspiracies, rumors and false news spread because of human users. So, it is always important, before sharing something online, to assess the sources of that information. Some political bot content is created to divide us, to create chaos, and to increase the level of anger and engagement in these political conflicts.

What role should government and industry play in protecting the public?

Ultimately, a joint effort between government, industry, social media service providers, as well as academic groups, is really the only solution to overcome these types of problems. There is not one single stakeholder who can singlehandedly find, propose and implement a bulletproof solution that will forever address these problems. The continuous attention and work of both the research community and practitioners alike is required to fight social media manipulation and abuse.