Jennifer Petersen.
Photo by Olivia Mowry

‘How Machines Came to Speak’

As a communication scholar, Jennifer Petersen has long studied the history of media technologies as they relate to free speech, focusing specifically on how such technologies shape the way judges and justices interpret the First Amendment.

“The questions of speech and the press were very simple in the 18th and 19th centuries,” said Petersen, associate professor of communication. “But with the advent of cinema, radio, television and then, later, with the start of the internet, you really begin to have these conversations about, ‘what is speech?’”

It was a 1943 court case, centered on two students, both Jehovah’s Witnesses, who refused to salute the American flag, that started Petersen on her research trajectory.

“The case came about in the context of World War II, when a lot of states were passing laws that required students to salute the flag in class to show their loyalty to country and to foster unity,” Petersen said. “The students didn’t want to salute the flag because it was a violation of their religion and constituted worshipping a graven image, and they were thus kicked out of school.”

The case went to the Supreme Court, and the justices determined that the gesture of saluting the flag was a form of utterance. For Petersen, this is an important distinction because it established a precedent for an action being considered speech.

Fascinated by the past and future of the legal category of speech, Petersen turned to examining new technologies, including film, radio and computer code, as the subject of her latest book. She spoke with USC Annenberg about How Machines Came to Speak: Media Technologies and Freedom of Speech from Duke University Press.

How Machines Came to Speak, published by Duke University Press.

The examination of cases in the book is structured around three different “new” technologies: film, radio and computer code. What’s the common thread running through these discussions?

The questions are similar. For example, do silent moving images that are animated by a machine count as speech? This is one of the things people were concerned about with film in the past, as some people at the time thought that the playing of a film had a direct visceral, physical effect on its audiences — more a form of physical stimulation than of public opinion. We have something similar with computer code: a return to concerns about the boundary between action and thought. What is actually a human expression or human thought, and what is automation and action being done by the machine? There are a lot of similarities in the kinds of philosophical issues involved in trying to decide whether computer code is speech, as there were with film. In the former, the boundaries of speech are being defined in contrast to bodily action (often, of the masses). In the latter, speech is defined in contrast to automation. The opposite of speaking, or of having voice in culture or politics, goes from “brute” physicality to automated computational processes.

How have these arguments expanded with the internet?

The way we understand the “speech” aspect of freedom of speech has been influenced by our everyday experiences of media and how we communicate via technologies. That’s part of the important trajectory I’m tracing in the book — how these technologies have in many ways shaped our legal freedoms. Right now, we have technologies where we’re having a very hard time figuring out exactly who is speaking. For example, recommendation engines or bots — all these new forms of expression that use algorithmically produced “utterances.” This is something we should now be tracking and thinking about. Twenty or 30 years down the line, we’re probably going to have different conceptions of speech in the law based on this technology.

As you were researching, did anything come up that surprised you?

There’s so much that surprised me! One thing I studied was cases about search engines and free speech — whether search results are considered speech. Those cases focused entirely on the speech of Google, as a search engine. But if the search results are Google’s speech, are they then not the speech of the people who created the websites? Of course they are, but that doesn’t come up in any of these legal discussions. And I think that’s surprising to a lay person when we think about search results in the current legal cases and discussion of search engine speech.

With all the controversy over who is responsible for what shows up in algorithmic search results, how are you seeing that play out?

I think these are also really interesting moments, because these companies are trying to have it both ways. On the one hand, they’re saying, this isn’t our speech, right? Especially when they’re confronted with the critique that they might be misrepresenting a person or persons, they say, “We are just a conduit; we’re just giving you what you want.” But on the other hand, in a different set of venues — like when companies were suing them [Google] for anti-competitive business practices — they were saying, “Oh no, you can’t do that, because the search results are our speech. We are like newspaper editors, and these are our selections.” At the very same time, they’re [Google] making opposing claims about whether or not they’re a speaker in different types of controversies or disputes.

What do you hope readers walk away knowing after reading your book?

I hope that readers walk away with a deeper appreciation of the way that our interactions with our devices shape our understandings of what it means to speak to one another, to participate in the public sphere, and the ways in which mundane objects and definitions shape law.


Additional reporting by Ted Kissell