Jennifer Petersen. Photo by: Olivia Mowry

Jennifer Petersen examines the history of free speech law, from corporate personhood to AI

One of USC Annenberg’s newest faculty members, associate professor of communication Jennifer Petersen, wasted no time getting into the classroom — both physical and virtual. Fresh from her previous position at the University of Virginia, she taught the graduate-level “Feminist Theory and Communication” course in Fall 2019. This spring, she taught the undergraduate “Social and Economic Implications of Communication Technologies” course, first in person and then via Zoom as all USC instruction moved online in response to the COVID-19 pandemic.

An expert on media, law, technology, and the intersection of those fields, Petersen is working on her second book, How Machines Came to Speak, under contract with Duke University Press, which explores the history of free speech law in the United States from the early 20th century to the present. She explains that most books about this period have assumed a kind of “progress narrative,” emphasizing the expansion of the right to free speech over the years. “I have been interested in telling a sort of different story than is typically told about this,” she said. “I’m interested in looking at how judges and justices have understood speech or expression, and how that’s been impacted by changes in media technologies.”

In this interview, Petersen shares her conclusions about what the evolving definition of speech implies for future cases involving ever-more-complex nonhuman “speakers.”

What are the most important ways that the Supreme Court’s definition of speech has changed over the years, and what have been the lasting effects of that change?

Getting up and speaking or writing something down was always considered speech, but more embodied actions, like protests, weren’t always considered speech. Now, speech has become so broad as to include almost any transfer of information — and this more capacious definition of speech becomes a mechanism to combat economic regulation.

One of the best-known examples of a pro-business argument for free speech is Citizens United v. Federal Election Commission, which held that corporate political spending is protected speech. How did that case factor into your research?

Citizens United is a major reason I started doing the book. I was like, “How can money be speech?” It was part of what inspired me to be interested in this project and to think about, how is speech defined, and what does the Supreme Court mean when they talk about speech? That led me to think back about earlier technologies — especially radio — and their impact on speech. I realized there was just an amazing and under-explored trove of rationales and cases.

How are these precedents playing out now when it comes to newer technology that generates “speech”? Are things like algorithms, digital assistants and artificial intelligence considered “speakers”?

Legal scholars have debated whether even less-“speechy” things, like car alarms, would be considered speech under our current legal structure. They’ve pointed out that the direction of legal reasoning opens up this possibility, whether or not it would actually happen. Now, I don’t think justices are actually going to go for that — they’ll find ways not to do it. And some legal scholars have more prescriptive arguments about how to draw the line between information transfers that are speech and those that are merely machine signals.

In the case of algorithms like those in search engines or social media, you have humans involved both in writing the algorithms, and as users whose searches shape subsequent search results. So, if this is speech, whose speech is it?

Right: We have this example of distributed speech that there is really no model for in the law, as far as I can tell, and I think it’s very troubling. Who is a speaker? Who is responsible for speech online? These examples fit incredibly poorly in our legal structure. Whatever happens in the next few years, the next decade, is really going to be interesting. And it may be that we have a very poor fit between the law and actual technologies and social practices.

What role can your research play in resolving these questions?

I hope to get people looking at not only the issue of rights, but also the ways in which speech, and different definitions of speech, get mobilized strategically in different legal arguments. I think there’s going to be some fancy legal footwork to find reasons to halt the expansion of the definition of speech to cover speech by artificial agents or intelligence. So, there can be speech without a speaker — previous cases have held that you don’t have to have a human being behind the speech. But in recent cases dealing with technology, they’re looking for human beings. When they deal with questions of corporate speech, there does not need to be an individual speaker, will or intent. But, in cases dealing with algorithmic speech, they are looking for individuals making human judgments.

So, the same line of reasoning that led to the separation of speech from speakers also led to corporate personhood and very pro-business outcomes, right?

Yes, I call this a “post-human” approach to speech. It simplifies things and it allows justices not to think about who is speaking, whose speech rights are involved. This way of talking about speech that disarticulates it from the speaker makes it really easy to kind of forget the political economy of the expression. Now, when it comes to new technologies, they’re really interested in putting those speakers back in and worrying about human agency.

So, they’re shying away from giving First Amendment rights to artificial intelligence, even though corporations have them?

I do think people are getting the heebie-jeebies about artificial intelligence. Because it is a logical progression: You can have speech without a speaker, so why can’t artificial intelligence be a speaker? But I think they’re going to call on different sets of precedents and legal tradition to not allow those kinds of applications to nonhuman “speakers.” The tension is going to be heebie-jeebies over artificial intelligence on one side of the argument, versus pro-corporate leanings on the other. This fascinating contest of impulses is going to be borne out over the next few years.

And as you’ve tracked it through your research, it’s been kind of a long time coming.

Yes, it has. In corporate cases, they’re very willing to see speech without speakers, but in new-technology cases, they really want there to be a speaker. Much of what we are seeing now was first articulated in relation to radio. Placing speech rather than speakers at the heart of free speech cases was initially a way of applying freedom of speech that did not only endorse the rights of media owners, but also protected a broad social interest in the distribution of information. This has since been turned on its head, so that it is often a tool to enhance the rights of corporate rather than individual speakers.