[Image: Two shadowed figures shaking hands, with a brain labeled "AI." Image generated by AI using iStock.]

We hope this is irrelevant

The moment we surrendered our decisions to machines, something fundamental slipped from our grasp. What we once called human judgment – logic paired with intuition, emotion, and ethics – began to fade through quiet acquiescence. In rooms where leaders once debated the fate of nations, AI now whispers its recommendations, faster than human minds can process, more calculated than any collective wisdom we have ever known. And as those recommendations get accepted, we begin to lose something no one thought to protect. 

No warning signs blinked when this shift occurred. It felt benign – a helpmate, an intellectual assistant. Machines read data faster, parsed patterns hidden from our perception, and revealed truths we hadn't yet found. But with each decision made by an algorithm, the space for human discretion narrowed. We leaned on the machine, and with each step forward, human choice retreated into the background.

Researcher Geoffrey Hinton spent a career building the foundations of this future. He just won the Nobel Prize in Physics for his contributions to the field of artificial neural networks. Those marvels of human ingenuity now possess the power to learn, adapt, and evolve beyond anything we could have predicted. Hinton walked this path believing in their promise, yet that triumph now carries a burden. Machines that outthink us will not simply assist – they will take over. Their ability to analyze, calculate, and predict far outstrips our own. And in that precision, human judgment, full of imperfections and contradictions, will lose its place.

Hinton stepped away from his life's work not out of fear but out of recognition – a recognition that mirrors those who, before him, watched their creations spiral beyond control. It is the same recognition Oppenheimer felt when the first nuclear blast lit the sky: he understood the irreversibility of what had been set in motion.

Oppenheimer said, "When you see something that is technically sweet, you go ahead and do it." Hinton echoes that sentiment, looking back at his own work with neural networks. He now stands at the edge of his creation, watching it race ahead without him, unstoppable.

We observed this in our own work, where an AI experiment mirrored that quiet shift. We invited AI into a collective decision-making process, simple on the surface – a social media strategy. The machine didn't immediately seize control; it offered guidance. Participants welcomed its insights, eager for a clearer path. Each AI-driven suggestion deepened the machine's influence. By the time decisions were compared, the difference was undeniable. Human judgment, once led by debate and deliberation, had stepped aside for cold, calculated efficiency. The flow of events reflected the creeping change Hinton warned about.

“I think we’re at a kind of bifurcation point in history where, in the next few years, we need to figure out if there’s a way to deal with that threat,” Hinton said in an interview when he was awarded the Nobel Prize.

This erosion of human agency will happen in boardrooms, war rooms, and councils where decisions of profound consequence unfold. Each step away from human judgment and toward machine-driven solutions happens without resistance. It feels logical, even comforting. But with each quiet surrender, machines dictate the terms of the future. They process data without empathy, execute strategies without moral hesitation, and with each step, human governance fades.

Hinton’s unease stems from this concern, not that machines will suddenly overthrow us, but that we are gradually handing them the reins without noticing. Decisions made through AI no longer reflect our tangled, emotional imperfections. They reflect only the machine’s logic – precise, detached, relentless. The danger doesn’t stem from technological rebellion. It stems from us trusting in its power more than our own. Once the machine knows more, sees more, and calculates faster, why would we question its conclusions?

This slow reprogramming of decision-making frameworks happens not with a bang, but with the soft hum of algorithms doing their work. And with each quiet agreement, with each moment we trust machine recommendations over human debate, we drift further from the very thing that made us human. We stop asking questions. We stop doubting. We stop reflecting on the mercies that make us human. The machine gives us certainty, and in return, we hand it control.

Hinton sees this future clearly. His warning isn’t about machines attacking us – it’s about machines outthinking us. It’s about realizing too late that we surrendered not because we had to, but because we chose to. We trusted intelligence, divorced from human values, to shape the world in ways we could no longer comprehend.

Perhaps one day, this warning will feel irrelevant. Perhaps future generations will laugh at the thought of losing something as abstract as judgment. But we stand at the edge of that loss now, watching it unfold quietly. By the time we feel its full weight, human judgment may no longer exist in the spaces where it once thrived.

We hope this becomes irrelevant. We hope the future will still need human reflection, human empathy, and human uncertainty. But if the path ahead continues unchecked, if AI’s quiet takeover continues without pause, we may look back and wonder when exactly we let go.

We hope this becomes irrelevant. 

Michael Kittilson is a second-year graduate student at USC Annenberg.

Burghardt Tenderich, PhD is a professor of practice at USC Annenberg where he teaches and researches strategic communication, the emerging media environment, brand purpose and Web3 technologies for communicators. Tenderich is associate director of the USC Center for Public Relations.