Cover image generated with assistance from DALL-E.

Artificial Intelligence gets better when you turn it upside down

AI has helped people.  That’s the most important starting point.  For instance, it has made programming less tedious and programmers more productive.  

And yet AI has also inspired remarkable fears, with some of the central creators and sellers of AI systems warning that it could cause human extinction.  AI could do great harm, for instance through campaigns of deepfakes that undermine society, but only if we insist on mystifying AI in a way that paralyzes our ability to work with the technology responsibly.

There is a better way of thinking about AI that turns it on its head, making it both safer and more valuable.  This alternative is sometimes called “Data Dignity”.  The idea is that you can think of AI as a new way for people to collaborate, instead of as a new kind of personage, or entity on the scene.

The two ways of thinking are equivalent from a strict technical point of view, but Data Dignity makes it easier to think about how the technology can best fit into our lives.

Let’s demystify AI and summarize how it works.  (We’re talking here about the GPT-style AI that has become so prevalent.)  You can understand it in three steps.

First, consider how computers can recognize what kind of data is present based on statistics.  For instance, a bunch of statistical measurements applied to a stretch of text might determine whether it was really written by Shakespeare or an imposter.  A similar tangle of statistical values might be able to tell whether an image is of a dog or a cat.

The bundles of statistical measurements are called neural networks, although they differ from biological neural networks, and they are created by a process called training, where they get tweaked repeatedly until they function.  It’s a messy process, and can seem mysterious, but it would be weird if it didn’t eventually work.  After all, math is real, and enough statistics, once trained, will inevitably do the job. 
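To make that first step concrete, here is a minimal sketch in Python.  The two “statistical measurements,” the labels, and every number in it are invented for illustration, and the tweak-and-keep loop stands in for the far more sophisticated update rules (such as gradient descent) used to train real neural networks.

```python
# Toy sketch: a "classifier" that is just a weighted combination of two
# statistical measurements, "trained" by random tweaks that are kept only
# when they reduce the error. All data here is made up for illustration.
import random

# Each example: (measurement_1, measurement_2, label), label 1 = "cat", 0 = "dog".
data = [(0.9, 0.2, 1), (0.8, 0.1, 1), (0.2, 0.9, 0), (0.1, 0.8, 0)]

def error(weights):
    mistakes = 0
    for m1, m2, label in data:
        score = weights[0] * m1 + weights[1] * m2   # combine the measurements
        prediction = 1 if score > 0.5 else 0
        mistakes += abs(prediction - label)
    return mistakes

weights = [0.0, 0.0]
for _ in range(1000):
    candidate = [w + random.uniform(-0.1, 0.1) for w in weights]  # small tweak
    if error(candidate) <= error(weights):                        # keep it if it helps
        weights = candidate

print(weights, error(weights))   # after enough tweaks, the errors typically reach zero
```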

The second step is to take in an extremely large amount of data and train a stupendously gigantic conglomerate neural network, called a large language model, to recognize – we say “classify” – all the stuff identified in the data.  For instance, you could take in the whole internet and assume that text found adjacent to an image usually has something to do with the image, and then train the model to classify which image matches which text.
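Here is a similarly hedged sketch of the matching idea.  The image and caption vectors below are simply made up; in a real large model, those vectors would themselves be produced by trained networks rather than written by hand.

```python
# Toy sketch: deciding which caption goes with which image, given that both
# have been turned into lists of numbers. The "learned" vectors are invented
# so the matching logic stays visible.
image_vectors = {
    "photo_of_a_cat.jpg": [0.9, 0.1, 0.0],
    "photo_of_a_dog.jpg": [0.1, 0.9, 0.0],
}
caption_vectors = {
    "a small cat on a sofa":  [0.8, 0.2, 0.1],
    "a dog fetching a stick": [0.2, 0.8, 0.1],
}

def similarity(a, b):
    # Dot product: larger when the two descriptions point the same way.
    return sum(x * y for x, y in zip(a, b))

for image_name, image_vec in image_vectors.items():
    best_caption = max(caption_vectors,
                       key=lambda caption: similarity(image_vec, caption_vectors[caption]))
    print(image_name, "->", best_caption)
```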

Now you can classify a vast multitude of things, and it’s time for the third step.  This is to run the process in reverse.  For instance, let’s say you want a picture of a cat.  You can start with an image of random snow and ask the model to rank it for similarity to an image of a cat.  Then add some more randomness.  If the result starts to look a little more like a cat, keep the modification; otherwise discard it.  Do this enough times and a cat emerges out of the snow.
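And here is a toy version of running the process in reverse.  The “image” is just four numbers, and cat_score is a stand-in for a trained classifier; the loop keeps a random tweak only when the score says the result looks more like a cat.

```python
# Toy sketch: start from random noise, add a little more randomness, and keep
# a change only when a made-up "cat score" improves.
import random

cat_prototype = [0.2, 0.9, 0.5, 0.7]          # what the pretend classifier likes

def cat_score(image):
    # Higher when the image is closer to the prototype the classifier recognizes.
    return -sum((a - b) ** 2 for a, b in zip(image, cat_prototype))

image = [random.random() for _ in cat_prototype]   # start from random "snow"

for _ in range(5000):
    tweak = [x + random.uniform(-0.05, 0.05) for x in image]
    if cat_score(tweak) > cat_score(image):         # keep only "more cat" changes
        image = tweak

print([round(x, 2) for x in image])                 # ends up close to the prototype
```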

The magic, the new trick, the thing that has never been possible before, is that you can combine qualities you want in the output of the model.  That is why this kind of AI is often called “generative”.  You can prompt for a cat using a parachute while playing a mandolin, rendered in watercolor.  And there it is.  In order for a batch of classifiers to be satisfied at once, the process often solves new problems, like how a parachute would fit on a cat, or how a cat’s paws would fit on a mandolin.
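The same kind of loop can juggle several qualities at once by scoring them together, which is a rough analogue of a multi-part prompt.  The per-quality prototypes below are invented; only the combining logic is the point.

```python
# Toy sketch: several made-up "classifiers" (one per quality in the prompt)
# are scored together, so a tweak is kept only when the combination improves.
import random

prototypes = {
    "cat":        [0.2, 0.9, 0.5, 0.7],
    "parachute":  [0.6, 0.1, 0.8, 0.3],
    "watercolor": [0.4, 0.4, 0.4, 0.9],
}

def combined_score(image):
    # Sum of per-quality scores; higher means every quality is better satisfied.
    return sum(-sum((a - b) ** 2 for a, b in zip(image, proto))
               for proto in prototypes.values())

image = [random.random() for _ in range(4)]
for _ in range(5000):
    tweak = [x + random.uniform(-0.05, 0.05) for x in image]
    if combined_score(tweak) > combined_score(image):
        image = tweak

print([round(x, 2) for x in image])   # a compromise among all three qualities
```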

While this type of problem solving can seem magical, it’s important to remember that it is merely using random stabs, constrained indirectly by the large amount of training data, through the classifiers, to find answers.

In other words, instead of thinking about this process as a new kind of creature, we can think of it as a new kind of collaboration.  People made the data that the model is trained on, and the model simply finds hidden correspondences in what people did.

This is not a disparagement of AI at all; it is high praise.  Helping people work together better is precisely what civilization, and computer science, are for.  

Once you think of AI as a form of collaboration, it becomes less scary.  It is not here to replace people.  It also becomes clearer how to use it best.  For instance, the increase in productivity for programmers arises precisely because a programmer can now rely on what others have done (over and over) to avoid the most repetitive and tedious aspects of the job.  

A lot of people who work on AI like to think of it as a new creature on the scene, maybe because that has been a part of so many science fiction movies, like The Matrix or The Terminator.  But if you think of it that way, you make it more mysterious than it has to be.

Instead of trying to convince it not to harm us, we have the option of paying attention to what data we train on.  We can develop ways to turn off the influence of data from malicious, incompetent, or useless sources.  

A lot of people who think of AI as a creature want to think of just the right way to convince it not to harm people.  But this is like the oldest stories of genies or devils.  Whatever you ask of a creature might be twisted by the creature.

When we think of AI as being made of people, then it becomes clear that we are the sources of all that AI does, and we can take responsibility.

We don’t need to be mystified by our own activity and then terrified of ourselves.


Jaron Lanier is a computer scientist, author, and musician. He is presently “Prime Unifying Scientist” at Microsoft. He was a USC Annenberg School Innovator in Residence 2010-11.