Technology is challenging our ideas about fact and fiction. We will need an emotionally intelligent response
Monthly column: “Possible Futures”
Época Negócios Brazil, June 2022
(originally published in Portuguese)
For the past couple of weeks, I have been training an artificial intelligence (AI) algorithm to imitate my voice. The results are pretty impressive. I can now type almost anything into my computer and have “my” voice say it. It’s not perfect – sometimes the intonation is a bit off – but people I have played it to agree that it sounds a lot like me. Certainly it would be convincing if it were used to leave a voicemail or WhatsApp message, especially if there were some distortion or noise in the background.
But the experience has also been unsettling, because, in the wrong hands, the programme could easily be used to make “me” say things that are the opposite of what I think or believe. And it started me thinking about what we mean when we say something is “real” or “authentic” and how new technology we are inventing is forcing us to rethink those ideas.
Being authentic seems to be something different from simply telling the truth. Truth and falsehood are things that we can usually check against the facts – although, of course, many politicians around the world are trying to cast doubt on that idea. Authenticity seems instead to be a way of being that we can see in others and others can detect in us. Often, it’s just a feeling we have about someone: the sense that what they say and do, and how they live their life, is a true reflection of what they believe deep inside. To use a cliché from American management-speak, they are the kind of people who “walk the talk”.
The psychologist Carl Rogers called this attribute “congruence” and he felt it was a core requirement of good human interaction. And I think he was right. People who show it are good company. When they compliment us, it feels genuine and when they suggest we look at something in a different way, we pay attention.
By contrast, inauthenticity can make us shudder. Like a duff note in an orchestra, it is immediately noticeable: the boss who pretends they care that it is the intern’s birthday, while spending most of the celebration drinks emailing on their phone; the colleague who talks about “teamwork” when you know they are angling for promotion over you; the new arrival in the company who gushes with approval at whatever the CEO says. These are attitudes that drip with fakeness and they make us trust and cooperate less with people who display them.
Which brings us back to AI. Despite excitable fears about sentient technology taking over the world, the reality is that AI will become a very powerful partner for us humans. It will help us get things done and provide insights we might not have had ourselves – very much like a good colleague or friend.
In fact, we already interact with AI in this way. Unlike conventional computer programmes, AI algorithms seem to have a mind – a personality, even – of their own. I always like it, for example, when my new Discover Weekly playlist lands on Spotify on a Monday. It feels like some knowledgeable record-shop owner has thought carefully about the kind of tracks I might like to get me through the week. I can even picture them rummaging through racks of dusty vinyl. Many of us rather like the voice we have chosen to give us GPS directions in the car and have even given it a name. And, although the technology is still glitchy, we can sometimes have fairly complex conversations with a bot via a company website’s chatbox.
The future, by which I mean now and the next few years, will be about learning how to accommodate these new AI partners in our work and non-work lives. We will be having conversations with them, giving and taking instructions and sharing ideas, just like we do with human colleagues. And, as with them, we will need to know if they are being authentic or not.
Rational thinking won’t help here. I know that my Spotify record-store owner actually has no idea what a record is – or Spotify, or music, even. “They” are just very good at processing thousands of numeric datapoints that happen to throw light on my taste in music. But I still picture them as a person. My AI-generated voice is a little too stilted to be totally convincing, but that’s just an engineering problem. It will eventually sound exactly like me. And, when that happens, knowing that what they are hearing is simply a series of soundwaves produced by lines of computer code won’t stop at least some people believing deep down that it was really me saying those words.
Instead, I think we need a new take on authenticity: one that has less to do with “truthfulness” and “reality”, which in a world of fake AI voices will be harder and harder to ascertain, and more to do with the way we behave. If we can develop an attitude that puts a high value on, for example, politely saying what we think, politely listening to others, and judging people less on the content of what they say and more on their intent and how genuine they seem, we will have developed some important emotional skills. These are the skills that will not only help us get on better with each other, but also give us a better sense of when we should take the output of AI algorithms at face value, and when we should filter what they are saying through the lens of our wiser – and infinitely more perceptive – understanding of the world.