WHEN TECH KNOWS YOU BETTER THAN YOU KNOW YOURSELF: Yuval Noah Harari and Tristan Harris interviewed by WIRED (Quest Saturday Special)

WHEN YOU ARE 2 years old, your mother knows more about you than you know yourself. As you get older, you begin to understand things about your mind that even she doesn’t know. But then, says Yuval Noah Harari, another competitor joins the race: “You have this corporation or government running after you, and they are way past your mother, and they are at your back.” Amazon will soon know when you need lightbulbs right before they burn out. YouTube knows how to keep you staring at the screen long past when it’s in your interest to stop. An advertiser in the future might know your sexual preferences before they are clear to you. (And they’ll certainly know them before you’ve told your mother.)
Recently, WIRED editor in chief Nicholas Thompson talked with Harari, the author of three best-selling books, and Tristan Harris, who runs the Center for Humane Technology and who has played a substantial role in making "time well spent" perhaps the most debated phrase in Silicon Valley in 2018. They are two of the smartest people in the world of tech, and each spoke eloquently about self-knowledge and how humans can make themselves harder to hack. As Harari said, "We are now facing not just a technological crisis but a philosophical crisis." Here is a transcript of the conversation.
Nicholas Thompson: Tristan, tell me a little bit about what you do and then Yuval, you tell me too.
Tristan Harris: I am a director of the Center for Humane Technology, where we focus on realigning technology with a clear-eyed model of human nature. Before that, I was a design ethicist at Google, where I studied the ethics of human persuasion.
Yuval Noah Harari: I’m a historian and I try to understand where humanity is coming from and where we are heading.
NT: Let’s start by hearing about how you guys met because I know that goes back a while. When did the two of you first meet?
YNH: Funnily enough, it was on an expedition to Antarctica. We were invited by the Chilean government to the Congress of the Future to talk about the future of humankind, and one part of the Congress was an expedition to the Chilean base in Antarctica to see global warming with our own eyes. It was still very cold, and there were so many interesting people on this expedition.
TH: A lot of philosophers and Nobel laureates. And I think we particularly connected with Michael Sandel, who is a really amazing moral philosopher.
NT: It’s almost like a reality show. I would have loved to see the whole thing. You write about different things, you talk about different things but there are a lot of similarities. And one of the key themes is the notion that our minds don’t work the way that we sometimes think they do. We don’t have as much agency over our minds as perhaps we believed until now. Tristan, why don’t you start talking about that and then Yuval jump in, and we’ll go from there.
TH: Yeah, I actually learned a lot of this from one of Yuval's early talks, where he frames democracy as asking, "Where should we put authority in a society?" and answers that we should put it in the opinions and feelings of people.
But my whole background: I actually spent the last 10 years studying persuasion, starting when I was a magician as a kid, where you learn that there are things that work on all human minds. It doesn't matter whether someone has a PhD, whether they're a nuclear physicist, or what age they are. It's not like, oh, if you speak Japanese I can't do this trick on you. It works on everybody. So somehow there's this discipline which is about universal exploits on all human minds. And then I was at the Persuasive Technology Lab at Stanford, which teaches engineering students how to apply the principles of persuasion to technology: Could technology be hacking human feelings, attitudes, beliefs, and behaviors to keep people engaged with products? And I think the thing that we both share is that the human mind is not the totally secure enclave, the root of authority, that we think it is. And if we want to treat it that way, we're going to have to understand what needs to be protected first.
YNH: I think that we are now facing really not just a technological crisis but a philosophical crisis. Because we have built our society, certainly liberal democracy with elections and the free market and so forth, on philosophical ideas from the 18th century which are simply incompatible not just with the scientific findings of the 21st century but, above all, with the technology we now have at our disposal. Our society is built on the ideas that the voter knows best, that the customer is always right, that ultimate authority, as Tristan said, is with the feelings of human beings. And this assumes that human feelings and human choices are a sacred arena which cannot be hacked, which cannot be manipulated; that ultimately my choices, my desires reflect my free will, and nobody can access that or touch that. And this was never true. But we didn't pay a very high cost for believing in this myth in the 19th and 20th centuries, because nobody had the technology to actually do it. Now, people—some people—corporations, governments are gaining the technology to hack human beings. Maybe the most important fact about living in the 21st century is that we are now hackable animals.
Hacking a Human
NT: Explain what it means to hack a human being and why what can be done now is different from what could be done 100 years ago.
YNH: To hack a human being is to understand what's happening inside you on the level of the body, of the brain, of the mind, so that you can predict what people will do and understand how they feel. And of course, once you understand and predict, you can usually also manipulate and control, and even replace. It can't be done perfectly, and it was possible to do it to some extent a century ago too. But the difference in the level is significant. I would say that the real key is whether somebody can understand you better than you understand yourself. The algorithms that are trying to hack us will never be perfect. There is no such thing as understanding everything perfectly or predicting everything. You don't need perfect; you just need to be better than the average human being.
NT: And are we there now? Or are you worried that we’re about to get there?
YNH: I think Tristan might be able to answer where we are right now better than I can, but I guess that if we are not there now, we are approaching it very, very fast.
TH: I think a good example of this is YouTube. You open up that YouTube video your friend sends you after your lunch break. You come back to your computer and you think: OK, I know those other times I end up watching two or three videos and I end up getting sucked into it, but this time it's going to be really different. I'm just going to watch this one video. And then somehow that's not what happens. You wake up from a trance three hours later and you say, "What the hell just happened?" It's because you didn't realize you had a supercomputer pointed at your brain. When you open up that video, you're activating Google's billions of dollars of computing power, and they've looked at what has ever gotten 2 billion human animals to click on another video. It knows way more about what's going to be the perfect chess move to play against your mind.

If you think of your mind as a chessboard, you think you know the perfect move to play: I'll just watch this one video. But you can only see so many moves ahead on the chessboard. The computer sees your mind and it says, "No, no, no. I've played a billion simulations of this chess game before, on these other human animals watching YouTube," and it's going to win. Think about when Garry Kasparov loses against Deep Blue. Kasparov can see many moves ahead on the chessboard, but he can't see beyond a certain point. A mouse can see so many moves ahead in a maze, a human can see way more moves ahead, and Garry can see even more moves ahead than that. But when Garry loses against IBM's Deep Blue, that's checkmate against humanity for all time, because he was the best human chess player. So it's not that we're completely losing human agency; it's not that you walk into YouTube and it addicts you for the rest of your life and you never leave the screen. But everywhere you turn on the internet there's basically a supercomputer pointing at your brain, playing chess against your mind, and it's going to win a lot more often than not.
“Everywhere you turn on the internet there’s basically a supercomputer pointing at your brain, playing chess against your mind, and it’s going to win a lot more often than not.”
TRISTAN HARRIS
NT: Let’s talk about that metaphor because chess is a game with a winner and a loser. But YouTube is also going to—I hope, please, Gods of YouTube—recommend this particular video to people, which I hope will be elucidating and illuminating. So is chess really the right metaphor? A game with a winner and a loser.
TH: Well, the question is: What really is the game that's being played? If the game being played were "Hey Nick, go meditate in a room for two hours and then come back to me and tell me what you really want right now in your life," and YouTube were using 2 billion human animals to calculate, based on everybody who's ever wanted to learn how to play the ukulele, "Here's the perfect video to teach you," that could be great. The problem is that it doesn't actually care about what you want; it just cares about what will keep you on the screen next. The thing that works best at keeping a teenage girl watching a dieting video on YouTube the longest is to say, "Here's an anorexia video." If you airdrop a person on a video about the news of 9/11, just a fact-based news video, the video that plays next is the Alex Jones InfoWars video.
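To make the misalignment Harris describes concrete, here is a minimal toy sketch, with invented titles and numbers, of a ranker whose only objective is predicted watch time. This is not YouTube's actual system; it just shows how optimizing engagement alone surfaces the most extreme candidate.

```python
# Toy sketch of a recommender whose only objective is watch time.
# All titles and numbers are invented for illustration.
candidates = [
    # (title, predicted minutes the viewer will keep watching)
    ("Balanced nutrition basics", 4.0),
    ("Extreme dieting 'results'", 9.5),
    ("Pro-anorexia 'thinspiration'", 14.0),
]

def rank_by_engagement(videos):
    # Maximize predicted watch time; viewer welfare never enters the objective.
    return max(videos, key=lambda v: v[1])

print(rank_by_engagement(candidates)[0])  # the most extreme video wins
```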
NT: So what happens to this conversation?
TH: Yeah, I guess it's really going to depend! The other problem is that you can also kind of hack these things, and so there are governments that can actually manipulate the way the recommendation system works. And so, as Yuval said, these systems are kind of out of control, and algorithms are kind of running where 2 billion people spend their time. Seventy percent of what people watch on YouTube is driven by recommendations from the algorithm. People think that what you're watching on YouTube is a choice. They sit there, they think, and then they choose. But that's not true. Seventy percent of what people are watching is the recommended videos on the right-hand side, which means that for 70 percent of 1.9 billion users (that's more than the number of followers of Islam, about the number of followers of Christianity), what they're looking at on YouTube for 60 minutes a day, the average time people spend on YouTube, is populated by a computer. So you've got 60 minutes, and 70 percent of it is populated by a computer. The machine is out of control. Because if you thought 9/11 conspiracy theories were bad in English, try 9/11 conspiracies in Burmese, in Sri Lanka, and in Arabic. It's kind of a digital Frankenstein that's pulling on all these levers and steering people in all these different directions.
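A quick back-of-envelope calculation, using only the figures Harris cites (1.9 billion users, 60 minutes a day, 70 percent driven by recommendations), shows the scale involved:

```python
# Back-of-envelope scale check using only the figures Harris cites.
users = 1.9e9            # YouTube users
minutes_per_day = 60     # average daily watch time
algo_share = 0.70        # fraction driven by recommendations

algo_hours_per_day = users * minutes_per_day * algo_share / 60
print(f"{algo_hours_per_day / 1e9:.2f} billion human hours per day")  # ~1.33
```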
NT: And, Yuval, we got into this point with you saying that this scares you for democracy. It makes you worry whether democracy can survive, or, I believe the phrase you use in your book is, democracy will become a puppet show. Explain that.
YNH: Yeah, I mean, if it doesn’t adapt to these new realities, it will become just an emotional puppet show. If you go on with this illusion that human choice cannot be hacked, cannot be manipulated, and we can just trust it completely, and this is the source of all authority, then very soon you end up with an emotional puppet show.
And this is one of the greatest dangers that we are facing, and it really is the result of a kind of philosophical impoverishment: of taking for granted philosophical ideas from the 18th century and not updating them with the findings of science. And it's very difficult, because people don't want to hear this message—that they are hackable animals, that their choices, their desires, their understanding of who they are and what their most authentic aspirations are can actually be hacked and manipulated. To put it briefly, my amygdala may be working for Putin. I don't want to know this. I don't want to believe that. No, I'm a free agent. If I'm afraid of something, this is because of me, not because somebody planted this fear in my mind. If I choose something, this is my free will, and who are you to tell me anything else?
NT: Well, I'm hoping that Putin will soon be working for my amygdala, but that's a side project I have going. But it seems inevitable, from what you wrote in your first book, that we would reach this point, where human minds would be hackable and where computers and machines and AI would have better understandings of us. But it's certainly not inevitable that it would lead us to negative outcomes—to 9/11 conspiracy theories and a broken democracy. So have we reached the point of no return? How do we avoid the point of no return if we haven't reached it? And what are the key decision points along the way?
YNH: Well, nothing is inevitable in that. The technology itself is going to develop; you can't just stop all research in AI, and you can't stop all research in biotech. And the two go together. I think AI gets too much attention now, and we should put equal emphasis on what's happening on the biotech front, because in order to hack human beings you need biology, and some of the most important tools and insights are coming not from computer science but from brain science. Many of the people who design all these amazing algorithms have a background in psychology and brain science, because this is what you're trying to hack. But what should we realize? We can use the technology in many different ways. For example, we are now using AI mainly to surveil individuals in the service of corporations and governments. But it can be flipped in the opposite direction: we can use the same surveillance systems to control the government in the service of individuals—to monitor, for example, that government officials are not corrupt. The technology is willing to do that. The question is whether we're willing to develop the necessary tools to do it.
“To put it briefly, my amygdala may be working for Putin.”
YUVAL NOAH HARARI
TH: I think one of Yuval's major points here is that biotech lets you understand, by hooking up a sensor to someone, features about that person that they won't know about themselves, and that we're increasingly reverse-engineering the human animal. One of the interesting things that I've been following is the ways you can ascertain those signals without an invasive sensor. We were talking about this a second ago. There's something called Eulerian video magnification, where you point a computer camera at a person's face. Then if I put a supercomputer behind the camera, I can run a mathematical equation and find the micro pulses of blood in your face that I as a human can't see but that the computer can see. So I can pick up your heart rate. What does that let me do? I can pick up your stress level, because heart rate variability gives me your stress level.

There's a woman named Poppy Crum who gave a TED talk this year about the end of the poker face. We had this idea that there can be a poker face, that we can actually hide our emotions from other people. But this talk is about the erosion of that: we can point a camera at your eyes and see when your pupils dilate, which detects cognitive strain—when you're having a hard time or an easy time understanding something. We can continually adjust this based on your heart rate and your eye dilation.

One of the things with Cambridge Analytica—the story that is all about the hacking of Brexit and Russia and the other US elections—is the idea that if I know your big five personality traits, if I know Nick Thompson's personality through his openness, conscientiousness, extraversion, agreeableness, and neuroticism, that gives me your personality, and based on your personality I can tune a political message to be perfect for you. Now, the whole scandal there was that Facebook let this data go to a researcher who had people fill in questionnaires to figure out what Nick's big five personality traits are. But now there's a woman named Gloria Mark at UC Irvine who has done research showing you can actually get people's big five personality traits from their click patterns alone, with 80 percent accuracy. So again: the end of the poker face, the end of the hidden parts of your personality. We're going to be able to point AIs at human animals and figure out more and more signals from them, including their microexpressions, when you smirk, all these things; we've got face ID cameras on all of these phones. So now imagine a tight loop where I can adjust political messages in real time to your heart rate, your eye dilation, and your political personality. That's not a world that you want to live in. It's a kind of dystopia.
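The heart-rate trick Harris mentions can be sketched in a few lines. Real Eulerian video magnification amplifies subtle color changes across whole video frames; the simplified variant below (closer to what researchers call remote photoplethysmography) assumes you have already averaged the green channel of a face region for each frame, since blood absorbs green light. The function, frame rate, and band limits are illustrative assumptions, not a production method:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(green_means, fps=30.0):
    """Estimate pulse (beats per minute) from the mean green-channel
    brightness of a face region, one sample per video frame. Each
    heartbeat causes a tiny periodic change in skin color that the
    camera records even though our eyes can't see it."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()  # remove the constant brightness offset

    # Band-pass 0.7-4.0 Hz (42-240 bpm): keep only plausible pulse rates.
    nyq = fps / 2.0
    b, a = butter(3, [0.7 / nyq, 4.0 / nyq], btype="band")
    x = filtfilt(b, a, x)

    # The dominant frequency in that band is the heart rate.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    pulse_hz = freqs[band][np.argmax(spectrum[band])]
    return pulse_hz * 60.0

# Demo on synthetic data: a 72 bpm (1.2 Hz) pulse buried in camera noise.
fps, seconds = 30.0, 20
t = np.arange(int(fps * seconds)) / fps
samples = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.1, t.size)
print(f"estimated: {estimate_heart_rate(samples, fps):.0f} bpm")  # ~72
```

On the synthetic 72 bpm signal the spectral peak lands on the right frequency bin; with real video, face tracking, lighting correction, and motion rejection do most of the hard work.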
YNH: You can use that in many contexts. It can be used in class to figure out whether one of the students is not getting the message, whether the student is bored, which could be a very good thing. It could be used by lawyers: you negotiate a deal, and if I can read what's behind your poker face and you can't read mine, that's a tremendous advantage for me. It can be done in a diplomatic setting, like two prime ministers meeting to resolve the Israeli-Palestinian conflict, and one of them has an earbud, and a computer is whispering in his ear the true emotional state, what's happening in the brain, in the mind, of the person on the other side of the table. And what happens when both sides have this? You have a kind of arms race. And we just have absolutely no idea how to handle these things.

I gave a personal example when I talked about this in Davos. My entire approach to these issues is shaped by my experience of coming out. I realized that I was gay when I was 21, and ever since then I have been haunted by this thought: What was I doing for the previous five or six years? I mean, how is it possible? I'm not talking about something small that you don't know about yourself—everybody has something they don't know about themselves. But how can you possibly not know this about yourself? And then the next thought is: a computer and an algorithm could have told me that when I was 14, so easily, just by something as simple as following the focus of my eyes. I walk on the beach, or I even watch television, and there is—what was it in the 1980s, Baywatch or something—a guy in a swimsuit and a girl in a swimsuit, and which way are my eyes going? It's as simple as that. And then I think: What would my life have been like, first, if I had known this when I was 14? Secondly, if I had gotten this information from an algorithm? There is something incredibly deflating for the ego in the thought that this is the source of wisdom about myself: an algorithm that followed my eye movements.
Coke Versus Pepsi
NT: And there’s an even creepier element, which you write about in your book: What if Coca-Cola had figured it out first and was selling you Coke with shirtless men, when you didn’t even know you were gay?
YNH: Right, exactly! Coca-Cola versus Pepsi: Coca-Cola knows this about me and shows me a commercial with a shirtless man; Pepsi doesn’t know this about me because they are not using these sophisticated algorithms. They go with the normal commercials with the girl in the bikini. And naturally enough, I buy Coca-Cola, and I don’t even know why. Next morning when I go to the supermarket I buy Coca-Cola, and I think this is my free choice. I chose Coke. But no, I was hacked.
NT: And so this is inevitable.
TH: This is the whole issue. This is everything we're talking about: How do you trust something that can pull these signals off of you? If a relationship is asymmetric—if you know more about me than I know about myself—we usually have a name for that in law. So, for example, when you deal with a lawyer, you hand over your very personal details to a lawyer so they can help you. But they have this knowledge of the law, and they know your vulnerable information, so they could exploit you with it. Imagine a lawyer who took all of that personal information and sold it to somebody else. But they're governed by a different relationship, the fiduciary relationship: they can lose their license if they don't actually serve your interest. A doctor or a psychotherapist has the same obligation. So there's this big question: How do we hand over information about ourselves and say, "I want you to use that to help me"? On whose authority can I guarantee that you're going to help me?
YNH: With the lawyer, there is this formal setting. OK, I hire you to be my lawyer, this is my information. And we know this. But I’m just walking down the street, there is a camera looking at me. I don’t even know it’s happening.
TH: That's the most duplicitous part. If you want to know what Facebook is, imagine a priest in a confession booth who has listened to 2 billion people's confessions. But they also watch you around your whole day: what you click on, which ads of Coca-Cola or Pepsi, the shirtless men and the shirtless women, and all the conversations that you have with everybody else in your life, because they have Facebook Messenger, so they have that data too. Now imagine that this priest's entire business model is to sell access to the confession booth to another party, so someone else can manipulate you. Because that's the only way this priest makes money; they don't make money any other way.
NT: There are large corporations that will have this data, you mentioned Facebook, and there will be governments. Which do you worry about more?
YNH: It's the same. Once you reach beyond a certain point, it doesn't matter what you call it. Whoever has this kind of data is the entity that actually rules. Even in a setting where you still have a formal government, if this data is in the hands of some corporation, then the corporation, if it wants, can decide who wins the next election. So between the two it's not really much of a choice. There is a choice, though: we can design a different political and economic system in order to prevent this immense concentration of data and power in the hands of either governments or corporations that use it without being accountable and without being transparent about what they are doing. I mean, the message is not, "OK, it's over, humankind is in the dustbin of history."
NT: That’s not the message.
YNH: No that’s not the message.
NT: Phew. Eyes have stopped dilating, let’s keep this going.
YNH: The real question is how we get people to understand that this is real. This is happening. There are things we can do. And, you know, you have the midterm elections in a couple of months. So in every debate, every time a candidate goes to meet potential voters, in person or on television, ask them this question: What is your plan? What is your take on this issue? What are you going to do if we elect you? If they say "I don't know what you're talking about," that's a big problem.
TH: I think the problem is most of them have no idea what we're talking about. And that's one of the issues: policymakers, as we've seen, are not very educated on these issues.
NT: They're doing better. They're doing so much better this year than last year. Watching the last Senate hearings, with Jack Dorsey and Sheryl Sandberg, versus watching the Zuckerberg hearings or the Colin Stretch hearings, there's been improvement.
TH: It's true. There's much more to do, though. I think these issues just open up a whole space of possibility. We don't even know yet the kinds of things we're going to be able to predict. We've mentioned a few examples that we know about, but if you have a secret way of knowing something about a person by pointing a camera and AI at them, why would you publish that? So there are lots of things that can be known about us to manipulate us right now that we don't even know about. And how do we start to regulate that? I think the relationship we want to govern is this: when a supercomputer is pointed at you, that relationship needs to be protected and governed by a set of laws.
User, Protect Thyself
NT: And so there are three elements in that relationship. There is the supercomputer: What does it do? What does it not do? There's the dynamic of how it's pointed: What are the rules over what it can collect, what it can't collect, and what it can store? And there's you: How do you train yourself to act? How do you train yourself to have self-awareness? So let's talk about all three of those areas, maybe starting with the person. What should the person do in the future to survive better in this dynamic?
TH: One thing I would say about that is, I think self-awareness is important. It's important that people know the thing we're talking about and realize that we can be hacked. But it's not a solution. You have millions of years of evolution that guide your mind to make certain judgments and conclusions. A good example of this is if I put on a VR helmet and suddenly I'm in a space where there's a ledge and I'm at the edge of a cliff. Consciously, I know I'm sitting here in a room with Yuval and Nick. So I have the self-awareness; I know I'm being manipulated. But if you push me, I'm going to not want to fall, right? Because I have millions of years of evolution that tell me you are pushing me off a ledge. In the same way you can say—Dan Ariely, a behavioral economist, actually makes this joke—that flattery works on us even if I tell you I'm making it up. It's like: Nick, I love your jacket right now. I feel it's a great jacket on you. It's a really amazing jacket.
NT: I actually picked it out because I knew from studying your carbon dioxide exhalation yesterday…
TH: Exactly, we’re manipulating each other now…
The point is that even if you know I'm just making that up, it still actually feels good. The flattery feels good. And so it's important that we think of this as a new era, a kind of new Enlightenment, where we have to see ourselves in a very different way. And that doesn't mean that's the whole answer. It's just the first step we all have to walk around—