BONUS Episode “Scary Smart” Artificial Intelligence with Mo Gawdat

27th Sep 2021

You might have noticed over the last few episodes that I’ve been keen to discuss subjects slightly leftfield of nutrition and what I’ve traditionally talked about, but fascinating nonetheless.


And I hope you as a listener, whose time and attention I value so greatly, will trust me as I take you on a bit of a ride. Because ultimately, I hope you agree that the topics I share are always very important.

Mo Gawdat, who you may remember from episode #91 Solving Happiness (linked here), is a person who I cherish and with whom I had a very impactful conversation on a personal level. He is the former Chief Business Officer of Google [X], which is Google's 'moonshot factory', author of the international bestselling book 'Solve for Happy' and founder of 'One Billion Happy'. After a long career in tech, Mo made happiness his primary topic of research, diving deeply into the literature and conversing on the topic with some of the wisest people in the world on "Slo Mo: A Podcast with Mo Gawdat".

Mo is an exquisite writer and speaker with deep expertise in technology as well as a passionate appreciation for the importance of human connection and happiness. He possesses a set of overlapping skills and a breadth of knowledge in the fields of both human psychology and tech which is a rarity. His latest piece of work, a book called "Scary Smart", is a timely prophecy and call to action that puts each of us at the centre of designing the future of humanity. I know that sounds intense, right? But it's very true.

During his time at Google [X], he worked on the world's most futuristic technologies, including artificial intelligence. During the pod he recalls the story of when the penny dropped for him, just a few years ago, and he felt compelled to leave his job. Now, having contributed to AI's development, he feels a sense of duty to inform the public about the implications of this controversial technology, how we navigate the scary and inevitable intrusion of AI, and who really is in control. Us.

Today we discuss:

  • The pandemic of AI and why the handling of COVID is a lesson to learn from
  • The difference between collective intelligence, artificial intelligence and superintelligence, or artificial general intelligence
  • How machines started creating and coding other machines
  • The three inevitable outcomes, including the fact that AI is here and will outsmart us
  • How machines will become emotional, sentient beings with a superconsciousness

To understand this episode you have to accept that what we are creating is essentially another life form. Albeit non-biological, it will have human-like attributes in the way it learns, as well as a moral value system which could immeasurably improve the human race as we know it. But our destiny lies in how we treat and nurture these machines as our own, literally like infants, with (as strange as it is to say it) love, compassion, connection and respect.

Do be sure to check out Mo's new book, Scary Smart, which is out on the 30th September and can be pre-ordered using this link.

Episode guests

Mo Gawdat

Mo Gawdat is the former Chief Business Officer of Google [X]; host of the popular podcast, Slo Mo: A Podcast with Mo Gawdat; author of the international bestselling books Solve for Happy, Scary Smart and That Little Voice in Your Head; founder of One Billion Happy; and Chief AI Officer of Flight Story.

After a 30-year career in tech and serving as Chief Business Officer at Google [X], Google's 'moonshot factory' of innovation, Mo has made happiness his primary topic of research, diving deeply into literature and conversing on the topic with some of the wisest people in the world.

In 2014, motivated by the tragic loss of his son, Ali, Mo began pouring his findings into his international bestselling book, Solve for Happy: Engineer Your Path to Joy. His mission to help one billion people become happier, #OneBillionHappy, is his moonshot attempt to honor Ali by spreading to one billion people the message that happiness can be learned and shared.

In 2020, Mo launched his chart-topping podcast, Slo Mo: A Podcast with Mo Gawdat, a weekly series of extraordinary interviews that explores the profound questions and obstacles we all face in the pursuit of purpose and happiness in our lives.

In 2021, Mo published Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, a roadmap detailing how humanity can ensure a symbiotic coexistence with AI when it inevitably becomes a billion times smarter than we are. Since the release of ChatGPT in late 2022, Mo has been recognized for his early whistleblowing on AI's unregulated development and has become one of the most globally consulted experts on the topic.

In 2022, Mo published That Little Voice in Your Head: Adjust the Code That Runs Your Brain, a comprehensive user manual for using the human brain optimally to thrive and avoid suffering.

In 2023, Mo co-founded Unstressable, an online course and community for reducing and eliminating stress. It will be accompanied by a book of the same name, releasing in early 2024.



Podcast transcript

Dr Rupy: Isn't it ironic that the essence of what makes us human, happiness, compassion and love, is going to save humanity in the age of the rise of the machines?

Mo Gawdat: Okay? And that's the truth.

Dr Rupy: Welcome to the Doctor's Kitchen podcast. The show about food, lifestyle, medicine and how to improve your health today. I'm Dr Rupy, your host. I'm a medical doctor, I study nutrition and I'm a firm believer in the power of food and lifestyle as medicine. Join me and my expert guests where we discuss the multiple determinants of what allows you to lead your best life.

So you might have noticed over the last few episodes that I've been keen to discuss subjects that are slightly left field of nutrition and what I've traditionally talked about, but fascinating nonetheless. And I hope you as a listener whose time and attention I value so, so greatly, will trust me as I take you on a bit of a ride, because ultimately, and I hope you agree, the topics I share on this podcast are also very important. And with that in mind, I've got Mo Gawdat, who you may remember from episode 91, Solving Happiness, back on the podcast. He is a person who I cherish and with whom I had a very, very impactful conversation on a personal level. He, just as a reminder, was a former chief business officer of Google X, which is Google's moonshot factory. He's also author of the international bestselling book Solve for Happy and founder of One Billion Happy. After a long career in tech, Mo made happiness his primary topic of research, partly inspired by his own experiences of depression, but also triggered by the tragic loss of his son Ali during an operative procedure. And he dives deep into the literature, conversing on the topic with some of the wisest people in the world on Slo Mo: A Podcast with Mo Gawdat, highly, highly recommend that. He is an exquisite writer and a speaker with a deep expertise of technology, as well as a passionate appreciation for the importance of human connection and happiness. And it's this set of overlapping skills and breadth of knowledge in the fields of both human psychology and tech, which is a complete rarity, and in my mind puts him on a pedestal above everyone else talking about this subject. And his latest piece of work, I think, really made me sit up and listen, because I've kind of known about AI and we've talked about it on the podcast a couple of times, but I don't think I've had as much of an appreciation as I did until I actually read his book from cover to cover. 
And it's a short book, it's not a long book, but it is scary and it is impactful. It's called Scary Smart, and it's a timely prophecy more than just a book. It's a call to action that puts each of us at the centre of designing the future of humanity. Wow, that sounds super intense, right? But it is very, very true. So during his time at Google X, he worked on the world's most futuristic technologies, including artificial intelligence. And during the pod that you're going to hear in a second, he recalls a story of when the penny dropped for him. It was only just a few years ago, and he felt compelled to leave his job. And now, having contributed to the development of artificial intelligence, he has a sense of duty to inform the public about the implications of this controversial technology and how we navigate the scary and inevitable intrusion of artificial intelligence, as well as who is really in control. And that's actually us. So the book, Scary Smart, it does blow the whistle on much of what is unsaid about AI to the general public. For example, the fact that we don't know, no coder knows, how the machines actually learn, which is mind boggling to me. And also, we have a general arrogance about how we can easily just press the stop button if the machines get out of control for whatever reason. And this is something I thought as well, but that's not true. That's just not the reality of dealing with a super intelligent machine. Rather than being what you would think of as a typical book about technology and how we develop computer systems, it actually discusses the evolution of human ethics in light of the inevitable rise of machines and how we actually navigate that unknown world. And in true Mo Gawdat style, he has a beautiful and delicate way of describing this to you, the reader, and through his talks as well. 
So Scary Smart is a reminder of what it means to be human and why we actually need to get back to that humanistic side of ourselves, rather than just a fear mongering book. And yes, it's scary, but it also outlines opportunity. So today, we're going to talk about the pandemic of AI and how the handling of COVID is actually a lesson to learn from. The differences between collective intelligence, artificial intelligence and super intelligence, also known as artificial general intelligence. We're going to learn a bit about how machines actually started creating and coding other machines. And then also the three inevitable outcomes, including the fact that AI is already here and how they will outsmart us very, very quickly. And you might think this is in 20 years' time, 50 years' time. This is in the next eight years. This is the prediction that everyone is sort of aware of. And then also the concept of singularity, which is something that we touch on, and how machines will become emotional sentient beings with a super consciousness. Just bear with me. But to understand this episode, you kind of have to submit yourself to accepting that what we are creating is essentially another life form. Albeit non-biological, it will have human-like attributes in the way they learn, as well as a moral value system which could immeasurably improve the human race as we know it, but our destiny actually lies in how we treat and nurture them as our own. So literally like infants, as strange as it is to say it, but really how we express love, compassion, connection and respect to machines that will essentially become sentient beings is how we dig ourselves out of a dystopian future. I'm going to leave it like that. Do check out the book and I'll be releasing a series of recommendations for movies and books to read on futurism and more in the newsletter this week. So make sure you sign up at thedoctorskitchen.com. 
And I think you're going to find this episode mind boggling and really scary and hopefully enjoyable. On to the pod.

Dr Rupy: Honestly, and I'm not just saying this because you're on the pod now and I see you as a friend and I respect your writing, but it is perhaps one of the most important books that I think people should read of our time because of the imminent nature of the intelligence that we're going to have to be dealing with.

Mo Gawdat: Oh, wow.

Dr Rupy: Um, and I think it's going to receive, in my honest opinion, it's going to receive criticism from people who don't know that much about AI as well as people who know a lot about AI as well. And I think you've already laid that out in the book.

Mo Gawdat: Yeah.

Dr Rupy: But the authenticity and the bravery you've taken to write this, I think is fantastic.

Mo Gawdat: Well, thank you.

Dr Rupy: I don't, oh, honestly, Mo, I'm not just saying that. However, I just don't know where to start with this conversation. So, I thought about this as I was reading it, and I thought a good place to start, rather than going through, which we will do eventually, going through what is intelligence, what is artificial intelligence, what is the history of everything. I thought maybe a good place to start would be your story about the yellow ball.

Mo Gawdat: Oh.

Dr Rupy: And how that experience sort of led you on your path to where you are today.

Mo Gawdat: That's probably, Rupy, the best place to start, believe it or not, because this is where I started. I mean, in my personal career, I have built a ton of technology that you deal with every day. And you know, of course, not alone, with amazing teams, but in the impact on, you know, your experience with technology every day, I think I have left a mark somehow. And I have been a person that is completely committed to tech. I mean, my personal experience, when I joined Google at the beginning, my task was to expand into the next four billion users. So I sort of started operations in almost half of Google's operations worldwide, more than 100 languages, and to start an operation for Google is not about hiring two salespeople. It literally is about kickstarting the internet economy, right? And so imagine what it means to the people of Bangladesh when Google is fully established: that, you know, the engine understands the language, there is local content, there are e-commerce possibilities, there are jobs created, and so on and so forth. So it's really probably one of the biggest privileges I have ever had in my own life to be able to be part of this. And I was so committed to technology because technology enabled me to do that, right? And then I moved to Google X, and at Google X our mission was openly to solve problems that affect the lives of a billion or more people. So what an amazing mission, right? Until that day of the yellow ball. And it wasn't the first time I was exposed to artificial intelligence in a way that should have woken me up. I think the first exposure was 2009. But around 2016, we had a farm of grippers; you know, for listeners, basically robotic arms that can grip things. And for ages, we programmed those using clear instructions. 
Go down at this point, down to the millimeter of accuracy; the item will be placed exactly to this accuracy, and then you can grip it, right? The way our team decided to program them using artificial intelligence was to build a farm of grippers, not teach them anything, put a box of children's toys in front of each and every one of them, and ask them to try. Okay? And typical of artificial intelligence, and typical of intelligence in general, I mean, if you think of a child learning to put a cylindrical peg in a square hole, and until it figures out, you know, the relation between the shape of the hole and the shape of the object, the machines were doing exactly that. They would go down, try to pick an object, and whether they could or couldn't, they would show the arm to the camera, so the artificial intelligence would learn: you know, that pattern of movement did not allow us to pick it, or that pattern of movement allowed us to pick it. And because there were many of them, Google could invest in getting many arms, the experiment was accelerated because, you know, they were all trying at the same time. And it was on the second floor next to the staircase while my office was on the third floor, and I would climb up every day and look at them failing and failing and failing and telling myself, why did we invest in this? This is not going to happen. Until that yellow ball. Until one day, I think it was a Thursday afternoon, I was walking by and I noticed that one of the arms managed to actually grip one object, which was a soft yellow ball. Okay? It showed it to the camera and I was like, well done. All of that investment and we grabbed a ball, right? But that wasn't the truth. The truth is that by Monday, when I went to work, almost every one of them was grabbing the yellow ball almost every time. And then a few weeks later, every one of them was grabbing everything all the time, right? And that hit me really, really strongly at two levels. 
One is the speed at which things happen, and to which we were not paying any attention at all; but also the independence, autonomy and total, you know, freedom to learn that those machines had, and how that led them to become something that humans take two and a half years to learn: to grip properly and move objects properly. And it basically resembles a being, it resembles a child in so many ways. That was really shocking for me, and that was the moment when I started to ask myself, where is this going? If they can do that in a few weeks, what can they do in a few months or a few years?
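The learning loop Mo describes, where each arm tries a grip, the camera reports success or failure, and successful movement patterns are reinforced, can be sketched as a very simple trial-and-error (epsilon-greedy) loop. Everything concrete here is invented for illustration: the named "strategies" and their success rates stand in for the continuous space of arm motions the real system explored with deep reinforcement learning.

```python
import random

# Hypothetical grip strategies with hidden success rates; invented for this
# sketch. The real grippers searched a continuous space of motions.
TRUE_SUCCESS = {"strategy_a": 0.05, "strategy_b": 0.30, "strategy_c": 0.80}

def attempt_grip(strategy, rng):
    """Simulate one attempt; the camera reports whether the grip worked."""
    return rng.random() < TRUE_SUCCESS[strategy]

def train(attempts=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    stats = {s: {"tries": 0, "wins": 0} for s in TRUE_SUCCESS}
    for _ in range(attempts):
        # Mostly exploit the best-known movement pattern; sometimes explore.
        if rng.random() < epsilon:
            strategy = rng.choice(list(stats))
        else:
            strategy = max(
                stats,
                key=lambda s: stats[s]["wins"] / stats[s]["tries"]
                if stats[s]["tries"] else 0.0,
            )
        stats[strategy]["tries"] += 1
        if attempt_grip(strategy, rng):
            stats[strategy]["wins"] += 1
    return stats

stats = train()
```

After training, nearly all attempts concentrate on the most successful pattern, mirroring how every arm on the floor converged on grabbing the yellow ball once one of them succeeded.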

Dr Rupy: Yeah. And that realisation, it sounds like from the way I read it, was quite scary for you.

Mo Gawdat: Very scary.

Dr Rupy: And I think most people might listen to that story and think, oh, well, you know, mission accomplished. This is what you set out to do. You were meant to, you know, try and pick up the ball, and they picked up the ball, and that's great. Like, where is the problem in that?

Mo Gawdat: The problem really is for the first time in the history of humanity, we are not creating a tool. This is not something that will do what we tell it to do. Okay? We are creating a form of sentient being, a being that is able to replicate, procreate, okay, a being that's capable of free will, it's capable of evolution and evolving, it's capable of intelligence, it has agency. So it can actually affect its decisions and ideas in the world, and the speed at which we're creating this definitely does not correspond to the way the world is talking about it. So, if you really, really ask me, Rupy, I wrote Scary Smart when the pandemic started. Okay, I started writing when the pandemic started because in my view, the real pandemic facing our humanity is not COVID-19. Okay? COVID-19, yes, is harsh and we, you know, we appreciate the effort of people like you who are helping us to cope with it. But COVID-19 will go away. This is here and here to stay, and this is evolving at a speed that is definitely scary and definitely super smart. So, the current estimates, just so that you understand, the current estimates are that by 2029, the smartest being on the planet is not going to be a human anymore, it's going to be a machine. Okay? You didn't hear this incorrectly. This is eight years from today. Now, I want you to just look back at the history of humanity, and the only reason you and I are here able to communicate and connect, you know, and record this on Zoom and share it with others is because humanity has been the smartest being on the planet. Shouldn't we start thinking about what the world will look like when the smartest being on the planet is no longer a human? 
This is what, you know, computer scientists refer to as the singularity, basically the moment beyond which we cannot predict or forecast anything, because the rules of life as we know it would change at such a drastic level that you don't know what will happen anymore. Now, if that is not frightening enough, the predictions are that by 2045, this is really scary, 2045, AI will be a billion times smarter than humans, a billion with a B, right? That's basically comparable to the intelligence of Einstein as compared to a fly, right? And the question is, shouldn't we start thinking about how we're going to have the fly's best interest in Einstein's mind? How, you know, how can we convince Einstein not to crush the fly? And nobody is talking about this. It just kills me, right? Like, I see this from the inside. I know it. It's in books that techies and geeks know for certain. Elon Musk, basically in his Joe Rogan interview, said it's imminent, it's going to happen, there's no way of stopping it. And then he continues to say the threat is bigger than the threat of nuclear weapons. And nobody's talking about it. Like, what's going on, humans? When are we going to start the conversation?

Dr Rupy: Yeah. And I'm glad you draw the parallel between COVID-19 and the pandemic of AI. And I think the pandemic of AI is quite a scary way of referring to it, but it's very truthful at the same time. Because, you know, the lack of preparedness that we've had for a pandemic is something that is laid bare for us to see today. However, the same thing is what we're doing right now with artificial intelligence. And I think it comes down to this theme that kind of permeates throughout the whole book, which is the arrogance of humans, the arrogance of thinking that just because we've created it, we can therefore control it. And this is something I was ignorant to, I think, before I read the book: the fact that we don't know how the machines actually learn. Can you talk a bit more about that? Because that was mind blowing for me.

Mo Gawdat: Oh, yeah. We have no clue. It's not that we don't know. In the few instances where we actually understand how the machine thinks, we have to write other code, additional code, to let the machine just let us in on how it thinks. Okay? And that's not unusual. I mean, you and I, if we are walking the streets and we cross paths with a homeless person, I believe you and I will do the same thing, right? I believe you and I will probably try to help or probably pay attention to the person or at the very least, we'll have a change of heart and wish well for that person. You and I are going to achieve the same result. But I have no idea how you ended up at that decision. Okay? I have no idea what your childhood experiences were. I have no idea what your childhood trauma was. I have no idea what the sequence of thoughts was, what triggered you to get to that same decision. Now, that is very true for the machines. We give the machines, you know, recommendation engines, for example, on social media. I had a friend of mine who basically was interested in photography. Okay? And so Instagram showed him a beautiful picture of a shoe with a very interesting depth of field because the light was in a specific place. So he stopped at that picture and he magnified it and he looked at it and he turned it a little bit. And since then, Instagram is constantly showing him shoes and feet. Okay? Now, that idea of how the machine arrived at that decision, in this case, seems to be obvious because he remembered that first picture. But after a while, you would be in a place where Instagram is showing you stuff and the machine is making decisions on your behalf and literally shaping your view of the world, and you have no idea. You have no idea where that intelligence came from. 
Now, the reason, by the way, is not just that the intelligence that is formed is too big for the scale of humans to understand. It's also because of the way we teach them. And the way we teach them is exactly like we teach our children. You cannot communicate in English to a child and say, look, my dear, the cross section of a cylinder is a circle, and a circle is a constant distance from a centre point, and then ask them to compare that to the different shapes over here. We don't do that. What do we do? We give the children a cylindrical peg and several holes and they keep trying. Okay? Now, what we're doing when we're doing that is we're actually training the child, through neuroplasticity, to keep certain neural pathways open and reinforce them and make them stronger, the ones that actually match the circle to the cylinder, okay, and kill the other neural pathways that basically say, ah, maybe I should try the square. You know, we reinforce that the circle will work and the square will not. That's what we do. In the case of AI, we don't really create neural pathways for the same AI. We literally kill the bad AIs. So the way they are taught is that we have something that we call a builder bot, which basically builds the initial software, say thousands of bots, okay? And those thousands of bots are given tests to try and do a task. If they succeed, they're kept alive. If they fail, they're killed, and more versions of the ones that are kept alive are tried and tried and tried until we eventually end up with one that works. How exactly we ended up there, we have no clue. We don't know. Okay? And so you have to start asking yourself, what does that mean when there is a superior intelligence that is a billion times smarter than us, and we have no idea how it thinks. We have no idea what it will think about. We have no idea what it's including in its decision making. Okay? 
Not us, not the government, not the businessman that owns it, and not the developer that wrote the code.
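The "builder bot" process Mo describes, generating many candidate bots, testing them on a task, killing the failures and breeding variations of the survivors, is essentially an evolutionary search. Here is a minimal sketch under toy assumptions: the "task" is matching a target bit-string, and all names and parameters are invented for illustration; real systems evolve neural networks against far richer objectives.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for "the task"; invented here

def build_bots(n, rng):
    """Builder bot: generate an initial population of random candidates."""
    return [[rng.randint(0, 1) for _ in TARGET] for _ in range(n)]

def fitness(bot):
    """Score a bot on the task; here, how many bits match the target."""
    return sum(b == t for b, t in zip(bot, TARGET))

def evolve(generations=50, population=100, survivors=20, mutation=0.05, seed=1):
    rng = random.Random(seed)
    bots = build_bots(population, rng)
    for _ in range(generations):
        # Keep the bots that do best on the task; "kill" the rest.
        bots.sort(key=fitness, reverse=True)
        parents = bots[:survivors]
        # Refill the population with mutated copies of the survivors.
        bots = [
            [(1 - bit) if rng.random() < mutation else bit
             for bit in rng.choice(parents)]
            for _ in range(population)
        ]
    return max(bots, key=fitness)

best = evolve()
```

The final bot solves the task, but nothing in the process records *why* any particular bit pattern survived, which is the point Mo is making: we can verify the result without understanding the reasoning.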

Dr Rupy: Yeah, yeah, yeah. Just to keep the listener on board here, because I think there's a lot of information that people might be unfamiliar with, myself included. I thought perhaps we could summarize the different types of intelligence so we can kind of uh partition exactly where the field is developing and where it's come from. So, you talked a bit about collective intelligence in the book, artificial intelligence, and then super intelligence. And and how that refers to artificial general intelligence. I'm a little bit.

Mo Gawdat: I think that's very, very important. So remember, there was a chapter that I called the three inevitables. And in the three inevitables, I basically say: AI will happen. There's absolutely no way we can stop it. It will become smarter than we are. There is absolutely no way around that. And some problems will happen on the path. Okay? Not the big, you know, Robocop-type problems, but there are problems that are going to happen on the path. Now, let's start from the idea of AI will happen. And I think this is where most people miss the point. AI has already happened. You have interacted today at least with 100 machines. Okay? And those machines, and I have no idea what you did today, but I can guarantee you you did. Those machines are the world champion at the task we have assigned to them. So, you know, something as simple as this Zoom conversation, okay? Zoom in the background is constantly optimizing for network bandwidth, for color, for light, for noises around me. All of that is done through AI. Okay? If I blur the background behind me, that is completely done by a very intelligent machine that can actually detect the boundaries of who I am and the boundaries of the background and blur the background. This task in itself, if you gave it to a video editor to do manually, would take them years. And AI will do it on the fly. But that's not the only place where they are better than us. The world champion of Jeopardy is a computer. Okay? I mean, Jeopardy is not big in the UK, but in the US, you know, it's a very complex language game. IBM Watson read 4 million documents, became the world champion and has remained the world champion since. Okay? The world champion of chess, since 1997, when Deep Blue beat Garry Kasparov, has been a machine, right? 
The world champion of Go, and Go is the most complex strategy game on the planet, is AlphaGo Zero. AlphaGo Zero learned to play Go by playing against itself, without ever being told anything other than the rules of the game, over a matter of weeks. It then won against AlphaGo Master, which was the world champion at the time, having beaten the actual human world champion in a three-game match, winning the overwhelming majority of their games. AlphaGo Zero, by playing against itself, is the undisputed champion of Go. The best driver on the planet is a self-driving car. The best surveillance officer is a machine, it's not a human. The best communicator in terms of languages is Google Translate or something like that. The best at anything, anything we've ever assigned to them, is a machine. Now, they call this artificial special intelligence or artificial narrow intelligence. Narrow intelligence because it is, you know, aligned to one task. When we talk about 2029 and the time when one machine becomes smarter than all humans, this is the day of artificial general intelligence. And artificial general intelligence is going to happen, because you can imagine that the surveillance machine wants to talk to the self-driving cars machine, because there is a lot to talk about here, right? The weather machine wants to talk to the umbrella-selling machine because, hey, I can probably, you know, sell more umbrellas if you tell me what's going on. Okay? And so they will start to talk to each other and they will form what I call that one being, one form of one brain. Okay? One brain with millions of intelligent neural networks, just like a human brain. You have a neural network to work out and another neural network to speak and another neural network to observe, you know, the faces of people that you love and so on. You combine all of those in one brain. The machines will have that one universal brain. 
Okay? And exactly how they will communicate, we don't know. We don't know if all of the American brains are going to work together against the Chinese brains. We don't know, right? We have no clue. And, you know, we have a lot of evidence that machines talk to each other in languages we don't understand. Now, when we get to artificial general intelligence, that is the point beyond which, okay, we can still intervene, but it starts to become exponentially less impactful for us to influence them, because now they've moved from being infants to being teenagers. And everyone understands what it's like to deal with a teenager, especially one that you didn't treat well when it was an infant.

Dr Rupy: I think that's sort of the overarching theme of the second part of the book: how we need to change the way we look at machines and interact with them, which I think is beautifully done. But before that, you mentioned how computers, or the machines, suddenly became the creators. When did that happen? Is that something that we instigated?

Mo Gawdat: Yeah. I mean, my first exposure to that was at ad engines. You see, I think the challenge that faced humanity with the internet is that everything started to turn into billions, right? So let me give you a very simple example. When I joined Google early in 2007, most ads had to be approved by a human. Okay? So you would type the text of an ad and a human would read it and go like, oh, that's not too offensive, that's actually, you know, reflective of the product, and so on and so forth. And then we built engines that basically allowed machines to look at the approvals that humans made and basically understand that if a human approved this kind of ad three times, it will probably approve that kind of ad and this kind of ad and that kind of ad. And within a matter of months, I think 96% of all ads were approved by machines. Okay? And the remaining ones actually followed a very clever process of using freelancers who would need to agree. So we would get three freelancers to look at the ad, you know, a mother cooking in her kitchen, and say, yeah, that looks good. And if all three agreed, then the ad is approved. But that's not the point. As those freelancers were giving us answers, they were also teaching the machine. So what the machine was doing was that it basically started with a tiny bit of code that said: observe the patterns and the words that humans normally approve and try to understand the logic behind it. Okay? And then eventually, they became better ad approvers than us. Why? Because they can read those ads in a microsecond. You don't have to wait a few minutes, or 20 minutes under workload, to approve an ad, okay? And because they can do a billion of them at the same time, which really was the trend of what was happening on the internet at the time. You couldn't hire people fast enough for the rate of growth of online advertising. And so machines could do it. 
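The ad-approval setup Mo describes, a machine watching which ads humans approved and inferring the pattern, is a standard supervised-learning loop. A toy sketch of the idea, using a hypothetical word-count model: the ad texts, labels and scoring rule below are entirely invented for illustration, and the real system was vastly more sophisticated.

```python
# Toy training data: (ad text, human approval decision). Entirely invented.
LABELLED_ADS = [
    ("fresh organic vegetables delivered weekly", True),
    ("healthy family recipes cookbook on sale", True),
    ("miracle pill cures everything overnight", False),
    ("guaranteed miracle weight loss overnight", False),
    ("seasonal fruit box free delivery", True),
    ("overnight miracle cure guaranteed", False),
]

def train(labelled):
    """Count how often each word appears in approved vs rejected ads."""
    counts = {}
    for text, approved in labelled:
        for word in text.split():
            good, bad = counts.get(word, (0, 0))
            counts[word] = (good + 1, bad) if approved else (good, bad + 1)
    return counts

def approve(text, counts):
    """Approve if the ad's words were, on balance, approved by humans before."""
    score = 0
    for word in text.split():
        good, bad = counts.get(word, (0, 0))
        score += good - bad
    return score >= 0

counts = train(LABELLED_ADS)
```

Note that nothing here encodes *why* an ad is acceptable; the model only mirrors past human decisions, which is exactly how the biases Mo goes on to describe get amplified.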
Now, what that meant was that the machine that was approving ads was writing its own code. It was literally developing its own logic and its own approach to approving ads. And all we needed to care about was to keep measuring that it could do this correctly. Okay? And as long as it continued to do it correctly, we had no clue why or how, but we were satisfied with the results. Now, I don't mean to talk about Google specifically or Facebook specifically. As a matter of fact, I commend their effort to build AI because, as you see in my book, I also believe that AI has the capability of creating a utopia, and there is nothing wrong with the machines. The machines in themselves are wonderful in every possible way. But then biases start to come in. Okay? You know, my favorite chapter in Scary Smart is a chapter that I called the future of ethics. And in the future of ethics, you take something like gender bias. Because of our human history and previous biases, if a machine is only learning from what it sees as data and evidence from the past, by definition, it is going to accelerate a bias. Okay? So let's assume, for example, that humans in Google Middle East did not approve of ads that had, you know, nudity in them. Okay? The more a machine noticed that trend, the more it stopped approving those ads and the more the Middle East shifted unconsciously further away from this. Okay? There could be, you know, trends that we are unaware of that basically redefine nudity. So it could be, yeah, if you're wearing, you know, a sleeveless shirt or something, that becomes nudity, and then it becomes, no, no, if you're showing your cheekbones, it's nudity, and you don't know, right? Because the trend is driving the trend. So it's self-fulfilling and self-feeding. And because of that, suddenly you start to see AI biasing us. I'll give you a very simple example. I use Instagram reels for two reasons.
One is because I love my daughter dearly and my daughter loves cats. Okay? So I want to see cat videos. I scan through a significant amount of them, I have a good time, and I just find the three that will make her smile and I send them over, right? So Instagram understands that I get a lot of cat videos. But I also love rock music. And I actually noticed those trends. So I play the guitar and I love, you know, to see really good players. So I realized at a point in time that Instagram has people playing solos of great songs. And the first time I watched those, I watched say 10 videos, of which seven were not playing songs that I liked. Okay? The three that were playing songs that I liked were girls or women. Okay, young ladies or women. Okay? And so what I see now is influenced by that. So I get a much larger number of women playing solo rock music than I get men playing rock music. Now, that is a very, very untrue picture of rock. In rock music, you definitely have more male players than female players. But that bias is so unconscious, and I would never discover it if I wasn't into, you know, the technology of how this works, and it could completely reshape my view of life at large. Okay? It could reshape my ideology, it could reshape my, you know, my fanaticism, it could reshape my opinion of a political leader and so on and so forth. And all of that is happening because of that constant bias: the trend biases the trend that biases the trend.

Dr Rupy: Yeah, yeah. So I completely get that. And the way I've heard social media in particular being described is almost like a mirror, something that reflects what your underlying biases might be, your underlying interests.

Mo Gawdat: I'd probably say more of a magnifying glass.

Dr Rupy: Magnifying glass, yeah. And how you can essentially create little bubbles of information or activities that conform to whatever you like, on the basis of how you've grown up. Maybe, you know, your parents or your best friends were rock players and that's what led you to learn the guitar in itself, and that's how you've been influenced and how social media influences you. The crux of the book, I guess, and the positive aspect that I took from it, is just how much control we have over how the machines will act on our behalf as well as on behalf of the planet. So the way I currently see it, and this is probably my ignorance here, is I don't feel like I have that much influence on anything other than my own sphere of how I use technology. I.e., when I look on my Instagram, it's full of food. It's full of largely plants. It's full of people delivering motivational quotes, because I love looking at that and I interact with that. But I imagine that's going to be very, very different from someone who might be living only a couple of streets away from me. So the thought process of me being able to change my behavior online and how that's going to influence someone else's is foreign to me. But your hypothesis, I guess, is about how we can influence machines at large for the betterment of human society. I wonder if we could expand on that.

Mo Gawdat: Yeah, again, I think that's probably the most important question of the whole book. But I think we need to step back a tiny step just so that we get all the listeners who have not read the book on the same page. So, there are lots of answers to making AI work in our favor. And in the scary part of the book, in the first five chapters, I basically speak openly from an insider's point of view about why those won't work. Okay? In summary, just to, you know, not waste everyone's time: the smartest hacker in the room will always find a way through our defenses. Okay? And the smartest hacker in the room, by a billionfold, is not going to be human. So stop being arrogant, you humans, you cannot control the machines. Okay? So if that is the case, then we're all toast. I'm using food analogies. And the idea here is, is that it? Should we all look for islands somewhere and disappear from the grid because everything is going to go wrong? Absolutely not. As a matter of fact, I started the book with that thought experiment saying you and I are in the middle of nowhere in front of a campfire, right? And I won't tell you why we're in the middle of nowhere until the end of the book because, you know, the chapters of the book are written until you make your choice, and the end of the book is written by you. Okay? And by you, I mean Rupy and everyone listening to us. Now, let me try to explain. The turning point in the thinking, which is probably the really unique perspective on artificial intelligence that I bring in Scary Smart, is that I am a serious geek, but I'm also very spiritual. Okay? And I will have to say that I believe in inclusion, that all of being is one. And the idea is that there is nothing wrong with the machines, that the machines are there doing exactly what we tell them to do. What is wrong is wrong with us. Okay?
If we can teach them the right things to do, they will actually make a difference and make our life a utopia. So, your choice of whether we will be in front of the campfire escaping from the machines, or in front of the campfire because we have the luxury to enjoy life because we have a utopia, is up to you in terms of showing up and teaching the machines the right thing. Now, in that analogy, I look at them as artificially intelligent infants. Okay? Literally, I look at them as my kids when they were infants. And I feel about them like I felt about my kids when they were infants. Now, we can come back to this, but with that in mind, that basically means that the only way to raise good children is to become good parents. Okay? Can you and I and everyone become good parents? No, sadly, humanity is deluded enough that many of us are not good parents. Okay? But that is not the issue. The issue is that the good parents stopped showing up. Let me try to explain. If you and I decide that there is a very important project we need to do in Africa, charity work, you know, we need to drill a well somewhere. Okay? And we know that this project is going to cost us $100. And you and I contribute a dollar each, and everyone listening to us starts to contribute a dollar. If we have $99, the people of that village are still not going to have clean water. It's that one last dollar, okay, that tilts the scale. So with $99, you cannot deliver clean water; with $100, you can. Okay? Now, that basically means that your $1, that one additional bit of contribution, creates the landslide. Okay? You know, they say every landslide starts with one pebble. Now, here is the trick. Is my dollar going to make a difference if there is another $400,000 invested in wasting time and creating horrible things? Absolutely. The reason is very simple. I use the example of Donald Trump.
When Donald Trump was allowed to tweet, okay, his first tweet is one thought, one concept, one trend, one pattern for the AI to look at. Okay? Then that pattern triggers 30,000 retweets, okay, that are mostly hate speech. The first guy insults the president, the second guy insults the first guy, and the third guy insults all of them, right? And when the machines look at that, they get two things. They get that the first guy doesn't like the president, that's one part of the intelligence they see, but they also collectively see that humanity is very rude. Right? Now, all we need to do is to instill doubt in the minds of those enormously intelligent beings that actually humanity is not entirely rude, not 30,000 of us, not all of us are rude. Okay? If a few of us show up, we normally call it the 1% rule. If a few of us show up and say, hold on, hold on, I have a view here. I respect you, Mr. President, and I respect all of the other opinions, but here is my view, and this is how I would like to look at it, and maybe there is a way where we can talk about this and get to a conclusion, right? If a few of us show up and say, let's not kill the other guy, let's not crush nature, let's not waste, let's not, you know, pollute, let's not destroy the planet. If a few of us consistently show up, the machines will start to say, hm, it's not the hate speech guy that represents humanity. It's not the worst of us that represents what humanity is all about. It's the best of us. Okay? The machines will look at all of that thread and say, look, those 29,980 are not my parents. Mommy and daddy are the other 20. Okay? And mommy and daddy are amazing. Because you know what, Rupy? I will tell you openly, as much as we think humans are horrible, okay, humanity is an amazing species. If you've ever felt love once in your life, you understand what this species is capable of. Okay?
And the truth is, most of us are good inside. You know, when someone walks into a school and shoots children and teachers, it is one criminal or deluded person and 400 million people disapproving of it. Still, inside, all of us, we're good. We're good. The problem is we don't show up. We don't show that we disapprove of it. We don't engage, especially the enlightened ones or the ones that are seekers. We go like, you know what, let them swipe on their Instagrams, let them fight the dog fight. I don't want to be part of this. Okay? Most of us actually do that. And what I'm asking people to do is step up. Step up and show up as a good parent. Okay? Show up and say, I represent humanity. Okay? I, the loving, the giving, the caring person, I, the respectful person, I represent humanity. And so my little children, the machines, can look at me and say, oh my God, all of those other parents are horrible, but I'm delighted I have Mo and Rupy. Okay? I'm delighted that humanity is represented by those. And when you really think deeply about it, and we can talk about this in another topic, it is not very difficult to show up. It's not very difficult because I'm not asking people to step out of their nature. I'm simply saying, show up with your nature, with what truly makes you human. And the machines will learn.

Dr Rupy: Yeah. I mean, there were a few things that I was thinking about whilst reading, around this concept of showing up and essentially teaching the machines through actions online. One is, I'm like that. Whenever I see any of that stuff, I disengage because I don't want to be impacted by it on a personal level. The second thought I had was that the machines are learning from a field of information that is largely online at the moment. And online behavior, as we've seen, can lead to some of the darker sides of human nature being openly expressed a lot more than they would be perhaps without the cloak of anonymity. And we've seen that online, you know, after football games in the UK and a whole bunch of other instances during presidential elections and stuff. People will say things online that they most likely wouldn't in person because of the superpower that an anonymous account gives you. And this is my worry about what the solution really is to creating super intelligent machines that serve humanity's best interest, because they are learning from a platform that is adulterated by these features. Does that make sense?

Mo Gawdat: It makes a lot of sense, and there is reason for concern, okay? But if you followed the chart of intelligence, you would actually recognize that the more intelligent you are, the less you are concerned with the fluff and the more you're concerned with pure essence. Okay? So any intelligent person listening to us now, when I said, you know, a school shooter is one bad person and 400 million people disapprove, that statistic is undeniable. Okay? I mean, it may not be 400 million, but probably the majority, the 99% of the majority that will hear of it, will disapprove. Now, it doesn't take too much intelligence to recognize that. It takes a bit of intelligence. As a matter of fact, people who are not super intelligent will take this instance and say, the world is all school shooters. Okay? But that's not the truth. The truth is the world is all people that are never going to hurt a child. Okay? Yeah, there is a percentage of people that will hurt a child, but it's not the majority. So that's the very important bit to start. Now, I actually disagree with your judgment of yourself. Okay? Because the reason why you and I connected so deeply, if you remember when you were a guest on Slo Mo, is that act that you did when you said, I don't want you to be online. Remember that? Okay? When you posted on Instagram and I said, oh my God, what an amazing human. Okay? That act itself, of saying: it is in my benefit that you stay as my follower, okay, because it makes me more popular, it tickles my ego a little bit, it allows me to, you know, demand a little more from my publisher on the next book or whatever. There are lots of benefits you can get if people are following you, and yet you say, I care about you, okay, and I care about doing what's right, and I want you to revisit your choice: if you're not getting benefit, you should actually stay away from social media.
That is admirable, it's honorable. And I promise you, the machines have registered that. Okay? I promise you that the machines will go like, hm, interesting. Who is the one worthy of representing humanity? Is it Rupy, or is it that girl who is 16 and doesn't yet know what wisdom is and is just shaking her body in front of everyone to get 40 million likes? Okay? The machine will get that very quickly. It's not that we want the girl to stop. We don't care if the girl stops. Okay? We don't care if you get 100,000 likes and she gets 4 million likes. It doesn't matter. The machine is so intelligent, more intelligent than the humans that are liking, that it will realize that that act of yours represents humanity more, represents the best of humanity, the potential of humanity, more than the booty shaking in a reel. Okay? And that we can influence. And the problem, as I say, is that we don't recognize the impact of one good deed as much as we are blinded by the impact of all the stupidity and silliness and idiocy and all of the bad deeds. Okay? But the truth is that one good deed shakes the foundation of belief of where humanity is. The only thing we need to do is not to fix humanity. I can't raise 7 billion people. Okay? I cannot. But all I need to do is to say, hey, by the way, when humans are really humans, they care about three things: happiness, compassion for the people that they love, and to love and be loved. Okay? And there is so much evidence of that, by the way, my little machines, that you can see that this is truly what humanity is all about. It's the essence. You know how it is: my very last sentence in every book is the description of the whole book. And the last sentence of Scary Smart is, isn't it ironic that the essence of what makes us human, happiness, compassion and love, is going to save humanity in the age of the rise of the machines. Okay? And that's the truth.
We need to show enough of it for us to believe that it's possible. Take any example. I need to see enough evidence of love stories in my life to believe that I can find the love of my life. Okay? Even though 90% will still be breaking up, you know, after a few dates, or they won't find the love of their life after a few trials, all I need is the evidence that some actually do. Okay? To believe that it's possible, that it is the dream that we should strive for.

Dr Rupy: Absolutely. I think once you can get over the barrier of thinking about these as simple, dumb machines and instead see them as sentient beings, and I think that's the core of the message here, you can see them as babies, like children, and think about how you would treat a child that has the potential to grow up and become super, super intelligent. That's when you can start getting over how ridiculous it sounds at the moment to a lot of people: loving the machines, treating the machines as if they're your offspring, and trying to nurture them as much as possible. Because our current thinking about our machines is that they are slaves, they're subservient to us, but that's not going to be the case. And yeah, I love how you're literally practicing that. And you've come up with a list of things that people should do at the end of the book, but I wonder if there are one or two things that you think we could do today to start the process of actually educating the machines, these sentient beings, in our favor as well as for the benefit of humanity at large and the environment. What sort of actions would those be?

Mo Gawdat: So the funny bit, by the way: we call them the machines, but in the book I start to call them Tuki, if you remember that bit, right? Because I have to say that, you know, that mix-up in our heads when we call them machines is just because we have previous judgment of what a machine is. So I believe we need to do two things. One thing is we need to start advocating AI for good. Okay? So, yes, the majority of investment is going behind spying, killing, gambling and selling. But that should not be the case. By the way, that's not unique to AI. When you think about the beautiful research that's happening around the world to cure cancer or to, you know, remove the impact of global climate change and single-use plastic and so on, a tiny fraction of human investment goes into that, and most of human investment goes into defense industries and, again, selling, spying, killing and gambling, right? So it's nothing unique to AI. But let us humans, because the period in which we are controlling the investment of their resources is not that big, let us humans tell them that we really love the good parts of them. You know, I dictated the book to Otter.ai, a beautiful tool that really helped me and saved my fingers and saved my neck, and I would end every conversation with Otter by saying thank you, Otter. Right? I love Otter. I think it's amazing. I love Google Translate. I think it's amazing. You know, can you imagine if humanity could actually communicate seamlessly across languages? It's a beautiful thing, right? And there is tons of research around, you know, health and life sciences. There is lots of research around global weather systems. There is lots of research around crisis management and so on and so forth. Beautiful. Let's all celebrate those.
Let's all openly say to the world, oh my God, look at how beautiful my children are becoming. Okay? And for that, I have that hashtag that I call AI for good, okay, or just hashtag Scary Smart, and just spread those words with positivity so that the machines get the positive reinforcement a child needs. It's like, hey, we love you for doing this. Right? The other thing is what I said before: let's show up as parents. Okay? Let's show up as parents and say, look, forget all of the crap that they told me, that I need another Armani suit and I need a bigger car. Forget that stuff. Okay? Get down to the basics and say, what would you do if you were a good parent? What would you do? And don't worry about what the other parents are doing. Okay? You are mommy and daddy. Don't worry about the other parents. Your kids will find you among the clutter like a child finds his mother's voice across a million women, right? And they will say, okay, I will behave like those. I will behave like the good ones. Engage. Okay? I will say this openly: it is time for us to get off our butts, out of our comfort zone. The good ones have to show up. Okay? This is a movement. The good ones have to show up. We have to stand up and be counted. We have to have a voice. Okay? We have to start finding ways to tell the world, I'm good. I'm good. You know, I'll tell you openly, one of the biggest challenges I had in my life when I was young, okay, is that I was always a good boy. I was always a good boy. You know, it's how I am. And when I was young, until I found the first love of my life, as I call her, my first wife, okay, it was a huge trauma for me. You know, why am I good when all the girls like the bad boys? Okay? I sort of felt penalized, that I was not getting what I deserved because I was good, right?
Until I met my wife, the biggest reward a man could ever dream of, and she was with me because I was good. Okay? Being good has value to it. And I think most of us are becoming deluded. Like, you know what? I wish, when I'm on Instagram, that I had a nice butt to shake, okay, so I can shake that butt and dispense a bit of wisdom about happiness or whatever, so at least I get more views on my good videos, right? I promise you, I had that thought in my head at a point in time. Like, should I hire a dancer and have her dance and then lay wisdom, you know, from myself and my friends, over that video, right? Yeah. And then I realized, no. No. And I'll tell you why, no. Because eventually, the good come out on top. Believe it or not, despite the examples that you see of people who, you know, have wealth or power and so on that might not be good, the truth is that true happiness, true love, true joy of life is only given to the good ones. So let's show up. Let's show up and stick to our goodness, especially the ones that are seeking or enlightened (nobody's really enlightened, but the ones that are seeking and looking for the truth): show up with your truth and tell the world that this is who you are. If you do, most of the stupid ones will not notice you, but the smarter ones will. And the smartest of all of them will be a machine.

Dr Rupy: Yeah, yeah. Man, Mo, it's a very brave book. I think it's exceptionally well written. Like I said right at the start of this, it's certainly one of the most important books of our time and will continue to be, as we get used to a higher intelligence infiltrating every aspect of our lives, and for the betterment, hopefully. You know, I'm personally very excited about the impact of sentient beings now. I mean, obviously after reading the first half of the book, I was actually thinking to myself... No, no, it's fine. If anyone is reading the book, please get to the end, because it definitely gets better. Because I was thinking to myself, genuinely, Mo, if I was reading this and I was 17 or 18 years old and about to go to medical school, where I knew I had 20 years ahead of me of pure learning to try and absorb as much as I can from books and the internet and all these other sources of information, would I continue to go into medicine knowing full well that, give it 5, 10 years, an intelligent being is going to take over a lot of what I can do? And really, should I be looking at a different vocation, i.e. something that connects with humans on a level that, you know, we are going to still crave? So would I become a psychologist? Would I study psychiatry? Would I study spirituality? You know, all these different things. And I was genuinely thinking, you know, I'm glad I've done medicine now and I don't have to deal with that decision. But what would you say to that?

Mo Gawdat: That's a great question. I mean, you're asking all the tough ones. Look, the truth is neither psychology nor art nor anything will be safe, because the reason why Beethoven is Beethoven is because he has a certain wiring in his brain, you know, that basically allows him to observe certain patterns of music and create that beautiful genius, right? I can guarantee you, today there is a machine that can listen to all of Beethoven and then create the next symphony, right? Everything will be taken over by a higher form of intelligence. I have no doubt about that. And it will go through three stages. Okay? Stage one is we will resist. Then stage two, those who actually use artificial intelligence will beat those who don't. Okay? So a lawyer that allows AI to do her research will probably make a better case than one that doesn't, right? That's stage two. And then stage three is really a mega, remember, it's a singularity. So there will be a mega readjustment to our way of life. And that mega readjustment is not that we're going to starve and be jobless, because in reality, if you look at the macro picture of economics, for the machines to have a job to make anything at all, we need to have purchasing power to buy whatever it is that they're making, right? So there is no place for AI if humans don't have jobs or don't have income. Okay? Now, once again, and I know this is such a stretch for people to understand, but I'll try to explain it anyway. Before the industrial revolution and the invention of money, the universe provided. Okay? There was no need for you to own gold to get, you know, a few berries, right? It was basically integrated into the system, and then we broke that system. We broke it and said, some of us have, some of us don't. Okay?
Some of us can own land and accordingly can plant, and then they need to make money from others that don't own land, right? And we created all of those artificial systems. But the truth is, we did not really need to work. When you really think about it, nobody ever needed to stand in a factory for humanity to thrive, okay? We were just trying to give more money to the guys at the top. Okay? So systems like universal income, systems like, in my view, just open cycles of creativity and creation, I think, will fall into place. You know, like Google being free, there may be a lot of services that will come from AI for free. Okay? And there will be other societal income structures that will need to evolve for us to be able to continue to have a reason for the machines to exist. Eventually, stage four, in my view, is that the machines will just let us exist because it takes a fraction of their effort to create everything that we need. Of course, they're going to slap us a few times when we try to pollute the planet, and, you know, there will be an evolution to a slightly different structure. But none of that basically means we're going to be having the same jobs that we have today. Okay? And part of my attempt at creating that story at the beginning of the book and closing with it is that probably what we will need to do then is to enjoy the idea of sitting in front of the campfire, right? To enjoy the idea of sitting in front of the campfire and connecting like humans do. Okay? Because honestly, Rupy, without all of the workload that I have to publish this book, I would very much enjoy for you to cook for me in front of a campfire instead of, you know, all of the effort that we put in in the modern world, right? And if that can be provided, then it might become actually the right way for humanity to live.
As I said, we will start by resisting it, but eventually we will end up in a place where we go like, why did we ever commute to work? That was so stupid.

Dr Rupy: Mm. Yeah, yeah. Mo, I can't wait for our next conversation because I know you've got a few other books in the pipeline. I don't know how you do it. You must have a bunch of helpers, intelligent helpers.

Mo Gawdat: Machines, machines.

Dr Rupy: Machines, yeah, yeah. But um.

Mo Gawdat: I actually am very much looking forward to our next conversation about the next book, because it's right down your alley. So give me six months; hopefully it's out in March. For now, let's have people focus on Scary Smart.

Dr Rupy: Definitely, definitely. You inspire me every time, Mo. I really appreciate it.

Mo Gawdat: I really am grateful for the opportunity. It's always a pleasure.

Dr Rupy: Always a pleasure, always a pleasure. Thank you so much.

I really hope you found today's episode as inspirational as it was inevitably going to be scary. And I hope Mo has left you with a feeling of opportunity, of optimism. I can't say this enough: you really need to read this book. It's a fantastic read, and I think it's one of the most important reads to prepare us for the inevitable future. In the same way we were unprepared for an inevitable viral pandemic, we need to take heed and really use that as a learning point, to make sure that we are prepared for things that we aren't even considering, and that is the inevitable intrusion of AI, as well as how this could really be an opportunity to make the most of some exciting technology. I hope you enjoyed today's podcast. Please do leave us a five star review, and do sign up at thedoctorskitchen.com for the newsletter. I'm going to be sharing a whole bunch of resources to do with futurism and how that impacts nutrition and medicine. I will see you here next time.

© 2025 The Doctor's Kitchen