Purpose, Pleasure, and Meaning in a World Without Work (with Nicholas Bostrom)
May 20 2024

If you didn't have to work to enjoy material abundance, would you do it anyway? If an algorithm or a pill could achieve better results, would you bother shopping or going to the gym? These are the kinds of questions we'll need to ask ourselves if AI makes all human labor and other traditional ways of spending time obsolete. Oxford philosopher Nicholas Bostrom, author of Deep Utopia, is downright bullish about our ability, not only to adjust to a life stripped of labor, but to thrive. Listen as Bostrom explains to EconTalk's Russ Roberts what pleasure and leisure might look like in a world without struggle or pain, and why art and religion may come out still standing, or even become more necessary. Finally, they speak about how AI might free us up to be the best people we can be.


READER COMMENTS

Justin
May 20 2024 at 10:40am

The premise seems to be “Technology will solve every problem except one: how to live a meaningful life.” When wondering how we’ll spend our time, in Bostrom’s magical “solved” world, every suggestion is shot down with “but you’ll take a pill to replicate that” or “medical technology” or “brain stimulation” or fill-in-the-blank magical technological solution—except for the problem of meaning. Given all the problems which are simply hand-waved away, Bostrom utterly fails to give any reason why his magical pill won’t solve the problem of meaning, too.

Weakening his argument, it seems Bostrom deliberately picks examples with which he doesn’t connect in order to use them as straw men (shopping, for one, which he admits he doesn’t care for). Why engage in physical activity (like going to the gym, as in his example), if a pill will give the same results and feeling? Because that feels like cheating. The option already exists and some people do take it, but plenty more don’t. Tomorrow I could start on a cocktail of drugs that would make me fit and strong without having to work at it. I choose not to do so because it feels like cheating. I don’t want my best marathon time to be better with drugs, I want it to be the product of my own effort. No amount of magical AI is going to change that. Same with baking my own bread, or building a bookcase I designed myself, or teaching my kid to ride her bike, or spending time in actual conversation with my actual, flesh-and-blood loved ones. It feels good to do things ourselves and it always will.

Fundamentally, Bostrom misunderstands human desire. He thinks we only want to have the best things. While that's true for lots of stuff—I can't wait for the day when all this magical technology will keep my house clean and tidy without my having to lift a finger—it's also false for many things. For my part, I don't want the best bookshelf; I want a bookshelf I designed and built myself. That will always be more beautiful to me than anything technology can magic into existence. Between individuals, those desires are non-overlapping—what I want to spend my leisure time on may not be what he wants to spend his on—but that doesn't mean that nobody will want to do anything because AI will do it better.

Bostrom’s argument seems to be just an AI-version of “assume a can opener.” Everything is magically solved except for this one problem which, by his definition, can’t be hand-waved away like everything else. If you agree with this massive assumption, it’s probably an interesting conversation. If you have any questions about the underlying assumption, which both author and interviewer utterly fail to probe, this conversation floats off into space like so much hot air.

Daniel
May 20 2024 at 5:51pm

I didn’t find this to be incredibly convincing. Talk about post-scarcity has been around since at least the industrial revolution and has been wrong many times. Folks who want us to believe it is still coming need to convince us that they understand what their predecessors got wrong and that there’s convincing evidence that those conditions will change. It seems to me that most theories (like that of Robert Owen, who supposed that industrial machinery would bring about a post-scarcity age of leisure and prosperity) hinge on some pretty fundamental misunderstandings about human nature and how we behave. They tend to underestimate our potential demand for products that have yet to be invented (which quickly gobble up productivity gains) and our deep desire not only for goods but for positional status (which drives competition even when productivity goes through the roof), and to overestimate human willingness to distribute wealth amongst masses who are no longer a meaningful input to the economy.

Multiplying productivity by infinity using super-intelligence may change some of the conditions, but infinity is thought-experiment land and not any practical future in the next 1000 years. Really, post-scarcity depends on a substantial productivity increase accompanied by a change in human nature, and so far nothing in recorded history has changed human nature.

DJ
May 20 2024 at 6:31pm

Enjoyed this one a lot.

To Justin above, he’s not predicting everything will be solved per se. Rather, he’s asking what technological progress means to human happiness. It’s rather like Hume’s is/ought problem. Just because we can’t get ought from is doesn’t mean all moral systems cease to exist, even those based on rationality.

Shalom Freedman
May 21 2024 at 6:17am

This is a wonderfully interesting conversation on certain themes of Bostrom’s remarkable book. Bostrom is gifted in many ways, and one is his especially rich use of the English language. Russ Roberts shows his usual effort at truly understanding the work of the person he is speaking with, not simply going along with everything his guest says but presenting objections, questions, and thoughts of his own.
Bostrom’s brilliance and originality are everywhere present in this book and conversation. However, it is, as he surely knows, a pyramid of speculations and improbabilities leading to a Deep Utopia which will probably never be.
The idea of all technological questions being solved, which is a key premise of the first part of the book, seems to me mistaken in conception. If (per Gödel’s theorem) a complete and non-contradictory mathematical system cannot be built, how can all technical questions possibly be solved? If there is continuous invention of invention, how is it possible for no new inventions to come? If humanity and digital intelligence are evolving, how can they stop creating new questions and problems?
The whole business of providing pills which give people everything they want including transformations of themselves into being the kind of being or beings they want to be, and having the kind of relationships they want to have seems to me nonsense if we consider what human life is. Human lives are individual stories filled with uncertainties and surprises. Human beings are not just brains even if the brains are transformed into silicon but bodies essential to their being and sense of themselves in every way. If the road to perfection means the end of our having bodies, it means the end of what we are.
As Bostrom himself said in an interview with Robert Lawrence Kuhn, the heat-death of the universe might mean there is no eternal Utopia, deep or not. As I understand it, the present consensus view is that humanity cannot possibly know whether the universe is finite or not, or whether there are other universes, which, in any case, we cannot communicate with. What we do know is that already the largest part of the universe is no longer observable to us, and if we are around that long, which is highly unlikely, eventually almost the whole sky will be dark for us.
Russ Roberts gives the feeling that Bostrom’s work is such a gift of creative thought that it has made him less pessimistic. I wish he had expanded on this thought.

Trent
May 21 2024 at 3:55pm

Given Russ’ interest in discussing religion and given that he has said multiple times that he doesn’t know much about Christianity (including in this episode), I’d like to suggest Andy Stanley as a future EconTalk guest.  He’s the pastor and leader of North Point Church in Atlanta, which he founded back in the 1990s.

Of course he could answer Russ’ questions about Christianity, but there’s far more to his story that I think Russ (and fellow EconTalk listeners) would find interesting.

There’s the father/son dynamic that Andy had with his father, Charles, who was a leading Southern Baptist minister and had his own TV ministry.  They didn’t see eye-to-eye on many issues, especially when Andy started his own nondenominational church.

There’s the dynamic of leading a large organization – the church has thousands of members itself, as well as dozens of affiliated churches, including internationally.

There’s the dynamic of teaching leadership skills, which Andy does via seminars and a separate podcast.

There’s the shared dynamic of writing multiple books, which Andy has done, like Russ.

And there’s the dynamic of Andy reaching out to all areas of the political spectrum – like Russ, he copes well with people who may disagree with him.  Andy has said in many ways that people who may disagree over political issues can still come together to be united in one religion.

I think there’s plenty of material here for Russ to have an engaging, interesting 60+ minute conversation.

Earl Rodd
May 22 2024 at 1:06pm

Bostrom seems to be suggesting a world with features of Brave New World and Edward Bellamy’s best-selling late-19th-century utopian novel, Looking Backward. Besides how hopelessly optimistic Bostrom is about technology, he never addresses the reality that so many new technologies that start out with such hope for all the good that can be done end up also being used for nefarious purposes. Radio, TV, and the Internet all started with high hopes as purely educational technologies. In the end, it’s a mixed bag of profiteering from human weakness. Bitcoin ended up primarily a tool for criminals. Most emails sent are spam. The Internet is used daily by thieves.

To me, the area where Bostrom is most hopelessly optimistic is controlling the human brain chemically.  One only needs to look at the treatment of what should be a simple case: correcting the serious malfunctioning seen in serious mental illness. Currently, our understanding is so poor that there are no biological tests for diagnosing mental conditions. Treatment is equally subjective. The effects (and side effects) of psychiatric drugs are very unpredictable. This is a long, long way from pinpointed effects like feeling a certain emotion with no other effects.

Adam
May 23 2024 at 12:51pm

Seems to rest on the assumption that we are steadily increasing our technological powers to a) control nature at a large scale, b) act into nature without unintended consequences, c) use clean energy, d) control human brains/minds in precise and predictable ways, e) increase average lifespans, f) make the economy rational etc…. I could list many more. But are we getting more powerful in these areas, actually?

Ron Spinner
May 28 2024 at 12:19am

…then I think we could conceive of better ways of raising children, maybe cultivating the appreciation of art and literature, the fine art of conversation, a sense of humor, physical activities, appreciation of nature, all of these things that currently are relegated to the sidelines.
As a thought experiment, I was thinking how this would affect people who think everyone should share their belief system. It could be a religion, or communism, or climate-change apocalypticism. Would they take a pill to make themselves think they had achieved that goal? It doesn’t seem that this would do the trick for them.





AUDIO TRANSCRIPT
Time | Podcast Episode Highlights
0:37

Intro. [Recording date: May 1, 2024.]

Russ Roberts: Today is May 1st, 2024. My guest is philosopher and author Nicholas Bostrom of Oxford University. This is his second appearance on EconTalk. He was first here in December of 2014 talking about his book Superintelligence, about the dangers of artificial intelligence--of AI--and he was way early in worrying about that.

His new book, and our topic for today, is Deep Utopia: Life and Meaning in a Solved World. Nick, welcome back to EconTalk.

Nicholas Bostrom: Thanks. And, there it is, yes. No. 2014. It's been a while.

Russ Roberts: Yep. And, I'm a lot smarter since then. I'm sure you are, too. But neither of us are quite super-intelligent. But there's some things out there that are interesting. But that's not our topic.

1:24

Russ Roberts: Our topic is your new book, and I want to say before we start, this is a wondrous book. It is mind-expanding. It is poetic. It is moving. It is funny. The writing is superb. Every page is full of ideas. And it's about what you call 'deep utopia'--the world we appear to be heading toward, a world of unimaginable material abundance--robots, AI able to do everything better than we do--which seems to lead to a world of nearly, if not literally, infinite leisure.

And, in such a world, you ask, among many things, what would become of us then? What would give our lives meaning and purpose in a 'solved world,' as you call it? And, what would we do all day?

And I think, as you point out, thinking about these issues forces us to think about non-utopian issues--the question of how we live our lives now and what is for many of us near-utopia; and for many, not so close.

But let's start off by trying to say something very encouraging. I think many people are extremely uneasy with the idea of a solved world--of the idea of all the leisure we want all the time. Talk about why people worry about that and why you think those worries are not as worrisome as some think.

Nicholas Bostrom: We've built our conception of ourselves and our dignity to a large extent on the idea that we can make some sort of useful contribution to the world, whether it's at a large scale or just in your family or in your community. And so, to the extent that that's the foundation of your self-worth, if that foundation is removed, you might find a kind of vacuum underneath your feet that would be disconcerting.

So, in a solved world where all the practical problems have already been solved, there are no more problems for us to solve, or to the extent that there remain any practical problems they would be better taken care of by advanced AIs and robots. And so, either way, there would be nothing of practical utility that human work would be needed for.

So, that forces, then, a fairly radical reconsideration of what the foundation of a good human life could be. And, undoubtedly it would require jettisoning some treasured assumptions about what the life should contain. And, I think that's part of where this sense of unease would come from.

There are also, perhaps, more mundane concerns that people would have in terms of if they couldn't have a job, how would they make a living? But, those I kind of set aside in this book in order to be able to actually get to the point where we can ask about the fundamental values that are at stake, assuming we solve all the practical difficulties that lie between where we are now and this hypothetical outcome.

Russ Roberts: And, somewhat reminiscent of the conversations some people have had around the idea of Universal Basic Income [UBI] where many people have suggested that even if we could, quote, "afford" it budgetarily to take care of people at a fairly high minimum standard, it would be a mistake because people then have no reason to work, and then they would--I don't know if you quote it or not, but there's a certain, I think, cultural feel that idle hands are the devil's workshop. But, you disagree, I think, and quite eloquently.

Nicholas Bostrom: Yeah. Well, I think it might be true for some people in that more leisure is bad and some people need, I guess, a lot of external pressure to retain their upright posture, psychologically speaking. And, I am, I guess, guardedly hopeful that if this were the only change--say, suppose there was some huge economic windfall, like the way some Gulf State finds enormous wealth underground and then can live off the rents from that. Sometimes it works out well and sometimes not so well. I'm guardedly hopeful that, at least in theory, we could have a cultural change that would allow people to be raised to have good leisure rather than to be productive workers. I think the school system right now, to the extent that it is aiming for any particular outcome, is obviously aiming to produce disciplined workers. Like, you're taught to sit at your desk. You are assigned little tasks to do that you have to do. Why? Because the teacher tells you to do them; and then you're scored and graded and there's a quality-control stamp at the end of it.

And then, you can go on to work in an office building or a factory depending on what level of--this is I think hopefully by the light of the future a sad model of what human development could be. It's kind of necessary now because there are all these jobs that need to be done, so we need to have workers that do them. But, if you imagine a scenario where there weren't all of these jobs that needed to be done, then I think we could conceive of better ways of raising children, maybe cultivating the appreciation of art and literature, the fine art of conversation, a sense of humor, physical activities, appreciation of nature, all of these things that currently are relegated to the sidelines.

7:17

Russ Roberts: And, it's--one of my favorite themes is that culture is emergent. It's not under anyone's control. It responds to all kinds of forces in the world, economic improvement and so on. And, we'll talk more about this because such an interesting--you have some very interesting things to say about how things might change in a different world.

I think there is an interesting question always of the speed with which culture could change relative to the speed of technology. There could be a great deal of suffering and challenge if that culture responds slowly.

The example that I think of a lot is smartphones. When smartphones came along I, like others, would sometimes say, 'Oh, well, our culture will change about what you can do.' And, it has changed; and it's changed that you can sit there by yourself looking at your phone and answer your phone in the middle of meetings and scroll through your social media feed when you're at dinner and in all kinds of ways that don't seem to me to be consistent so much with human flourishing.

And, I say that as someone who does some of those things some of the time. So, the cultural norms I think ultimately will change, but I think there is a question of speed--but maybe we have lots of time.

Nicholas Bostrom: Yeah. So, I'm not saying we would use the increased leisure for positive, life-affirming purposes--I'm just saying that in theory there seems to be this possibility.

So, I think there are kind of multiple layers of this onion, and what I see as the outermost layer--which is where most of this conversation both begins and ends--is we are considering some scenario of moderate increase in automation. Maybe some people get unemployed, and then maybe the answer to that is reeducation or a flexible labor market so they can find new jobs.

Like, we have many jobs today that didn't exist a hundred years ago, when almost everybody was a farmer--so, analogously to that. And then, combined with some kind of cultural adjustment to the new technologies. So, we hopefully encourage more positive usage. So, I think if the technology were basically frozen in time where it is now, or a couple of years' more advances and then just applications, I think that would be the right question and the right answer.

But, I think this is really only the first step on a process that will keep going until it ultimately reaches its logical terminus, which is not just that a few jobs are automated, but that basically all economic labor--with a few exceptions that we could talk about--is done better by machine, and where you would have a post-work condition where humans no longer need to do any work for the sake of earning an income.

And so, that kind of reaches the second layer of the onion, if you want. It's a slightly more radical conception: not just some reallocation of where the labor is going, but the actual cessation of the need for human work.

But even that, it's just an intermediate layer in this onion. Because, once you start to think it through, what it really would mean if AI succeeded, and then all the other technological developments that would follow in the wake of machine superintelligence, because then this super-intelligent AI would be doing the further inventing and innovating and discovery at digital time scales.

And so, ultimately after you have superintelligence--and I think personally not that many years after you have superintelligence--you would have a condition, I believe, that starts to approximate technological maturity, where most of those technologies that we can see are in principle physically possible have been developed. Especially general-purpose technologies.

And then, it's not just that we have the current human conditions, but with a bunch of clever robots going around, getting our groceries delivered and so forth, but, a whole host of other things also become outsourceable to machines; and you're beginning to get closer to this condition of what I call a solved world, which is not just that the difficulties have been solved, but also almost like a condition where previous firm boundaries are starting to get dissolved.

So, you think about what it would mean, for example, if we had this kind of complete automation and technological maturity: It's not just that the AIs could do your economic labor, but a whole host of other things as well that people fill their time with when they don't have to work.

12:28

Russ Roberts: Now, it's interesting to think about one's own job. I used to find it rather marvelous that I liked my job. And I used to--as an economist, I would wonder in the history of human wellbeing, how many people enjoy their work? And I used to suggest that the proportion of the world that enjoys their work today is probably something close to an all-time high. Plenty of people don't. But your book forces me to think about: Why do I enjoy my work? I happen to be very well-paid. If I weren't well-paid, would I enjoy it as much as I do?

If people didn't read the essays and books I wrote, for whatever reason--because they didn't have to work and they didn't invest in whatever wisdom I hoped to provide--would I still enjoy the practice of setting my thoughts down on paper? And, as the economist Bruce Yandle likes to say, 'I love my work, but if they didn't pay me, I wouldn't do it.'

And, yet there are many things we get paid to do--perhaps--that we enjoy sufficiently for themselves. Would that be true of you, Nicholas Bostrom?

Nicholas Bostrom: Um, yeah. I mean I enjoy my work, and especially the--not so much the part that was involving dealing with the university bureaucracy. I think that part I could happily have elided. But the other parts, yeah. But, it is possible that a lot of the reason for that is that I feel it's worthwhile that I'm accomplishing something with the work.

In other words, the activity has a structure where you do something, X--putting in effort, concentration--in order to achieve something else, Y, that's separate from the activity itself.

So, like, you write a book, and then the result is there now exists this book written by you that hopefully can be enjoyed by other people and maybe effect some positive change in the world.

And also, you yourself, maybe, by putting in this effort grow intellectually as well through the work you did.

But so, there's this outcome separate from the activity. And the fact that the activity has this outcome, this causal effect, might contribute to your sense that the activity was worth doing and probably to your psychological satisfaction.

But, if we now consider this same scenario in this condition of technological maturity, it is no longer clear for most activities that this for the sake of which would still be there.

So, you can think through the different types of activities that people build their day around. Like, some people enjoy going shopping--I don't quite understand them, but that's common, right? So, that exercises various kinds of human faculties. You have to remember where the good things were. You have to evaluate different options. Maybe you go with a friend and you can develop an understanding of how the other person thinks, etc., etc.

But, imagine if there were instead some recommender system that could give you--that could find for you something that would actually fit you better than what you would discover if you went and looked for it on your own: whether it's some object for your house or some piece of clothing. And, it could also just buy it automatically without swinging it by you, if it were sufficiently good.

So then, you would have this situation where, yes, you could perhaps still go shopping, but you know the whole time that the only result of this is that you end up with something worse than what you would have gotten if you had just not bothered: You know, pay more. It's worse. It doesn't fit as well.

Now, in that scenario, it is possible that the kind of appeal of the shopping activity would start to come off a little bit. It would seem perhaps a bit pointless.

And similarly for other activities that we tend to fill our leisure time with: for many of them, they have this same structure of you do X for the sake of Y.

So, you might think: Well, right now, even if you didn't have to work for a living, maybe you would pull yourself over to the gym a couple of times a week because you want to remain healthy and fit and you can't hire a robot to do the StairMaster on your behalf. Like, that's something you have to do yourself. But at technological maturity, you could pop a pill that would induce the same physiological effects and also the same psychological effect of a good workout--the relaxed, energized, calm focus that some people enjoy.

And so, that too, like--

Russ Roberts: Allegedly--

Nicholas Bostrom: what's the point of struggling in the gym for an hour if you could just have popped a pill and had actually the same effect?

So, you can go through and start to cross out the activities that fill our leisure, or at least put the question mark on top of them.

And, a lot, I think, would be crossed out or questioned. And, we have then a kind of post-instrumental condition. Not just a post-work condition, but a condition in which it seems like all instrumental effort becomes otiose.

17:57

Russ Roberts: I've mentioned my granddaughter a number of times. She's almost two. It's extraordinary to watch her growth. I get to spend a few hours with her every week; and I like to think I'm part of her maturation, her growth in consciousness, her linguistic ability, her passion for owls--you name it. I'm part of her life.

And, Erik Hoel, a past EconTalk guest, had a post recently on his Substack saying, 'You can teach a two-year-old to read.' It's not easy. Maybe not every two-year-old can read. But it can be done. And, he has a nice, thoughtful essay on whether that's a good idea or a bad idea. And, I have to confess that I think my granddaughter is very bright. She might actually be; I don't really know.

But, it's an interesting thought I had: Could I teach her to read in a few months of diligent phonetic stimulation? I showed her the letter P. I explained 'papa' has the sound of a P, etc., etc.

And, again, it's incredibly fun, by the way--unbelievably fun, exhilarating. And, you talk about parenting as one of the things that might be something people would focus on in a world without technological limits or without material shortages.

And then, I think: 'Why do I really want to teach her to read?' I mean, is there, in the back of my mind, even though it's probably deep below my consciousness, some kind of instrumental goal here that is not actually the joy of just teaching her to read? I don't know.

Nicholas Bostrom: Yeah. I think these instrumental reasons are so suffused throughout our whole motivation system and psychology that it is quite hard even to imagine what it would be like if all of these instrumental reasons were removed. It's like, you know, a little bug has an exoskeleton--the hard shell, basically, that holds all the squishy parts together.

And, I think analogously, we humans, our souls, have this kind of exoskeleton of instrumental reasons that has always been there throughout our evolutionary history. Right? We've grown up with all these instrumental reasons: we have to do this for the sake of that. So, if we remove all of that, it's a question of whether we'd just become amorphous blobs or whether we would still retain some kind of structure. Human lives--would they still have shape after we no longer need to do anything?

And so, we come now to yet another layer of the onion where we've kind of seen how at technological maturity you would have this post-instrumentality.

But, there's one more property I think that would follow if you really think what it would mean to have technological maturity, which is 'plasticity,' I call it. But, basically the idea that we ourselves, the human biological organism, our brains and our psychology becomes malleable.

We would have the tools needed for reshaping ourselves according to our wishes--reshaping our bodies, but also our minds and placing ourselves in whichever psychological conditions we preferred. Like, a crude way might--you could imagine some super-drug without side effects and without addiction potential and a wide range of different drugs that had very precise, tailored effects that you just pick and choose in a moment-to-moment basis, what your emotional state would be.

But more than that, you could imagine direct brain manipulation technologies and presumably uploading into computers where you could directly edit our neuronal connectivity matrix, etc.

And so, this removes yet some additional reasons for doing stuff. So you might say, 'Well, the reason I'm going to the gym is to remain healthy. The reason to brush my teeth is so they don't get tooth decay.' And, these things would remain; but then you think, 'Well, no; you could take a pill to do that.'

But then, what about other things people do? Maybe people play golf because it's fun. Or, they study mathematics so that they will then know mathematics.

But, here with this condition of plasticity, you see that those also would go away. Like, if the only reason why you're playing golf is to induce a subjective state of pleasure and well-being and joy, then there would be a shortcut with the pleasure drug. And similarly, if the only reason for spending hours poring over the mathematics textbook and doing exercises is so that your brain someplace down the road is in a position where it understands algebra and calculus, etc., there would be a shortcut, again--like some direct brain manipulation that would rewire your brain into a condition where it has these skills and concepts without the preceding effort.

And so, then you really end up in this quite radical condition of what I call the solved world, where you really have to think about which things are worth doing, not for the sake of something else, but because you regard them as intrinsically worthwhile.

23:53

Russ Roberts: This does sound a little bit pessimistic. Listeners, I will assure you there are some cheerier thoughts coming from our guest. But I want to say one thing about the shopping example or any of these examples, which is: Would I want the thrill of the hunt, to find the item that I would ferret out myself even though it wouldn't be as good as the one I could, say, find on the Internet using this amazing app?

And, that's an open question. But it's obvious that, at least today, you would much prefer, I think, that I bake you a loaf of bread, Nicholas, that was inferior in some dimension--up to a point--to one I could buy in a store: the fact that I crafted it, that I invested the time for you. So, we can imagine these bespoke activities that enhance our human connection persisting in such a world as they persist now. I don't think it's irrational to bake bread even though the bakery down the street is cheaper, or even cheaper and higher quality. Sometimes it's fun to do things oneself.

But of course, you could imagine that there's a pill I could add to the bread from the store that would make you--I don't know, feel like it's homemade or that you would like it just as much? I am not quite sure. Do you want to rule out that personal touch aspect of these activities?

Nicholas Bostrom: No. No. I think we're getting towards something. I mean it's worth, again, making the distinction there between you might bake bread because you enjoy it. Then if the only reason for doing it is the enjoyment in the sense of subjective positive affect, then there would be a shortcut to that. It would seem, like, unnecessary to get the whole kitchen dirty when you could just have pressed a button and felt exactly the same amount of enjoyment.

But, the idea that you're pointing to with the doing things for the sake of others who might just value it because you did it--I think this is an important potential source of purpose in a solved world.

So, the structure of the kind of inquiry in the book is a little bit like first tearing things down, breaking things into parts, and then seeing what we can rebuild from these basic building blocks of human values.

So, at one point, yes, we do confront this sense of dissolution, as it were--like, for everything we normally just assume to be worthwhile and nice, there seems to be no point in doing it. And, this condition that we would reach in a solved world seems quite drastically unappealing--like a meaningless existence where we're just floating through.

But, I think that rather than trying to paper over that or trying to hide it, I would rather: let's actually dive into that and really think through the full implications of that.

And then, we can, on the other side of that, try to conceive of an existence that would actually make sense even in this radically transformed condition. And, I'm somewhat optimistic about there actually being something, but it would be--maybe quite profoundly--different from our accustomed human existences.

27:27

Russ Roberts: I was pretty pessimistic before I read your book. I'm much more optimistic, which is rare that a book can have that big effect; so I salute you.

Before we move on to some of those more optimistic things, I just want to mention Keynes's famous essay, "Economic Possibilities for our Grandchildren," which he wrote in 1930. One of the things that's interesting about it: 1930 was not a particularly good economic time; but he wisely foresaw that the standard of living would multiply quite dramatically over the next hundred years--that is, close to 2030.

He expected that that would lead to a lot less work. It's led to a little less work since then--per week, per lifetime. So, he's on the right track. But it's not a solved world, this world we live in now. And, a lot of what he imagined, I think, was a world where we could enjoy the pleasure of the moment without having to save, say, or postpone satisfaction.

It's a very interesting psychological essay. I would add, as I have in the past, that it has an anti-Semitic twist embedded in it. But, put that to the side. He didn't quite get it right, and he imagined a very different world--he didn't have your imagination, I'll just say that. Do you agree? Is that a fair summary of Keynes's essay and the difference between it and your book?

Nicholas Bostrom: Well, I mean, I think maybe at the time it was quite a radical, imaginative leap over the status quo. And it's easier to imagine this future now, because we're much closer to it. We can see AI unfolding almost week by week, and there has been a lot of work done between his time and now.

I mean, it is striking that for a lot of people--if you grew up in a wealthy country and you're reasonably talented and well-educated--I think it would be perfectly feasible to work really hard for 10 years, and then in effect retire and go live in some low-cost country--in Bali, say--and have nice food and, maybe not a big house, but a nice little cottage by the beach; you could probably have a condition that would meet all your basic biological and psychological needs. But most people prefer to keep running in the rat mill.

And, I think this is maybe what Keynes underestimated. I think greed has triumphed over sloth. We do work less, but a lot of the extra productivity we have instead chosen to use for extra consumption--much of which is positional in nature: like, trying to get the luxury articles that give us higher status than other people. So there's a zero-sum element to some of this consumption.

But, be that as it may, I think you could reach this condition that I'm describing not by assuming that all human desires would be satisfied--because quite possibly at least many people's desires are unlimited--but that you would have a situation where, even if a lot of people would want even more--maybe they all have trillions of dollars because you have this super-robotic economy, and some people would want a quadrillion dollars and more--they couldn't actually increase their income significantly by working themselves. Just as if you have, say, a billionaire today who has absolutely no skills--like, he could work at McDonald's, but he would make $5 an hour--it's trivial compared to the capital gains he's making.

So, similarly in this situation, we would be making all our income either from the capital we own or from social transfers--with governments paying UBI--or other sources, rather than through direct labor. With a few exceptions that we haven't touched on, but I don't think they're super-central.

31:49

Russ Roberts: Before we leave plasticity or flexibility behind, I just want to give you a chance to comment on a recent theme--a surprising theme of this program--which is Robert Nozick's Experience Machine: the idea that you could simulate certain states of emotional well-being and experiences mechanically, hooked up to this machine, when in fact you'd just be lying on a table; but it would feel like you've cured cancer, you're a great rock star, and so on.

And, I've remarked--and I'm curious if you agree--it seems to me that in today's world, more people are amenable to that idea than they were when Nozick proposed it. And, the idea that you could change your nature, say--or your urges, or whatever aspect of your psychology--with a pill or some other device, as opposed to doing the hard work of self-improvement--which I have devoted my life to, Nicholas; can't speak for you--but, you know, the trivial improvements I've made in my own character over 69 years would be dwarfed by the pill that you're talking about, or the Experience Machine: I'm just pretending that I have those changes.

Are we heading--is that where we're headed, toward a real world of experience machines a la Nozick? And, is that imaginably a meaningful world to you?

Nicholas Bostrom: Yeah. Well, yeah. I think the option of the Experience Machine will become available at technological maturity. We kind of have super-crude versions of this where you could spend your time watching movies and stuff, or reading well-written immersive novels that create this illusory world that you could temporarily inhabit, or even just some ideological echo chamber online where you have a lot of other people who just affirm that your views are great and everybody else is stupid. And, for a lot of people that seems like a compelling option--

Russ Roberts: Utopia.

Nicholas Bostrom: And these will become vastly more powerful as we have first fully-immersive virtual realities. But that's even just, like, the outset. There's only so much you can do by manipulating the sensory input to the human brain. If you really want to have reliable control of your experiences, you need direct access to the brain itself. Right? So, you could change your degree of happiness and your emotions and then maybe upgrade various faculties and transform your character, etc.

Yeah. So, that I think will become possible. And, that's in the good scenario. Like, in the bad scenario is we don't even get to this point because we destroy ourselves in some other way.

But, the idea that the current human condition, in more or less its present form, would just continue on for the next 2 million years: a) it seems very unrealistic to me that that would be the case. And, b) I think probably not even that desirable. I mean, we've been doing this for 10,000 years. How much more of that? If there is a chance of unlocking the next level of existence, I think at some point we should take it--considering all the horrors that are easy to overlook when we're sitting in our nice air-conditioned place, having an intellectual conversation. If you look around in the world, it's quite a horror show in many ways, for both a lot of humans and for our animal friends, of course, etc.

But, anyway, those are kind of practicalities, perhaps.

But, if we consider this question of what could we reconstitute if we end up in this solved world--like, what values could be salvaged?

So, I think we have, first, this very basic notion of pleasure and subjective well-being, which is very easy to sniff at if you are, like, an intellectual and pride yourself on being important and running things in the world and doing things. And, this is the kind of thing for those people who failed out and became, like, drug addicts--like, that's clearly not the path.

I think actually it is a much more serious alternative than a lot of people realize--I think we'd be dismissive too readily, without even knowing what the heck we are talking about, not having experienced the kinds of bliss that would be possible.

And so--now, I think, ultimately we could have more than that. But even the mere pleasure, I think, basically would be such that if one even had a chance to sample the tiniest bit, you would just realize, 'Wow. Wow. I didn't realize. That's, like--yeah. Of course, I like that. That's, like, the way to go.'

And, we would see our current opinions about it as bigoted, prejudiced, completely ignorant of what we were talking about. But, it's an intellectually very simple point to make. Yeah. Of course, if you had complete access to the brain, you could crank up the pleasure. So, even though it's very important, I think it's intellectually not very rewarding to spend a lot of pages talking about it, because the point is easily made.

37:32

Russ Roberts: Can we pause there for a sec?

Nicholas Bostrom: Yeah.

Russ Roberts: Can I ask you a question? I'm reminded of Izaak Walton's line about strawberries: 'Doubtless God could have made a better berry. Doubtless God never did'--Ever did? Never did? You understand what I mean--'I can't imagine a better berry.' But of course, we could. It could be sweeter, or have a more lasting flavor. The texture could be perfect every time.

But, one of the things I worry about--you know, the dystopian view of the future, I think, is Brave New World, where people sit around taking this drug soma and medicating themselves. It's hard, at least at our current level of humanity, to imagine persistent ecstasy. Maybe cocaine users would disagree with me. Doesn't it get boring? The finest meal once a year to celebrate your anniversary is magical. Every night, it's not as magical. But, I think you're trying to say something a bit different about pleasure.

Nicholas Bostrom: Yeah. Well, so right now we are designed in a way that is quite stingy with reward. We work hard at a little thing, and then we have a little achievement and a few drops of reward that go into our nucleus accumbens or something. I mean, we are obviously not engineered for sustainable bliss.

And so, I think that means in particular that our intuitions of what it'll actually be like if we did have that capacity for sustainable bliss are off.

It may be easier to see the plausibility of this if you consider the other side, where there certainly are people who are sustainably miserable and just horribly depressed for years on end, like with anxiety and sadness and pain. And so, there's no particular reason to think that it would be any more metaphysically impossible to have a consistent state on the upside of that. It's just rarer, given the way our brains are designed right now, and we don't have the technological means to really have much fine-grained control over that. But, that would presumably be one thing.

You would want to recalibrate the set point for boredom and habituation to joy in this condition, so that you could spend a greater fraction of your time actually enjoying life, as opposed to just dragging your sorry ass through life and putting up with it.

So, that's pleasure, which is really important. And, without it, all the rest is probably not what you would want to do for thousands of years, if you didn't even enjoy it. But, if you do have that in place, we can certainly add elements here.

So, one thing we could add is experience texture. Right now you might think of, like, a drug: just pure pleasure but nothing else--just a kind of dazed consciousness.

Well, why not attach the pleasure to, say, aesthetic perception of true beauty, or scientific insight, or genuine understanding of another person--to some kind of other experiential content that adds richness and complexity to your psychological state, so that you not only have a positive hedonic tone, but the informational content is also focused on some worthwhile object, or contemplating God. It doesn't have to be just pleasure in some sort of dumb state of stupor.

It could be that sort of thing: maybe something closer to what, you know, Archimedes experienced when he figured out Archimedes' principle. Like, the flash of insight that is very joyous, but at the same time a kind of brilliant glimpse into the true nature of things.

So that's another element; it already starts to look a lot better. If you imagine these spirits being ecstatically happy while contemplating beauty and truth and goodness, that already seems better than the junkie on the dirty, flea-infested mattress.

But, I think we can add further to that.

So, we could, for example, add purpose. We could have artificial purpose, if you think that we also want to have activity and effort--as opposed to just passive experience happening to us--so that we would be doing things.

So, games are kind of maybe the most familiar example of this where you create artificial purpose by setting yourself a goal, and then you have an activity of trying to achieve it.

And, even if the goal itself is kind of arbitrary--like in golf: you don't really need to have the ball travel into the holes in sequence using only a golf club to propel it. But, you can set yourself that goal, and once you've done that, you have this activity that you can engage in and exercise your skill at, and then have experiences and pleasure whilst doing that.

So, nothing would prevent us from creating these artificial purposes at technological maturity.

And so, you could, like, have all kinds of wonderful and complex games that we haven't had the time to develop.

And now, you might ask, 'Well, that's all nice, but what about real purpose? Like, the things you actually need to do--not just some arbitrary goal you set yourself?'

So, there we can come back to the point that you were making before: that sometimes we do things for the sake of others.

Now, a certain subset of the reasons that we currently have to do things for others would go away, because they wouldn't need our help in a lot of the practical ways that they do now.

But, if somebody just happens to have a raw preference that you should be doing a certain thing yourself, and they just happen to value that--for whatever reason--then, if you happen to care about them getting what they want, the only way that you can achieve that, even at technological maturity, is by you yourself doing that thing that they want you to do.

And so, you would then have a real purpose. Like, then the only way you can achieve your goal of satisfying their preference is by actually putting in this effort yourself.

Now, in this kind of simplified, elementary, binary example, it looks a bit uninspiring and hokey, kind of. But I think more subtle forms of that are actually quite pervasive: where we, as a tradition, happen to have developed an attachment to certain forms of practice. And, the only way to honor these traditions is by ourselves participating in those forms of practice.

And, there are many more subtle ways in which I think we have these social/cultural entanglements.

Similarly with worship, for example--it might require us to be doing the worshiping ourselves, rather than, like, a gazillion androids who perform it for us.

And one might more broadly conceive of our lives as also being kind of artworks, almost--where, in this condition, our own participation is a key element of the idea being expressed by the artwork, and certain forms of beauty might then require our own agency.

So, I think there could also be these forms of natural purpose surviving. Like, honoring your ancestors might be another obvious instance of that.

46:00

Russ Roberts: And there's a certain irony here, it seems to me. I recently was talking to Paul Bloom on the program about the opportunity to create an avatar or a robot that would mimic a loved one who had passed away. Or, instead of having dinner with five annoying friends whom I can pretty much tolerate, I could have dinner with David Hume, and Adam Smith, and fill-in-the-blank--whoever would make a remarkable dinner party.

I might feel weird eating with those five people, because I'd actually be alone with five avatars. And, you and I understand that in the flesh-and-blood world we live in, we still care about other human beings. There might be some unease about living in a fully virtual world--conversations with dead loved ones, say.

And, the irony is that we could then, in your imagination, change our chemistry--our hormones, our brain, our synapses--to stop caring about that: the guilt I might feel that I'm not actually out with real human beings, or putting in the work it would take for you and me to bond over a meal.

And, it seems to me there's a terrible irony here: that, if this technology really did evolve in that direction, I'd be heading toward being more of a machine and less of what, at least now, we think of as a human being--flawed, imperfect, suffering, at least from time to time.

And--again, as much as I like your book, this doesn't sound like a world--I'm not sure this is a world I want to live in, and maybe it's because I'm burdened--

Nicholas Bostrom: I mean, I don't know why--I mean, you certainly could, in this condition, choose solipsism and change your brain so that you never felt lonely, etc., and live in your pod alone but perfectly contented, exploring virtual worlds.

I mean, you wouldn't have to, and maybe that would be a sad thing to do from the perspective of realizing as many human values as possible. It seems you could do much better than that by being in communication and interaction with other people in this world--in communities with friends and family. I mean, why not?

Russ Roberts: Well, I think it's a cliché that young people today struggle to interact with each other. They're so used to interacting with their phones, with texting, and with social media, that they struggle to interact successfully at a party or a social gathering.

The irony for me is that we don't teach our children generally how to talk or how to love. We might teach them how to share, and a variety of things like that. But it's taken me--I have wonderful parents, not complaining--it's taken me a lifetime to appreciate this: me talking to you, my wife and I having dinner together alone, and so on, and the sacrifices involved. And, maybe those things can't be technologically fixed easily. I don't know. A challenge--a challenge of real human interaction.

Nicholas Bostrom: Yeah. I mean, we already see some of this just with the alternatives we have. A few hundred years ago, the only entertainment was chatting with the other people in the house; maybe you could read a book. And, now the kids can watch cartoons on television or play computer games or text with their friends instead of hanging out with the old folks.

And so, there are certainly effects even just from present technology: yes, in some ways, it can help us connect more, but at the same time it also creates more diversions from spending our time with each other. So, it's a mixed bag in that respect. And, I think that would be even more the case as the technology advances.

And, I think actually one of the first places where this will hit is social companion robots--even just current AI models, especially if you add a voice and really good visual avatars, and then maybe a reinforcement learning loop that trains them to be maximally engaging so that people don't log out. I think quite possibly it might become quite compelling for many people to spend their time interacting with these virtual imaginary friends, as it were, rather than with us flawed real-world people out there.

And, yeah, one might feel in different ways about that. I don't pretend to have the--yeah--I think the conventional answer would be, 'Oh yeah, that's really bad because it's the real human interaction is better.' Maybe. Maybe that's true. But maybe the next generation will think, 'These old people, they didn't really catch onto it. They were too stuck in their old ways and they don't realize how much nicer life is now when 90% of it is just interacting with these hyper-optimized virtual personas.'

It's really hard with a lot of these things to have an independent vantage point. I mean, it's easy to express a lot of opinions about it, but it's hard to make those opinions mean anything more than just a reflection of your own idiosyncratic personality or your upbringing or the books or teachers that happened to influence you. How do you actually get down to ground truth on these matters? It's really hard, and maybe not possible at all.

Russ Roberts: That's all right. It's fun to think about it. You're really provocative in this book, in all kinds of good ways.

52:28

Russ Roberts: I don't want to miss a chance to mention that you have a fable woven throughout the book, which suggests that the entire discipline of philosophy--or maybe, you could argue, the whole Enlightenment project--is something of a fool's errand: how little progress we've made in understanding ourselves and how we might alleviate the suffering of ourselves and others; and that all we're really good at is this game called technology and innovation.

But, that's such a small part ultimately of the human experience. It's not unimportant if you're starving, obviously, but for many, many people it's not where it's at. There are so many other things that make life challenging and hard. Is that a fair summary of the fable--and is that what you believe?

Nicholas Bostrom: You're referring to the [?inaudible 00:53:29?] Fyodor the Fox?

Russ Roberts: Yes.

Nicholas Bostrom: Yeah.

Russ Roberts: It's quite clever and quite entertaining.

Nicholas Bostrom: Yeah. So, for the listener: there's this Fyodor the Fox; he has a pig friend, and they try to make a positive contribution. But no, I think we haven't really seen anything yet compared to the impact technology will have on the human experience--these much more thorough and direct ways of changing human nature and human life and all aspects of our lives that will become possible with mature technology.

But, I think the upshot of that is that a lot more of these things become matters of choice.

So, right now, there have been a whole host of givens. If you are a human, you're born, you die, you spend some time in childhood, and then you have to make a living. And then, there are other people you have to interact with--you have no choice. Some will be your boss, some will be your friend, some will be your enemy, some will be your mate. And then you have your own human psychology: you will, like, want to eat when you're hungry, and it hurts when you scrape your knee, and you want respect--just a whole bunch of these. And then, within those parameters, you can define a little niche for yourself perhaps, and adopt some attitude to these givens. That's, like, your personality and your life philosophy.

But, a lot of these givens, I think, become variables at technological maturity--meaning that they could take on different values depending on the choices either you make yourself, if you have a future with a lot of individual freedom, or the choices that society makes, if you have that kind of future.

But, either way, at some point, somebody--whether it's a few people in an AI lab or a government, or maybe some humanity-wide deliberative process--some group of people will have to form some opinion about what future we actually want if this technology happens.

And then, these are the questions we confront. And, I think the answers are not obvious, although they might appear--erroneously--obvious to some people. Like, a lot of people might come with strong views and strong reactions. But, the more one thinks about them--at least that's been my experience--the more one feels it's just really hard to figure out what's up and what's down in this radically expanded space.

Russ Roberts: Yeah. And, your book opens one's mind to that, which I loved.

56:22

Russ Roberts: I mentioned earlier the idea that things would change--norms, other things that you mentioned in passing in our conversation--and that education, in our current world, is basically structured toward productivity.

And, I think that's correct. It's hard to think of what education would be in a solved world. Right?

And, you spend some time on that. I think it's a beautiful idea to imagine what that might be, because, since we're not in that solved world yet, there might be some pieces of that--there used to be, anyway--that might be worth preserving even in a world where people would like to be productive.

And, yet so much of our culture pushes us toward material success, and the idea that being able to enjoy leisure in a thoughtful way might be a good thing to prepare people for when they're young is controversial.

Nicholas Bostrom: I think it makes a lot of sense--I mean, especially if we're thinking timelines might be quite short.

I mean, if you have a young child, I would, like, want to hedge my bets, not assuming that they will never have to work because we don't know how long this will take.

But, on the other side, it could well be that you have a 2-year-old, and by the time--like, 20 years from now, let's say--when they are supposed to start to make a living, roughly speaking--maybe they will never need to make a living. Who knows? Or even if they have to for 10 years, maybe human labor becomes obsolete, you know, mid-career or something.

So, I mean, I would also like to some degree try to optimize for actually enjoying yourself while you're a child.

Russ Roberts: Good idea.

Nicholas Bostrom: Yeah. Yeah. That's one thing they have over us, adults, in many cases: They are really good at having fun and playing and being creative and fascinated with the world and making up, creating stuff.

And, yeah: maybe we could all learn something from how children live in this workless condition. And we might all become a little bit more like children in this future.

Including, incidentally, biologically. So, right now we have this view that you're born, then you develop and grow for 20 years, let's say. And then you sort of stop developing--

Russ Roberts: It's over--

Nicholas Bostrom: Right. And then, you hold it there, and then just as you have maybe begun to acquire a modicum of wisdom, it's too late. You don't have any time to use it, because your body is falling apart.

Russ Roberts: Yeah.

Nicholas Bostrom: Then you get dementia and you die.

But, imagine if you could continue to grow, like, not just for 20 years, but, you know, your brain continued to grow and your body became stronger and even more healthy and you attain new levels of understanding while, you know, becoming more capable for, like, several hundred years.

Like, I think maybe then people would look back and think that right now was a horror. Like, infant mortality was not 5% or whatever it is, but 100%: we all died in our infancy. And it was a big tragedy that before 2050[?] nobody ever got a chance to grow up.

Russ Roberts: Yeah. Amazing.

59:41

Russ Roberts: You have a page in the book which I love. I'm not going to read all of it. I'm tempted to, but I won't. It's called, 'What to Do When There's Nothing to Do.' It's a full page. I'm going to read a little less than a quarter of the page. So, here's what to do when there's nothing to do:

Building sandcastles, going to the gym, reading in bed, taking a walk with your spouse or a friend, doing some gardening, participating in folk dance, resting in the sun, practicing an instrument, playing a game of bridge, climbing a rock wall, playing beach volleyball, golfing, bird watching, watching a TV series, cooking dinner for friends, going out on the town partying, redecorating the house, building a tree house with children, knitting, painting a landscape, learning mathematics, traveling, participating in historical reenactments, writing a diary, gossiping about acquaintances, looking at famous people, windsurfing, taking a bath, praying, playing computer games, visiting the grave of an ancestor, taking a dog for a walk.

And, it goes on. That's not even a third of the page. It's a really beautiful list that makes you realize that maybe it'll be okay.

Nicholas Bostrom: Let's hope so. Yeah. I think so. I think it definitely could be. And so, it's worth trying to make sure we actually get there.

1:00:57

Russ Roberts: In many ways your book is an exploration of what is a life well-led--again, somewhat ironically, given the Utopian aspect to it. I take your answer to that question to be a beautiful one: that the answer is not easily written down.

There aren't simple rules. We each explore this question if we choose to do so, and it should not surprise us that we come up with different answers.

But, I also take you to be saying that this challenge of a life well-led is maybe just too darn hard, at least in our current very finite lifetimes. Is that an accurate summary of your view?

Nicholas Bostrom: Yeah. So, the question of what's a life well-led for a human living under present conditions is already quite hard. And then, I think it becomes intellectually much harder if we have to think about it in this radically transformed condition of a solved world.

I think if we are asking the former question for a human under current conditions, we can maybe not answer it, but we could imagine some life that scored high in several different dimensions, and we'd think that seems like a pretty good life. If somebody lives a long and healthy life and raises children and grandchildren who thrive and prosper, and they accomplish some great scientific discovery that really helps alleviate other people's suffering, and then they participate in community activities to overcome some great evil and achieve reconciliation at the end. And then, at the end of their life, maybe they write their memoirs and they're really beautiful, and many people feel inspired to become their true selves as a result of reading this.

And also, they had time to just kick back every once in a while and take in beautiful sunsets and appreciate music. Like, if you just cram it full of these different candidates, I think at the end that seems probably a pretty good human life for what a human life could be. And, obviously [?you? I?] would add the religious dimension to that as well.

But, yeah. [?] Like if one pushes[?] that question further, it almost become a little bit like somebody asking for life advice.

Sometimes people do ask me this in interviews, like, 'What advice do you have for young people?' or 'for people in general?' It seems a kind of impossible question, as if there were one piece of advice that would be right for everybody. I think some people are too far in one direction and need to be advised--maybe they need to discipline themselves and pull themselves together. Others are too far in the other direction, and need to be told not to be so hard on themselves.

And so, I think it can be hard to provide general useful insights that apply to everybody. And, I think some of these questions also risk veering into that general purpose advice for what's the best shoe size or something.

Russ Roberts: Yeah. That was a very charming part of the book where you wrote about that. I'm often fascinated by what my students will tell me is their favorite thing or the most interesting thing they learned in my class, and it's often not what I would have picked, but that's life as a teacher.

1:04:44

Russ Roberts: In our previous conversation, we had an exchange I have always remembered. It may not have been noticeable to you, but I raised the question in our 2014 interview whether anyone had mentioned to you that your conception of artificial intelligence was similar to a medieval perception of God--omniscient, able to do anything it wanted.

I found that a fascinating aspect of that book, and I feel that way about this book as well. I don't know much about Christianity, so I don't know if this is a fair summary of its theology. But in Judaism, there are some visions of the messianic future when the world is redeemed and suffering is over, but we still inhabit our bodies--we don't sit in heaven listening to harp music.

And, what your book reminds me--I just want you to react to this because it's, for me, so interesting--is that whether God exists or not, whether God is dead or not, we increasingly play the role of God. We do the things that religion once thought God did. We cure the sick. We raise the dead back to life.

And, here you've conjured up an incredible vision of a utopia that is not dissimilar from a certain messianic vision at the end of days, except that we're bringing it without the help of a supernatural or divine entity. Have you thought about that at all?

Nicholas Bostrom: Yeah. I think there are these structural parallels with AI, with this utopia stuff. I mean, some of my earlier work was on the simulation argument--it's even more obvious there. But, I do feel also that there are questions that are above my pay grade. [More to come, 1:06:55]

In the book, this fictionalized version of Professor Bostrom is deferring to another Professor Ghostwriter[?Grosswriter?], who is not actually appearing in the book, but whenever these theological questions come up, he tells the students, 'You should ask Professor Ghostwriter[?Grosswriter?] about that.'

And, I'm tempted to pull the same trick here.

I mean, but honestly, I feel like a lot of this is--yeah. I think there are more things in heaven and on earth than are dreamt of, certainly in my philosophy, but it's hard to know exactly what to make of that.

But, I think, like, it is, for better or worse, like, we do seem to be--like, humanity is heavy with child in some sort of way. If we give rise to this AI, super-intelligence stuff, then it does seem these things will follow. And, I guess we have some kind of role or responsibility in trying to shape something sensible out of that. So, this is my effort to try to start to think through what that might mean.

But, I am kind of--it's quite possible that I am, and maybe we will all be missing some crucial part of the whole equation here and that we will make a big mess of it.

That seems like kind of what you would expect of the human species to do. Like, especially if we get our hands on this completely unprecedented technology--that's, like, way more powerful.

And, we don't even get trial and error: that is how we sometimes manage to get to some semi-adequate result.

We iterate, try every possible way of doing it wrong, and eventually, through many hard knocks, kind of slowly converge to some tolerable way of going about things.

But, this would be, like, a different order of challenge.

But that seems to be the challenge placed in front of us, and so we'll have to do our best, I guess.

1:09:17

Russ Roberts: Before we close, I want to ask you about an idea you put forward that I found quite novel. You argue that the modern world reduces enchantment. 'Enchantment' is a word you don't hear much in the modern world, maybe because we have a lot less of it, maybe because our language is a little bit too Spartan and spare.

But, I think you're right. I don't think we have enough enchantment in the world. And, I thought your observations of why that is, were very thoughtful. So, explain why you wrote that and how we might find more enchantment.

Nicholas Bostrom: Well, so I was trying to put a word to a slightly amorphous, vague notion that I had. And, it's in the context of considering different kinds of subtle value that would be at risk of being lost in this condition of a solved world, so that one can then analyze each one of them.

And so, there's, like, meaning and purpose and significance and interestingness and a bunch of others.

And so, one candidate subtle value is this idea of enchantment, which is the idea of a sort of high-bandwidth way of interacting with reality.

So, maybe it's easiest to understand if we contrast it with the absence of enchantment.

So, imagine a world where your life consisted of this: you sit in front of a screen and a little task pops up--a little text describing some analytic task that you have to perform, maybe requiring a lot of creativity and drawing on your knowledge, and a lot of intellectual skills would be required to find the answer to it, let us say.

And then, you input the answer, and if you solve your task, you get a little reward pellet[?] that gives you all the nutrients you want and some predefined quantity of pleasure. And so that's now your life. So, we can ask: what seems to be missing in this life? Like, you would have a purpose. You really need these reward pellets because otherwise you starve.

It wouldn't be the life of a junkie, never doing anything. It would be really using the full range of your intellectual capacities--but there nevertheless seems to be something missing. And, in fact, there are many things missing.

But one is that it seems, in this scenario, in this thought experiment, you are interacting with reality as it were through a straw--like, there's a very thin connection between what you're doing and your effects on reality. It all goes through this little narrow interface on the screen.

Whereas, in real life now, it's not just even the choices you make, but it's, like, how you make them--your facial expressions, your emotions while you're doing them--all of these things have effects on the world and shape how other people react back to you.

And so, you might think that the human life is richer if it calls upon us to use all the different modalities that we have, not just our abstract thinking, but, like, our self-control, our emotional apparatus, even our bodies. If all of that is involved in, like, getting on in life and achieving something, that might seem a more enchanted life.

And I think traditional forms--like, if you imagine[?] living in a world where you thought that every little river had a special god living in it, then it's not just a matter of, as it is now, going down to the water and asking: is the water clean enough, or do you have to boil it before you drink it? That's a kind of reductionist way of relating to the river. But if you thought there was a god living in it, maybe you would have to think in terms of the personality of this god--are you approaching the river with the right reverence? how are you talking about it?--and that might make a difference to the quality of the water you get from it. And so, to some extent, our existence has already been robbed of some of this enchantment, I think; and in some of these future scenarios, the remainder could also go, and that might be a loss of a certain value.

Russ Roberts: I guess a lot of the time in the future we'll be watching A Midsummer Night's Dream, but with the greatest actors and actresses that we could possibly--which we can't even imagine. And we could be enchanted every evening with aesthetic beauty, as you mention in the book, and today is an example.

Last question. Are you looking forward to the future?

Nicholas Bostrom: Yeah. So, am I an optimist or a pessimist? So, this is like a competition[?]. So, I used to say that I'm a fretful optimist. So, I'm optimistic, but worried that we might not actually realize the optimistic outcome.

Then, I took to saying I'm a moderate fatalist. Meaning that a lot of the uncertainty about how well things will go is quite plausibly baked into how hard the challenges that we will have to solve--as opposed to uncertainty about the degree to which we get our act together and make an inspired effort. But, moderate, because we could at least somewhat nudge the odds towards the better.

But then, I started thinking, like, I don't even know whether I'm optimistic about the present.

It's actually pretty hard to try to form some all-things-considered assessment of the planet today, you know, with all the wonderful things that exist and children smiling and, like, all the beauty, etc. But then, you put on the other side of the scale the people in the cancer wards and the pigs in the animal factories, and just mundane stuff like the headaches and the tedium at the dead-end job that you don't enjoy, etc. And, when you try to sum that up, it seems like the scales basically break. So, it's a hard question for humans, I think, to try to answer.

Russ Roberts: My guest today has been Nicholas Bostrom. His book is Deep Utopia. Nicholas, thanks for being part of EconTalk.

Nicholas Bostrom: Good to be with you, Russ.