What if one of the most dangerous myths of our time is not that AI will replace us, but that it is even a thing at all?
This season of the Beauty at Work podcast is about the beauty and burdens of innovation: how new technologies expand our horizons, but also erode our attention, agency, and sense of meaning. AI is arguably the most dazzling innovation of our time, but it’s also one of the most spiritually charged. It promises salvation and tempts us toward idolatry. Why is this happening and how do we best move forward?
In this episode, I’m joined by three unusually incisive thinkers to help me answer these questions—Jaron Lanier, Glen Weyl, and Taylor Black.
Jaron Lanier coined the terms Virtual Reality and Mixed Reality and is widely regarded as a founding figure of the field. He has served as a leading critic of digital culture and social media. In 2018, Wired Magazine named him one of the 25 most influential people in technology of the previous 25 years. Time Magazine named him one of the 100 most influential people in the world. Jaron is currently the Prime Unifying Scientist at Microsoft’s Office of the Chief Technology Officer (OCTO), a title that together with the office’s acronym spells out “OCTOPUS,” in reference to his fascination with cephalopod neurology. He is also a musician and composer who has recently performed or recorded with Sara Bareilles, T Bone Burnett, Jon Batiste, Philip Glass, and others.
Glen Weyl is Founder and Research Lead at Microsoft Research’s Plural Technology Collaboratory, Co-Founder and Chair of the Plurality Institute, Co-Founder of the RadicalxChange Foundation, and Co-Founder of the Faith, Family and Technology Network. He’s also co-author of two books, Radical Markets and Plurality.
Taylor Black has a background in philosophy, law and entrepreneurship, and is Director of AI and Venture Ecosystems at Microsoft, and the Founding Director of a new Institute on Artificial Intelligence and Emerging Technologies at the Catholic University of America.
Each of my guests is speaking solely in a personal capacity—their views are their own and do not represent Microsoft.
We discuss what kind of story we are telling ourselves about what AI is and what it’s for; why our language about AI so quickly slips into theology; and what it would mean to develop ways of thinking about and using AI systems in ways that are more relational and human.
Some of the key themes we cover are:
Why “AI” is as much an ideology as a technology
The buried origin story: cybernetics vs. “AI” as rival cultural metaphors
Why thinking of AI as a thing can lead to mystification, passivity, and despair. And why a better metaphor would be AI as collaboration: something closer to Wikipedia than to a new god
Can we make our technologies more like the Talmud (many voices, named and situated)?
Thinking more seriously about not just regulation but culture, meaning, and integration
How letting AI do our thinking is making us dumber—and how initiatives at the Vatican are helping propose better ways of designing and using AI systems
The impoverishment of our stories: what are the implications of the fact that so many tech builders imagine the future through Terminator and The Matrix?
You can listen to our conversation in two parts (here and here), watch the full video below, or read the transcript that follows.
Brandon: All right, guys. Welcome to Beautiful Beards, I guess. Glen, sorry, you didn’t get the memo.
Glen: Oh, no.
Brandon: For our listeners, we’ve got three bearded guys and Glen. No, welcome to Beauty At Work. This season, we are exploring the beauty and burdens of innovation. I wanted to have the three of you on this call because you’ve written and said some really insightful things about this topic, and I think it’s crucial for us to explore it.
But before we jump into talking about innovation and AI and the beauty and burdens of AI, I want to ask you about beauty. Specifically, I want to have you all recount a memory of beauty—anything from your early lives, anything that comes to your mind. Is there a memory of a profound encounter with beauty that you recall? Perhaps, Taylor, I’ll start by asking you.
Taylor: Yeah, certainly. So when I think of beauty, of course, I think of the natural world. I was fortunate enough to grow up in the Seattle area, so I had a lot of that growing up. But the thing that actually came to mind when you asked that question is Tolkien’s writing with regard to an almost immediate experience of beauty, particularly at the beginning of The Fellowship of the Ring, when he’s talking about the idyllic nature of the Shire. In fact, reading that growing up, with all of the tangible examples of natural beauty around me, helped shape my worldview in such a way that I studied philosophy later in life in order to find an understanding of the world that was as rich as Tolkien’s writing about the natural beauty of the world. That’s my answer to that. There are several different places in Tolkien where he talks about that rich beauty, as mediated by language, that I had already encountered out in the world.
Brandon: Wow. It’s interesting because there are some interesting tensions with technology and Tolkien’s own views. Maybe we can get into that. Glen, how about you? What strikes you? What comes to your mind?
Glen: I remember when I was in my early teens, I went to Berlin, to the Pergamon Museum, and I saw the Ishtar Gate of Babylon. What I remember most vividly about it was that I had gone to various historical sites and seen ancient things, but they had either been ruins or very imperfect recreations of various kinds. I think this was the first time that I came to grips with the notion that people in very, very distant times and places had built things of profound awe, had encounters with awe that would have touched me had I been there. And so I felt an empathetic connection to their sense of awe and beauty that I had never quite managed at that age to reach through imagination by other pathways. I think that definitely engaged me with history more profoundly.
Brandon: Wow. It seems to me to resonate with your work on plurality and that recognition of the diverse ways in which we can all be attuned to something beyond, right? Jaron, what memory comes to your mind?
Jaron: The one that came to my mind when you asked the question was the first time I heard William Byrd’s motet Ave Verum Corpus, which, if you’re not familiar with it, go listen to it. There are a lot of recordings, so I’m not sure which one to recommend.
William Byrd was a composer who lived in London at the same time as Shakespeare, although, apparently, they never met. He was the other sort of renowned artist from that milieu. He was part of the underground Catholic scene, and motets are chamber choir pieces designed to be soft enough that they won’t be heard by passersby. So it’s just six voices, not a whole choir. There was a school of Catholic composers at that time who just — I don’t know what was going on with them, but they achieved some kind of incredible synthesis of serenity with the stirrings of this Western tendency to swell and build, to have a structure, not just a constancy. Telling a story in music is a particular thing that started to happen in Western classical music. It’s also just, I don’t know. You have to hear it. It’s the most luminous polyphony that’s ever been written.
Glen: William Byrd’s Motet, and what did you say after that, Jaron?
Jaron: It’s called Ave Verum Corpus. It just happens to be the text that the motet was set to. But give it a listen. Yeah, six parts.
Brandon: What struck you about it when you first heard it? What makes that resonate with you till today?
Jaron: So let’s say there are some types of spiritual music that are trying to — I say ‘trying’ because I don’t think anything human is ever perfect. Maybe nothing is ever perfect. They’re approaching some sort of serenity, some sort of still place that’s outside of time and process and yearnings. But then there’s another kind that’s very earthy. Like, oh, I don’t know, Yoruba ritual music or something. A lot of our Jewish music is like that.
What’s amazing about Ave Verum Corpus is that it’s both, which is not something you come upon that often. Like I say, it’s got this very human sense of swelling and yearning, and yet it also has an unmistakable calm center. Also, there’s a kind of purity. In the Western tradition, what we do is combine the musical flow with structure that’s really unique to the West, which is things like polyphony. Particularly, we have multiple lines, multiple things going on at once that go together, and chord changes. All of that stuff is kind of the unique signature of Western music. What it tends to do is pull the music away from being perfectly in flow and perfectly in tune, because you have to reconcile these abstractions of structure with the musical flow. That’s our problem here in the West. I don’t think any piece of music has ever succeeded as well with that, until maybe some things in the jazz tradition. There are kind of interesting things in the jazz tradition that do it. But Ave Verum Corpus, check it out. It’s just wonderful. It’s short. It’s a radio-length piece.
Brandon: Right. Yeah. What strikes me is, I suppose, that kind of integration or maybe unity that you’re alluding to there, which is, of course, part of your title at Microsoft: Prime Unifying Scientist. I’m curious about this.
Jaron: Yeah, I think Glen came up with that. It’s a long story. But yeah.
Glen: Did I come up with it, Jaron?
Jaron: You might have. I mean, alright, so the idea—
Glen: I came up with the idea of being an octopus of some form, and then I think you figured out what it stood for or something like that. So, yeah.
Jaron: You know what? Okay. Yeah, what happened was, I report to Kevin Scott, who’s the Chief Technology Officer. So I’m in the Office of the Chief Technology Officer. Kevin had, at one point, said to me, “I would name you chief scientist. But we already have our chief scientist, who’s Eric Horvitz, and so you need to be something else.” And then Glen was saying, “Well, since it’s OCTO—and I’ve been interested in cephalopods, I’ve studied them and whatnot—you should be octopus.” Then there’s this question: what is the “PUS”? There were a bunch of candidates, and Kevin chose Prime Unifying Scientist.
Glen: They call this a backronym in the trade, when you come up with the acronym first and then what it stands for.
Jaron: Backronym, yeah. But I think prime unifying scientist might have been yours. I mean, Kevin chose it. I’m good with it. I think a lot of my thing at Microsoft is being sort of both in and out of it, and having a weird title is good for what I do.
Brandon: So it seems pretty apt then in that sense. I mean, unity is an interesting aesthetic ideal, you know? It’s a transcendental and so on. But it’s also behind the grand unification theory. There are ways in which it is something that absolutely—
Jaron: Yeah, the grand unification theory does not exist, by the way. So we have to be careful.
Brandon: Right, right.
Glen: It really hits Jaron in the gut.
Jaron: I’ve worked on that one. It’s very—
Glen: When you hear people talking about it, it really hits you in the gut, right, Jaron? G-U-T.
Jaron: Yeah, the thing is, you know, I also work in that area. People have been trying to do that for more than three quarters of a century. It’s just a tough one. We just haven’t found it.
Brandon: Yeah, I know. Yeah, but it is a powerful ideal. It does seem to be something that motivates a lot of people and has disillusioned a lot of people too. Well, I want to ask you, Jaron. I mean, you’ve been involved in this field. If we could jump into talking about innovation and technology, particularly AI. I mean, you’ve been there since the earliest days with Marvin Minsky and the others who helped define the field.
Say a bit about what the atmosphere was like in those early days. I suppose, what was your experience of this field? I think you’ve had qualms about the term ‘artificial intelligence.’ What was your relationship like to some of those early pioneers, and how did the field evolve in your sense?
Jaron: Well, this is a whole long tale we don’t really have time for. But the briefest version is, I was very fortunate when I was quite young to have Marvin Minsky as a mentor. I wasn’t his student. Actually, he was my boss. I had a research job as a very young kid in a research lab at MIT because I went to college early and just ended up there. It was a weird thing. But at any rate, Marvin was part of a sort of academic gang with a certain idea about what computers should be. That was very informed by his interactions with Golden Age science fiction writers, especially Isaac Asimov, among others. Marvin was a real believer in computers as these things that would come alive and become a new species. A lot of the mythology and terminology and just the personality of AI culture really stems from Marvin as the prototype.
But the term AI actually had come about as part of a rivalry between academic gangs. In the early ‘50s, there was an intellectual and computer scientist named Norbert Wiener, who was incredibly prominent and was considered one of the really major celebrity public intellectuals. He had used a term to describe where he thought computers would go, which was “cybernetics.” The idea in cybernetics is that you don’t think of the computer as a thing that stands apart and has its own reality; you think of it as part of an interactive system. He was saying that the best way to think about computers of the future is not like the Turing machine, which is this monolithic thing that’s defined on its own terms, but instead as like a network of thermometers, a network of little measuring devices that measure the world and measure each other and form this big tangle.
Mathematically, the two ideas are equivalent. But the Wiener way of doing it doesn’t give the computer its own separate reality; it considers it as part of a connected thing. Cybernetics comes from the Greek kybernetes, which is about steering, navigation. The idea is that, by interacting with the world, this thing would navigate itself and the world. So that was cybernetics. He was very concerned with what effect that would have on people. He wrote a very prescient book, I think, in 1950—could it be that early? I think so—called The Human Use of Human Beings, which was about how, as soon as you have devices like this in the world, they’ll change people. People will use them to change people. It’ll bring about this new age of mass behavior manipulation that was never possible before. So he saw that right at the very, very dawn of computer science. It was kind of like—
Glen: 1950, Jaron, yeah.
Jaron: Yeah, 1950.
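(An aside for readers: the cybernetic framing Jaron describes can be made concrete with a minimal sketch. This is illustrative only, not anything of Wiener’s: a “computer” as a feedback controller that has meaning only inside a loop of measurement and adjustment, not as a standalone machine.)

```python
# Illustrative sketch (ours, not Wiener's): a controller that exists only
# as part of a feedback loop of measurement and adjustment.

def thermostat_step(measured_temp: float, setpoint: float, gain: float = 0.5) -> float:
    """Return a heating adjustment proportional to the measured error."""
    return gain * (setpoint - measured_temp)  # negative feedback

temp = 10.0
for _ in range(30):
    heat = thermostat_step(temp, setpoint=20.0)
    temp += heat - 0.1 * (temp - 5.0)  # heater warms the room; heat leaks to a 5-degree exterior
print(round(temp, 1))  # settles at a steady state just below the setpoint: regulation, not computation in isolation
```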
Glen: One of my favorite quotes from that book is—it actually came from an earlier version of it—he says that there are some people who believe that studying this science will lead to more understanding of human nature than it will to the concentration of power. And he said, while I commend their optimism, I must say, writing in 1947, that I do not share it. That power is by its nature always concentrated in the most unscrupulous of hands.
Jaron: Oh, my God. So look. Yeah, so Wiener, he just got the game. He cracked the game at the start. This is really only a few years after Turing and von Neumann had defined their idea of what a computer was. So the first and, by far, the dominant abstraction for the computer came from them.
Now, in the ‘50s, Marvin and a few other of his compatriots were — obviously this was kind of like in physics these days, the string theorists versus the quantum loop gravity people or something like that. They were just like these rival gangs, right? They were like, “Cybernetics is taking over. We need our own term.” Artificial intelligence was actually initially defined at this very famous conference that happened at Dartmouth, and I believe ‘58—
Glen: ‘56
Brandon: McCarthy or something, right?
Glen: I think it was ‘56, yeah.
Jaron: ‘56.
Brandon: Was it McCarthy?
Jaron: Yeah, McCarthy coined it. I mean, McCarthy, too, though not as much. Marvin was really the personification of that more than anyone else, but McCarthy too. So now the thing is, the Wiener way of thinking about computers as this giant tangle of things measuring each other: today we’d call that a neural net. In those days, it was often called connectionist, which is actually a term I kind of like. So because of this rivalry, Marvin and the other people were like, “We have to kill it.” And so Marvin and this other guy who’s great, Seymour Papert, wrote a book called Perceptrons. The idea was, “We’re going to mathematically prove that these guys are hopeless.” And like, “Screw them. It’s Turing machines from now on. We’re just going to double down on the thing of the computer as its own thing.” And so they proved that, in a certain absolute sense, there are mathematical limitations to what you can make out of that style—
Glen: Out of single-layer neural networks, yeah.
Jaron: Yeah, and it’s a funny thing. Because, yeah, sure, it’s a valid proof. But it’s so narrow that it really served more as a rhetorical and political weapon than an actual tool for math, or engineering, or physics, or anything. But anyway, it destroyed those people. All the people working in that area were very out of it, underground and unfunded for decades, you know. And so, a lot of what the Marvin people worked on was called symbolic. Because their idea is that it’s this abstraction, but it’s abstraction made flesh. This thing will become real. And so there was all this stuff about formal logic: we’re going to describe the world.
Anyway, so then, of course, much more recently in this century, just when computers got big enough to have larger versions of that stuff, everything turned into neural networks. That’s what the current AI is all about. For the most part, AI is this rubric term that’s just applied to whatever. It’s a marketing term for funding computer science. It’s not actually a technical term that excludes anything. But most of what we call AI is exactly that stuff. But now it’s called AI. So it’s kind of ironic. It’s sort of like the conquerors colonized their enemy and absorbed it into their own rhetoric. But the enemy they absorbed actually had a more realistic and fruitful, in my view, overall philosophy. So there’s something that went very wrong.
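(A reader’s aside: the Minsky–Papert result Jaron describes can be seen in miniature. The sketch below is an illustration in the spirit of the book, not their actual proof: no single-layer perceptron computes XOR, while two layers do it easily.)

```python
# Toy illustration of the Perceptrons-era point: XOR is not linearly
# separable, so no single-layer perceptron computes it, but two layers can.
import itertools

def perceptron(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Exhaustive search over a grid of weights finds no single-layer solution.
grid = [i / 2 for i in range(-8, 9)]
found = any(
    all(perceptron(w1, w2, b, *x) == y for x, y in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(found)  # False

# Two layers suffice: XOR(x1, x2) = OR(x1, x2) AND NOT AND(x1, x2).
def two_layer(x1, x2):
    h_or = perceptron(1, 1, -0.5, x1, x2)
    h_and = perceptron(1, 1, -1.5, x1, x2)
    return perceptron(1, -2, -0.5, h_or, h_and)

print(all(two_layer(*x) == y for x, y in XOR.items()))  # True
```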
Brandon: I mean, you have a provocatively titled piece, There Is No AI. Right? Could you say a little bit about what your argument is there?
Glen: Jaron and I also wrote this piece called AI is an Ideology, Not a Technology, which is, you know.
Jaron: Yeah, that’s right. We wrote it, yeah.
Brandon: Yeah, say more about that. Because I think both of you share the idea that it’s not a thing. It’s not an entity. You’re talking about a system.
Jaron: Yeah, I mean, a lot of people in the AI world, especially the young men who work at AI startups and what we call frontier model groups, not only think that AI is really a thing that’s there, but that it’s an entity that could be conscious, that it’ll turn into a life form. Maybe that life form is better than people and should inherit the earth. I run into these crazy things where some guy will say, “I think having human babies is unethical because it takes energy away from the AI babies. We need to really focus on that. And if we don’t do that, the AI of the future will smite us.” It becomes very medieval. Also, a lot of times, at the end of the day, you realize, oh, this person has a girlfriend who wants a baby. They’re going through the age-old male attempt to avoid having a baby as long as possible and using AI in the service of that, which is fine. Whatever. It’s their problem. I’ll stay out of it.
But anyway, the thing is, you can think about AI equally in two different ways. Think about figure-ground pictures. There’s an artist named M. C. Escher who’s famous for this. Most people have seen an optical illusion where you either see two faces or a vase, and either reading is equally good. Just like that, with any big AI model, like ChatGPT or something, you can either think of it as a thing by itself, which is the sort of Minsky AI, the original AI concept. Or you can think of it in the Norbert Wiener way. The Norbert Wiener way would be: it’s a bunch of connections of which people are a part. And if you think of it that way, what you end up with is thinking of AI as sort of a version of the Wikipedia with a bunch of statistics added. Basically, it’s a bunch of data from people. It’s combined together into this amalgam, but with a bunch of statistics as part of it. The statistics are embodied in the little connections, the pieces, if you like, of the neural net. So you can think of it as a collaboration. I think there’s no absolute truth to one or the other.
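(One way to see the “Wikipedia with statistics” reading: the sketch below is a deliberately crude toy of ours, not how large models actually work. It makes the collaboration literal by answering a query as a blend of attributed human snippets.)

```python
# Deliberately crude toy: an "answer" as a statistical recombination of
# things people wrote, with the people kept visible rather than mushed away.
from collections import Counter

corpus = [
    ("alice", "restart the router before calling support"),
    ("bob", "restart the router and check the cable"),
    ("carol", "check the cable and update the firmware"),
]

def answer(query):
    q = set(query.lower().replace("?", "").replace(",", "").split())
    ranked = sorted(corpus, key=lambda s: -len(q & set(s[1].split())))
    top = ranked[:2]                                  # retrieve the closest human contributions
    words = " ".join(text for _, text in top).split()
    blend = " ".join(w for w, _ in Counter(words).most_common(6))  # the "statistics" step
    return blend, [author for author, _ in top]

text, sources = answer("my router keeps dropping, what should I check?")
print(text, "-- drawn from:", sources)
```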
Just like if you want to try to use absolute logic or empiricism to talk about whether people are really conscious, good luck with that. You can’t. That’s a matter of faith. God is a matter of faith. There’s a lot of stuff that is not provably correct, either through logic or empiricism. And yet, the thing about consciousness is — I mean, I don’t know. If we weren’t conscious, we wouldn’t be situated in a particular moment in time, or there wouldn’t even be macro-objects. There would just be particles. I mean, I think consciousness is, in a sense, like Descartes: I think, therefore I am. But it’s not about thinking. It’s just about experiencing. You experience, and that is the thing we’re talking about. But if you want to deny experience, all this talking could also be just understood as a bunch of particles in their courses. So just that there’s anything here is consciousness, as opposed to just flow without stuff. But anyway, let’s leave that aside. All of these things are matters of faith. Anytime there’s a matter of faith, you can go either way with it. You might think an animal is a person or not, or a fetus is a person or not. These are really hard edge cases. Anyway, when you can’t know for sure, I think it’s legitimate to rely on things like pragmatism, intuition, faith, even aesthetics, since this is a beauty broadcast.
Anyway, what I would say is that believing that AI isn’t there, that the AI is a form of collaboration of people, more like the Wikipedia than some new god or something, if you believe that, there are some benefits that are undeniable.
Benefit number one is, you can use AI better. If you keep in mind that that’s what you really have, you can design prompts that work better. I’ve been telling this to Microsoft customers, and it works for them. Like instead of saying, “Oh, great Oracle, tell me what to do,” say, “What has worked for other people?” All of a sudden, you get a clearer answer that has less slop. I mean, just actually work with what it is.
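(A concrete paraphrase of the reframing Jaron describes; this is our wording, not a quote from his sessions with customers.)

```python
# Hypothetical illustration of the prompting shift Jaron describes: ask the
# model for what it actually holds -- patterns in text people wrote --
# rather than treating it as an oracle.

oracle_prompt = "Tell me what I should do about my team's missed deadlines."

collaboration_prompt = (
    "What approaches have teams written about that worked when they kept "
    "missing deadlines? Summarize a few, with the tradeoffs people reported."
)
# The second framing tends to surface attributable human experience
# instead of confident-sounding slop.
```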
Benefit number two: there’s a widespread feeling, because of the literal rhetoric coming from us, coming from the tech community, that people are going to be obsolete. Especially among young people, there’s so much depression. It’s just crazy talking to undergraduates now, how many of them feel like life is pointless and their generation is the last one. They’re just going to die when the AI takes over. They have no jobs. They have no purpose. Nobody will care about them. That’s stupid. As soon as you realize that AI can equally be understood as a collaboration, then they can equally understand that there’ll be all these new jobs creating new kinds of data. And what’s amazing about that is, every time some AI person tells me, “Oh, but we have all the data we need. We can already train super intelligence,” whatever the hell that means, which is nothing.
Brandon: It’s another statement of faith.
Jaron: Yeah, oh boy, that’s like a medieval statement of faith. That’s, I don’t know. Oh, the golden calf. That’s what that is. It’s older than medieval. But anyway, the thing is, if you think that people might create valuable data in the future, it means that you also think that there might be forms of creativity we haven’t yet foreseen—which means that we don’t have all the data we need to train the AIs, which means that we aren’t the smartest possible people of all time, which means that there might be room for people in the future to do things that happen to create data that expands what the AI models can do, which suggests this open future of expanding creativity. And I love that vision.
The great fallacy of believing that computers can become arbitrarily smart is this idea that, relatively, people will not change, will not be creative, will not move. What a horrible thing to believe. I sort of feel like that’s a sin. Losing faith in the creativity of people has to be some kind of dark, dark sin, almost like a form of violence against the future. A lot of people in AI are into long-termism and like, “We have to think about the future.” What is more harmful to the future than that fallacy? I can’t imagine a more destructive thought about the future.
Glen: Audrey Tang is my collaborator, and we made a film about her life. It’s titled Good Enough Ancestor, because that’s how she likes to describe herself. She likes to be a good enough ancestor. Because if you’re too good of an ancestor, you actually reduce the freedom of the future because they feel the need to worship or exalt what you did. You just want to be good enough that you leave paths open to them, but you don’t predetermine what they do, you know.
Brandon: That’s extraordinary, yeah. Jaron, thank you. Those were really, yeah, fantastic. Glen, I mean, your argument, I think, builds on and parallels in many ways what Jaron has been talking about, in terms of seeing AI more as something like capitalism, more like a system of collaboration between people rather than a thing. Could you talk a little bit about your own journey into this? I recall you grew up in — you’re also in the tech industry, or your parents were tech CEOs, if I remember correctly. I’m just curious to know your path into this world of Radical Markets and RadicalxChange, and then how that vision of pluralism that you’ve been developing with Audrey Tang is now shaping your sense of AI and its future.
Glen: Well, yeah. So I grew up in a neo-atheist family in Silicon Valley, raised on very much the same type of classical science fiction works that Jaron was referring to. I was involved in sort of Ayn Rand world for a while. I was involved in the socialist world for a while. I became an economist. All of these things are very abstracted from sort of faith and real, grounded communities. But the thing I found that was kind of surprising to me is that most of the other people who sort of went from one apparently opposite abstraction to another, and were like alienated from any sort of grounded community, were mostly Jews raised in secular environments by grandparents who had fled the Holocaust, just like I was. And so I thought, “Well, maybe I’m not actually escaping my past. Maybe I’m just finding my own way to it.” It was at that point that I decided that I needed to learn something about where I came from and connect with Israel, connect with Jewish history. I ended up on a faculty of Jewish Studies briefly.
Then I had the opportunity to meet Audrey Tang, which really changed my life for a couple of reasons. One is that I think that Audrey is an incredibly spiritual person. There’s this character in the Tao Te Ching, which is her holy book, that is sort of like the Buddha in Buddhism, or Jesus in the Christian tradition, called the “shengren.” It’s like a mythical sage. Audrey really embodies that. And yet, she also just intellectually has some of the highest horsepower of anyone I’ve ever met and knows so many different things. I think I wasn’t ready to meet someone with that kind of spiritual depth and to accept them and to understand them, until I met someone who was also at that intellectual level. Because I had come in this intellectual way, and so unless someone was there intellectually, I wasn’t able to accept their wisdom. And so that was one thing about Audrey.
The second thing is that she was from Taiwan. Taiwan is a very different atmosphere. The division between technology and science on the one hand and religion on the other that exists in the West, it’s just not a feature of the Taiwanese environment. It was really interesting to encounter a culture where those things were synthesized rather than in conflict. That really gradually made me come to feel that the disjuncture between religion and spirituality on the one hand, and science and technology on the other in the Anglosphere was an important root cause of many of the problems that Jaron was getting at.
So let’s take his example of cybernetics. Why did the AI thing win out over cybernetics? It didn’t win because it was there first. Cybernetics was way more dominant in the ‘50s. It didn’t win because of its explanatory power. Because, as Jaron points out, on the actual, apparently falsifiable points, clearly, the early AI people were wrong. I don’t think anyone would dispute that. Even the AI people today would say that the early AI people were wrong. It had to do with the way in which the rhetoric worked in a particular cultural milieu.
Brandon: In a secularized world that is deprived of any sort of, you know.
Glen: Everyone is like, economics, agents, utility, you know. That’s the way that everyone likes to look at stuff. Cybernetics is like, it’s got a lot of just weird, mysterious shit going on, you know. I mean, there’s all these things flowing. There’s kind of these things that can be thought of as like an agent a little bit, but they’re actually just part of the — that’s what complexity science is. That’s what cybernetics is. That’s what’s like actually going on in these systems. But if you try to explain it in a scientistic reductionist way—briefly, casually—it just comes off as like mumbo jumbo, and nobody can understand what you’re talking about. So I think the only way to describe it sort of briefly and intuitively to people is to use some kind of spiritual framework.
Brandon: I mean, it seems to have more resonance in Eastern societies then.
Glen: Yeah, and I think it’s because of the integration of spirituality and science in those societies. Like, for example, quantum mechanics is another thing that’s very much like complexity science. It’s arguably the first real complexity science. Quantum mechanics has this weird particle-wave thing. Nobody can make sense of it. Like, Richard Feynman was like, “What the...” But for Taoists, it totally makes sense. Because in Taoism, there’s air slash water, and then there’s Earth. They have totally opposite principles. Like, Earth collides with something, and it stops. Air goes faster when it encounters an obstacle, right?
Brandon: Right. Yeah. Neil Theise has got this great book Notes on Complexity, where he argues, from a Zen Buddhist perspective, quantum mechanics makes perfect sense because it has those similar kinds of—
Glen: Exactly. And so I think that if we’re going to try to have a discourse about technology without bringing in religion, the natural consequence of that is: we’re going to end up defaulting to really bad and harmful metaphors that come out of econ, rather than to the sort of thoughtful perspectives that Jaron was trying to welcome us towards.
Brandon: Or to build golden calves, right, which I think is the tendency.
Glen: Yeah.
Brandon: Taylor, if I could ask you. I mean, you’ve had an interesting path from, well, Tolkien to philosophy and law, and then business, and into AI. How has that journey shaped your sense of what this thing called AI is, what’s beautiful about it, and also what the seductions are of this particular kind of beauty that people are seeking after in building this thing?
Taylor: Yeah, certainly. Well, actually, to riff off of what Glen was just saying with regard to bringing spirituality into a more explanatory understanding of things, my love of Lonergan kind of led me to epistemology, of all places. In the classic tradition, at least, when you understand something new, you are grasping being, which means an understanding of reality and an understanding of truth in some fashion. That also has analogs in beauty, of course, because you can recognize it as beautiful. In many ways, I think that understanding epistemology in that sense, where, if you actually understand something, you’re grasping a metaphysical reality, necessarily throws you into the spiritual conversation. Because a lot of spiritual traditions have very strong understandings of what that means, along with, of course, the scientific tradition.
Where I see this reflecting back into AI is in our own understanding of our understanding, versus this thing that seems to understand, at least in some ways analogously to how we understand, particularly at a more surface level, if we haven’t spent a lot of time thinking about our understanding. And similar to Jaron, I’ve found outsized ramifications in working with our product leaders to differentiate the way in which we know from the way in which AI works, in order to create better product experiences for our customers and our users. Because we understand what understanding is, and AI is not that. Being able to shape that ends up having outsized impact on product building and on customer satisfaction as well.
Brandon: Yeah, I think it’s really remarkable. I mean, there’s something about understanding this. I’ve spent the last few years studying scientists, physicists, and biologists, mainly trying to get at what drives them to do the work they do. Many of them see themselves as primarily being in the business of chasing after a certain kind of beauty. They call it the beauty of understanding, which is that grasping of the hidden order of things, the inner logic of things. There is a profound aesthetic experience of unity or harmony or fit, without which one does not even know that one has arrived at understanding something, right? And so there’s something to that experience which is very hard to then sort of replicate with machines and so on.
Taylor, you’ve also written this fascinating piece on beauty. You call it “Beauty will save the world,” from Dostoevsky’s The Idiot. You draw on Balthasar and Goethe and Pieper, and argue that beauty is a transcendental, that the world speaks to us in symbols, and that we need to contemplate beauty rather than grasp at it. I’m curious to know how that understanding of beauty relates to the kind of beauty that perhaps might be driving the pursuit of something like AGI. There is a certain kind of seduction, it seems, in the quest to bring about a world in which we can eliminate all human suffering, get rid of cancer and climate change, et cetera. It seems like there’s a tension between two different modalities of beauty there. I wonder if you could speak to that.
Taylor: Yeah, certainly. I think that the directional ability to aim at those big things is a pursuit of a certain sort of beauty. But the pursuit of it is not the finding of it. I’ve found in all of my innovation work that the best innovators are the ones who are actually able to open themselves up in a humble sort of way to the experience of the customer, to the experience of the world around them—such that the conditions are set for that understanding of beauty or for that moment of insight. Without that humble pursuit of our unrestricted desire to know, you aren’t able to set the conditions for an opening or an experience of beauty. Because you aren’t looking for it. You’ve gone past it. You’re building frameworks of abstraction, rather than being able to live in the intellectual moment that needs to happen for an insight, for a recognition of beauty, to occur.
One of my favorite concrete examples of this is my three-year-old. The three-year-old will try multiple combinations of a particular thing in order to get at what they’re trying to do. And when they get it, at that moment of insight that says “I did it,” the delight that comes through as part of it is identical to their experience of a flower or of playing with a puppy, where they recognize the goodness, the beauty of the thing they’re working with. I think that’s the overlap of the transcendentals as we understand them, right? They’re different aspects of that same recognition of reality in some ways.
Brandon: Yeah. Well, I suppose the challenge is, how do we prioritize reality in this particular context?
(outro)
Brandon: Everybody, that’s a great place to stop the first half of our conversation. In the next half, we’re going to turn to the spiritual dimensions of technology, how it has become a kind of religion, what faith traditions might teach us about building wisely, and how we can recover the human face behind our machines.
See you next time.
PART 2
(intro)
Brandon: I’m Brandon Vaidyanathan, and this is Beauty at Work—the podcast that seeks to expand our understanding of beauty: what it is, how it works, and why it matters for the work we do. This season of the podcast is sponsored by Templeton Religion Trust and is focused on the beauty and burdens of innovation.
Hey, everyone. This is the second half of my conversation with Jaron Lanier, Glen Weyl, and Taylor Black. Check out the first half, if you haven’t already. In this second half, we’re going to ask whether technology itself has become a religion. Jaron argues that we’ve begun to worship our own creations, and calls for a new model inspired by the Talmud. We’re going to explore how ancient traditions—from Judaism, to Taoism, to Catholic social thought—might help us restore meaning, plurality, and beauty in our technological age.
Let’s get started.
(interview)
Brandon: Jaron, you’ve written that we’re again in this context in which technology has become a religion. And it seems like there’s a certain kind of seduction to our understanding of AI and new technologies that is a sort of idolatry. You’ve argued for the need to make our technologies more like the Talmud. Could you say a bit about that?
Jaron: Yes, so I mentioned earlier that you can think of big AI models as forms of collaboration between people, a little like the Wikipedia with a bunch of statistics. Okay. So an interesting thing about the Wikipedia is, I knew the founders. I used to argue that there was this fantasy in the computer world, which was much more leftist at the time. It was very different back then. The idea is that we’re going to help the oppressed dissident in a difficult regime. And so we want everybody to have pseudonyms. We don’t want to know who the real people are. But the problem with that is, when you forget people and you turn them into a mush, you concentrate the power on whoever owns the computer that runs the mush, right? Very much as Norbert Wiener warned. There are times when you want to do that, of course. But to do it as a general principle actually undermines humanity. And so there’s no easy universal answer, which shouldn’t surprise us.
But at any rate, the Wikipedia created this illusion of what’s sometimes called the view from nowhere, this idea of a single perspective instead of a multiplicity of them. And then people would say, well, if you want to have a bunch of people collaborating, that’s going to happen; there’s no way around it. But there is a way around it. It’s ancient. In Jewish tradition, there’s this document called the Talmud, which is one of our central cultural documents. The idea of it is that you have generation after generation of people adding to it. But for each generation, there’s a particular place on the page, a geometric designation: this is from ancient Babylon, these are the medieval people, and so on. And so you have this amazing amalgam across centuries and centuries in a single document, where it’s very clear that these are different perspectives. They’re all on the same page. And this was done when writing stuff down was expensive, you know? It would have been cheaper to just combine these voices. There was an absolute economic motivation not to do this. Brevity was a matter of severe economic motivation in those days. And so the fact that they did this is incredible.
Now, part of it is just that Jews like to argue, and we want to be individuals. So part of it is just our character. But the point is: this is a proof of concept that predates Greece. I mean, this is ancient. So what is hard about this? What’s hard about it is just this present ideology of creating a new kind of golden calf, this abstract thing, so that everybody else will be subsumed by it, but the magic tech boys who get to run it will be the elite special ones. Which is never true, by the way. You always end up getting screwed by your own monster. That’s another ancient idea that has been known for a long time. So that’s a fallacy too. But it’s a different fallacy.
Anyway, yeah, so the Talmud is a wonderful prototype for how to combine people without losing people, how to combine human efforts without losing human identity. There is room for anonymity. People can vote. The voting can be anonymous, but you still know who the other citizens are. You don’t pretend that there wasn’t anybody. Money anonymizes. You lose track of where a particular dollar has been. It’s not even meaningful. That probably helps people cooperate despite their feuds. A measure of anonymity actually can be good. But as a general principle, it’s easy to overdo it and really lose people. That’s kind of what we did with the Wikipedia. All the AI things train on Wikipedia. It sort of inadvertently legitimized this idea that losing people is somehow a form of productivity, when it’s exactly the reverse.
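(The design principle Jaron draws from the Talmud can be sketched as a data structure. This is a hypothetical illustration of ours, not a proposal of his: contributions accumulate in one document, but every voice stays named and situated instead of being flattened into an anonymous amalgam.)

```python
# Hypothetical sketch: combine contributions without erasing authorship.
from dataclasses import dataclass, field

@dataclass
class Gloss:
    author: str  # who is speaking
    era: str     # when and where they stood (the "place on the page")
    text: str

@dataclass
class Passage:
    core: str
    glosses: list[Gloss] = field(default_factory=list)

    def add(self, author: str, era: str, text: str) -> None:
        self.glosses.append(Gloss(author, era, text))  # append a layer; never merge voices

    def render(self) -> str:
        layers = "\n".join(f"  [{g.era} | {g.author}] {g.text}" for g in self.glosses)
        return f"{self.core}\n{layers}"

p = Passage("A passage under discussion")
p.add("an early commentator", "c. 200 CE", "A first reading of the passage.")
p.add("a medieval commentator", "11th century", "A comment on that reading.")
print(p.render())  # every generation remains a distinct, attributable layer
```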
Brandon: Yeah, because it creates that illusion that there is an entity that is able to somehow provide a synthetic answer, right? The challenge, I suppose, is: once we’ve erased authorship, once we’ve erased the individual sources of all of this knowledge, is it meaningful to talk about responsible AI, or ethical AI, or anything of that sort? Glen, I’m curious to know what you think about this. You seem very bullish about the prospects of AI systems in terms of fostering democracy and pluralism. Given this context, how do you see us concretely being able to bring about that sort of recognition of the human collaboration that is currently hiding behind these illusions?
Glen: Do any of you guys know the oldest document, to my knowledge, made by human hands that looks like a recombinant neural network?
Jaron: That’s a great puzzle, Glen. That’s a great puzzle.
Brandon: I think you’ve told me this, and I’ve forgotten.
Jaron: What is it? What is it?
Glen: So there’s a diagram of how voting for the Doge of Venice in the 13th century looked. There were like 100 councils. Each person in the voting population would elect members of 5 of those 100 councils. And then those 100 councils would elect another 100 councils according to similar principles, for several rounds, until you eventually elected the Doge. It looks almost exactly like a recombinant neural network, because you have lines from each of the voters going out to the councils they elect. And so it’s like a whole neural network. I think that is a beautiful illustration of the fact that, just as with the example of the Talmud, we have these incredibly ancient and sophisticated ways of thinking about democracy and agency and collectivity that massively predate anyone thinking about AI at all, and that give us the actual insights we need to produce effective systems like this.
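(Glen’s description compresses a famously baroque procedure. The loose simulation below is simplified; the real 1268 protocol alternated sortition and election over ten stages, and the council sizes here are invented. It shows the layered, fan-out structure he is pointing at.)

```python
# Loose, simplified simulation of a Venice-style layered election: each
# round, the sitting body is winnowed by lot, and the survivors elect the
# next body, with votes fanning out like links in a layered network.
import random

def election_round(members, keep, next_size, pool):
    survivors = random.sample(members, keep)           # winnow by lot
    votes = {}
    for voter in survivors:                            # each survivor nominates candidates
        for choice in random.sample(pool, 3):
            votes[choice] = votes.get(choice, 0) + 1
    ranked = sorted(pool, key=lambda c: votes.get(c, 0), reverse=True)
    return ranked[:next_size]                          # top vote-getters form the next body

random.seed(0)
citizens = [f"noble_{i}" for i in range(100)]
council = random.sample(citizens, 30)                  # initial council drawn by lot
for keep, size in [(9, 40), (12, 25), (9, 45), (11, 1)]:
    council = election_round(council, keep, size, citizens)
print(council)  # the final one-member "council" is the Doge
```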
My hope is that we can just stop being so thoroughly mugged by the Enlightenment. I love the Enlightenment. The Enlightenment is all kinds of goodness. It’s just when it becomes an overwhelming ideology that wants to erase all other meaning and truth and all of the past, rather than integrate with it, that it becomes sort of an excuse for really destroying itself. As Jaron was pointing out, you know, destroying its own foundations. And so I think that it’s people of faith that give me hope. Because they just don’t really want to do that. They’re cool with modernity for the most part, and they want some of the stuff. But they also want to remember that there’s more to things than that, and that there’s history and richness. And if we can just integrate those things a little bit more, give a few fewer sideways glances, or whatever it was Taylor was mentioning earlier, I think maybe we would do a better job of building our tools, do a little bit better job of not having these ridiculous hype-and-bust cycles that are painful for a lot of people, and maybe get more quickly to the actual deployment and integration of these technologies.
Brandon: Do you have a sense of, concretely — I mean, one of the other challenges is that even our language around AI has been colonized by large language models and whatever is happening at a few small players. What concretely do you think needs to happen or change in order to actually transform things?
Glen: Well, I mean, culture is an inspiration. Like The Wild Robot film, I think, is just absolutely fabulous. I think it’s exactly the way that we should be conceptualizing these things. That was very well received. One phrase I’ve been using a lot recently is: be the super intelligence you want to see in the world, you know?
Brandon: That’s great. That’s great.
Glen: Like, you know, corporations are super intelligences. Religions are super intelligences. Democracies are super intelligences. By every definition of super intelligence that’s been given, they’re all super intelligences. We don’t bat an eye at those. And so why are we talking about AI as if it’s this weird external conqueror? I’m not saying corporations or religions haven’t done any harm. They’ve done all kinds of problematic things.
Brandon: Yeah, I think we’re maybe waiting for our charismatic robo-savior or something, right? Taylor, you’ve been working closely with the Vatican. You were just at the Builders AI Forum. Pope Francis called for “algor-ethics.” Pope Leo’s vision emphasizes the importance of human dignity and ethics. Could you say a bit about what’s happening at the Vatican and what those efforts are inspiring in you?
Taylor: Yeah, certainly. In some ways, it’s similar to what Glen was just articulating here. It’s kind of: get over yourselves, and let’s work together to have this technology serve humanity. Right? Every technology that we’ve ever come up with ends up being a result of our co-creative power with the Divine, from the Vatican’s view. And so let’s continue trying to shape that towards human flourishing, rather than the other ways in which we’re able to shape it as independent actors. Really interestingly, too, the collaborative approach the Vatican is asking of us all—technologists and academics alike—in taking this on is, I think, a great direction, and it resonates with a lot of us as well.
Brandon: Great. Thank you. Are there any points of friction between your views, the three of you? Maybe there are different points of emphasis, but I’m curious if there are questions or points you all have.
Glen: There is one thing on which Jaron and I, I think, see a little bit differently. I don’t think it actually ends up mattering in many cases. But I think Jaron’s first inclination is to talk about the uniqueness of humans and have that really strong emphasis, on some level, on imago Dei, or as we would say in Hebrew, tzelem Elohim. But I tend to place a little bit more primary emphasis on diversity. I certainly see the importance of humanity, but I also see things in nature. I see things potentially in machines or in complex human systems or whatever. I’m not as focused on the human individual as a focus in my mind as much. What I tend to resist about AI is its totalizing, narrow singularity, its here’s-the-thing attitude, more than the fact that it challenges the tzelem Elohim, you know?
Jaron: Yeah, I’ve also detected that disagreement. But I think the reason for it is a matter of our professions, our disciplines. So I’m a scientist and technologist, but the technologist part is what I really want to focus on for a second. You can’t define technology without defining a beneficiary. Because otherwise, there’s nothing there. It just completely evaporates, unless it’s for something or somebody. You can define math abstractly, without a beneficiary. You can define the quest for knowledge in science. I think you can even define art as a kind of art-for-its-own-sake thing. Whether you should or not is different, but you can do all those things. But it’s not even possible on any sensible basis to define technology without a beneficiary. There’s just no way to even talk about it. It’s gone. Technology is for doing something, for some purpose, you know? And so the question is, who is the beneficiary?
Now, I think sometimes a beneficiary should be Gaia or the overall ecosystem of Earth. I’m not saying it’s exclusively people. But in general, if you underemphasize the human being as a beneficiary of technology, you very, very, very quickly slip into technology for its own sake, which is never what it actually is. It’s always technology for the sake of the giant ego of somebody who owns a big computer server. So it turns into this kind of Gilded Age, unsustainable ego trip by a few people who don’t acknowledge it. So you have to define technology as being for people. You have to really emphasize the specialness of people in order for technology to even be defined. Lose people, lose technology. That’s the only way. So I think that’s the reason that we have this different sensibility.
Brandon: That’s great, yeah. Taylor, any thoughts on your own tensions?
Taylor: Yeah, I don’t know. I don’t know if we’ve fought enough amongst the three of us to really determine where I land on that.
Brandon: Well, I think you should start, yeah.
Taylor: Yeah.
Brandon: So maybe, perhaps, if you all could leave our viewers and listeners with maybe one point each on what you see as really the beauty of this technological development, what it is that you all are working on. I know you’re not representing Microsoft, but you are certainly trying to build something there in your various capacities. And so, where is it that you see the beauty moving forward in the work you’re doing, and what particular kind of burden or obstacle do you think is really critical to overcome? Maybe, Taylor, we’ll start with you, and then Glen, and then we’ll end with Jaron.
Taylor: Sure. Yeah, I think this technology throws into sharp relief our ability to understand how we actually think. We’ve found that a lot of the success in using this technology for productivity comes down to certain metacognitive strategies: using it as a helper, rather than having it do your thinking for you. And so, I’d say, lean into your own understanding of your understanding as you work through your use of these tools—both to ensure that you continue to flourish as a human, and to use these technologies where they shine most.
Brandon: Any obstacles or burdens that you think are really critical to overcome?
Taylor: If you don’t do that, you’re going to get dumber, and that’s problematic.
Brandon: That’s already happening, so, yeah. Thank you. Glen?
Glen: There’s an image of science and technology that I think is sort of implicit in the minds of a lot of people that I want to suggest we need to flip on its head. I think a lot of people imagine we’re on the surface of the earth, and there’s a deep ground of falsity and superstition beneath us. We kind of need to dig it out and throw it away to get down to the core of the truth.
I instead imagine that we’re on the surface of the earth, and we’re planting trees. And as those trees grow up into the infinite abyss beyond, we extend the atmosphere, you know? The biggest danger is that there’s too much space, not that we don’t get down to a point. Actually, it’s: how do you even allow the cross-pollination across all those different things so that they can keep growing? But the further we grow out, the closer we are to having nothing at all that we understand, because the more we see of the infinite abyss beyond and the more space there is to grow into. I guess it’s that feeling of the pursuit of a truth that recedes ever further: by pursuing the truth, by extending our technologies, we see even more completely how little we know. That is what I take solace in, I guess.
Brandon: Yeah, Marcelo Gleiser has this book, The Island of Knowledge, with a very similar analogy, where you’re on this island, expanding what you think are the horizons of knowledge. You think you’re going to get to the point where the water has been completely conquered. Then you realize that the further your island expands, the further the water seems to extend, and you never quite get to that end. For some, that is threatening and frustrating. For others, that’s immensely beautiful. And the burden, the challenge, the obstacle, Glen, that you see as a burning problem that needs to be addressed?
Glen: I think that, ultimately, it’s a question of culture and meaning and vision for all of this stuff. I hope that we will come to a point in the Anglosphere where we do have peace and cooperation and integration between that sense of wonder and belief in things we cannot grasp that religion gives us and our sense of building. Because I think we’ll be able to build much more and much better when we can do that.
Brandon: Thank you. Jaron?
Jaron: Okay. On the question of beauty, I think there’s a common idea of beauty as a platonic thing, that beauty is some sort of abstract thing apart. And as you might guess, based on what I said about AI and all that, I think that’s the wrong idea of beauty. The idea of beauty as this abstract, still thing apart from people, which we sort of try to access and approach, might have been functional in the past. But at this point, it doesn’t serve us well. It just has terrible economic consequences. Because, basically, the way computer networks work, they’re very low friction. There’s this thing called the network effect, which gets exaggerated, where all the power and wealth concentrate at the center. And so, basically, whoever owns the network becomes beauty, if that’s what beauty is. All the artists who are trying to make do as wannabes on YouTube are really celebrating Google more than themselves, at the end of the day, which you can see if you look at the accounting.
And so, anyway, the issue is that we have to think of beauty as much more of a connected kind of thing. Beauty is not a thing apart. Beauty is a thing that people do. It’s a thing that is meaningfully created between people through shared faith. That has to be the idea of beauty. I like Glen’s metaphor a lot. And I should mention that as an island gets bigger, there’s more beach, right? That’s the thing: the more knowledge, the more mystery. Part of why I play all these weird instruments is that every time you start to play some instrument from another time and place, your body enters into the rhythms and the breathing of those people. That connection is what makes the instruments interesting. It’s not anything abstract, like, oh, this instrument solves a particular problem. Right? And so you have to think of groundedness, real experience, and real connection as what beauty is—not as an abstraction.
Then the big unsolved problem. Here’s the one I’ll mention in the context of this conversation. Almost all of the kids — I say kids because, I mean, it’s hard to find somebody in a frontier science or engineering group for AI who’s over 40, and there are very, very few. They’re just starting to have kids here and there. But mostly, they haven’t had kids yet. They mostly don’t have a connection to future human generations. They’re mostly, if we’re honest, a little on the spectrum, and mostly male. They mostly don’t think of family or continuation as much of a thing. That’s an abstraction to them. Or if they do, it’s purely biological. Like, “Oh, my genes are great. I’m going to make sure there are a lot of babies that have them,” or something, like our friend Elon.
But the thing is, their ability to speak is through the stories they grew up with, as is true for all of us. The stories they grew up with were not the stories from, oh, I don’t know, American mythology. They were not the stories from the Bible. They were not the stories from literature. They might have been, a little bit, stories from children’s books. But what they mostly were, in their formative years, were the stories from science fiction movies. And so, if you ask why all of the AI people are so enthusiastic about saying, “Oh, we’re building something that will kill everybody. Isn’t it great? Give us more money. Yes, you should have more money, more money. You’re going to kill everybody. It’s great,” well, how could that happen? What’s the explanation for that absurdity? It’s that the myths they grew up with, the stories that form their vocabulary for understanding the world, are not Newton and Einstein. It’s The Matrix movies.
Brandon: Or Tolkien, yeah.
Jaron: Or Tolkien. Well, for some, it’s a version of Tolkien, actually, because those movies were big too. But as far as technology goes, it’s The Matrix movies and The Terminator and so on. Those are the stories that exist. And when you can only tell the world through those stories, those stories become your vocabulary of dynamics. And if the stories you know are limited to a certain kind of story, so are you. So I think there’s a really urgent cultural problem here. The only science fiction that transcended that problem, the only positive science fiction that wasn’t sappy and that was commercially successful, was Star Trek of a certain era: the ‘60s, perhaps; the ‘90s, definitely. The Star Trek franchise has since turned into just a version of a Marvel movie, for the most part. The Marvel movies, don’t even get me started.
So the thing is, we’re giving young people a profoundly impoverished and stupid set of stories to work with. To me, Silicon Valley has failed people a lot, but Hollywood maybe more so, and in a way more innocently. Because I knew the people who made some of the movies I’ve just referred to. It’s not that they were bad people or even lazy people or anything; it’s just that they were working from their particular context. And as it translates into a giant context, it becomes really dysfunctional. We have a big problem with that. That’s the problem.
Brandon: Yeah, thank you. I mean, it is a big challenge with the shaping of the horizons of imagination, right? I think we’re still prey to a kind of logic. The last time I was with Glen, I was talking about this logic of domination, extraction, and fragmentation that governs a lot of the development of our technology and business. I think moving to a different logic of reverence and receptivity and reconnection is really important, and that’s what we see in something like Tolkien, right? It’s a very different kind of logic. You see that tension in the forming of imaginations.
Jaron: Here’s the thing about Tolkien, though. I read the books when I was little, right? I haven’t seen all the movies all the way through, but I’ve seen enough of them to have a pretty good feeling for them. So almost everybody now knows them through the movies, right? And this is, oh God. So my very first gig as a musician was playing music behind — it’s a long story. But anyway, I used to do gigs with — oh, who’s the guy who wrote The Hero with a Thousand Faces? Joseph?
Glen: Campbell.
Brandon: Campbell.
Jaron: Campbell, yeah. And so when I was just a young teenager, like an adolescent, I was doing shows with Campbell. Because I was playing music behind this wonderful new age poet guy, and they would have double bookings. But anyway, his name is Robert Bly. I used to argue with Campbell, even as a kid. Like, “How can you say there’s only one story? Your story is kind of a nasty one, because it’s about this hero. The problem with heroes is that there’s always somebody else the hero has to beat. There’s always this other side to the story. Doesn’t that kind of bother you?” He was like, “Oh, kid, you don’t know anything.” And I’m sure he was right about that.
But the thing is, the Tolkien books have a certain magic and reality to them that is the best kind. They have a kind of nobility or something. I feel like in the movies, it turned more into a Marvel thing: “We’re going to go and kill these horrible demon things. We’re going to go fight. We have fisticuffs and whatever.” And so the version of Tolkien that came out, the one most people know, is maybe not — it’s a more Campbell-ian thing than I think the original actually was. At least as I remember it, the original was a little more charming and joyous.
Brandon: Yeah, there’s a sense of deep magic, a sort of reverence for something you don’t create, right? It’s something that is given to you, that you are in service of. So all of the questing and so on is not primarily about a hero, but rather about a kind of calling.
Jaron: Yeah, I mean, I remember the Tolkien books being kind of more like the Narnia books. Maybe I’m misremembering them. I don’t know. Maybe I have it wrong. But the Tolkien movies were more like a Marvel thing or whatever. You know, not that there weren’t some good things about them, certainly.
Brandon: Yeah, these are critical tensions. Well, I can’t thank you all enough. This has been such a fantastic conversation. How can we direct our viewers and listeners to anything that you all are doing or working on? Taylor, where can we point people to?
Taylor: Oh, certainly. Yeah, I mean, I opine on my Substack on occasion. That’s as good a place as any for encountering some of my work, for sure.
Brandon: Okay. We’ll put that in the show notes. And Glen?
Glen: Glenweyl.com is my website. We also have aka.ms/plural for the Plural Technology Collaboratory. You can find me on X @GlenWeyl.
Brandon: Fantastic. And, Jaron, where can we direct people to?
Jaron: Oh, I have a crappy old website. I don’t have any social media. I kind of operate my life on this idea that people who need to find my stuff will. I don’t really promote myself, and I’m really bad about it. And somehow it works out. So I just ask the wind.
Brandon: That’s right, yeah. Yeah, fantastic. I know we’re past time. But is there any chance, Jaron, that you might be willing to, for 60 seconds, play us something from one of the thousands of instruments behind you?
Jaron: Oh, God. Well, what do you feel like?
Brandon: Whatever you’re in the mood for.
Glen: I request the oud, Jaron, if you have one.
Brandon: Oh, yeah. Let me second that.
Glen: I think the oud is one of Jaron’s favorites.
Jaron: Let me see. The thing about ouds is you never know which oud will be in tune. There’s a famous joke from the composer Igor Stravinsky that harp players spend half their time tuning and half their time playing out of tune. But the thing is, that joke originally comes from an oud book that’s, like, seven or eight hundred years old. So, how in tune is it? Eh, we’ll live with that.
(Jaron plays the oud)
So it’s out of — I shouldn’t. Okay, it’s out of tune.
Glen: Well, that’s great.
Brandon: It’s alright. Thank you.
Glen: It’s wonderful. Thank you so much.
Brandon: It’s still enough to transport you.
Jaron: That’s the thing about the oud that’s just like you’re on, yeah—
Brandon: Amazing.
Jaron: But oh, I think la, la, la, la, la, la, la... I don’t know.
Brandon: Well, thank you. Thank you so much.
Glen: Thank you, everyone. Take care.
Brandon: I can’t thank you guys enough. It’s been amazing.
Glen: Live long and prosper.
Jaron: Okay.
Brandon: Yeah, you too.
Jaron: Bye, Brandon.
Glen: Bye, bye.