We’re living in a moment when artificial intelligence promises to make everything faster, smarter, and more efficient. But at what cost to the fragile, messy, beautiful reality of being human?
A lot of the public conversation about AI focuses on capability: what these systems can do, how many jobs they might replace, how close we are to super-intelligence or AGI (Artificial General Intelligence). Much less often do we ask what AI is doing to our fundamental human capacities for love, grief, and self-understanding. If our tools increasingly mediate our relationships, shape our attention, and even mimic the people we’ve lost, what happens to our ability to face weakness, suffering, and mortality as part of a meaningful life?
Addressing these questions is my latest podcast guest, John C. Havens, who is Founding Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. John helped build the IEEE 7000 Standards Series, now one of the largest bodies of international standards on AI and society. Today, John serves as the Global Staff Director for the IEEE Planet Positive 2030 Program, guiding efforts that prioritize both ecological and human flourishing in technological design. But his perspective on AI doesn’t begin with policy or engineering; it starts with love, vulnerability, and the deep spiritual questions that have shaped his life.
In our conversation, we explore how AI and data-driven systems affect our agency—our ability to choose, to say no, to recognize when we’re being manipulated—and what it means to design technology in a way that honors the inherent worth and dignity of every person. We talk about why anthropomorphizing chatbots (e.g., “How can I help you today?”) is a choice that has ethical and spiritual ramifications, how surveillance capitalism has prepared the ground for generative AI, and why talk of “uploading consciousness” and “superintelligence” often masks an underlying eugenic logic.
We also ask a more hopeful question: if, as Karl Rahner put it, “the inmost core of reality is love,” what would it look like to measure our lives (and our technologies) by gratitude, altruism, and purpose, rather than by optimization? And what unique role might faith communities play in an age of automation, where the temptation to replace difficult human relationships with machines is growing stronger?
You can listen to our episode (in two parts here and here), watch the full conversation on YouTube, or read an unedited transcript below.
Brandon: Hey, John. Thanks for joining us on the podcast.
John: My pleasure. Thank you for all the wonderful work you do on beauty.
Brandon: Thanks. Thanks, John. Well, let’s get started. Speaking of this, speaking of beauty, I usually have my guests begin with a little story about a personal experience of beauty. So do you have a memory that comes to your mind of a profound encounter with beauty from your childhood?
John: I do. The first one that came to mind when I read your email with that question—which I think is such a good question—was when I was a junior camp counselor in Wellesley, Massachusetts, in what must have been—join me in sharing my age—the late ’70s. No, probably the early ’80s, because I’m 56, so it would have been like ’82, something like that. So I was like 14 or 15. I remember I was put in charge of this kid named Ernesto. My fellow camp counselors weren’t jerks—that’s not a beautiful way to talk about people—but they were teens, older teens. They said, “Take care of Ernesto, this little boy”—Hispanic, by his name, and I could tell. I just couldn’t figure out why he was always running so slowly. He was quite small. At the time, I didn’t know what cerebral palsy was.
Brandon: Oh, wow.
John: It was like three days into my taking care of him that one of the counselors pulled me aside. He said, “You know, Ernesto has got cerebral palsy.” Looking back, I’m like, you couldn’t have told me that from the get-go? But it completely shifted my perception. This is what I thought was a beautiful memory. It was mainly about him. I still haven’t ever seen him again, and I kind of—I don’t know. It’s probably like when you meet someone again; it might shatter your memory. But the thing about him is, once that shifted—meaning, when someone told me that and I recognized it—I didn’t really understand the condition, but I knew some aspects of it. Anyway, we became friends. I accepted that his pace was what he could bring. He still worked so hard. When he ran, it was a gift for him to run, so it was a big deal.
Anyway, the real beautiful part is, at the end of every day at our camp, we would walk kids to their parents’ cars. The cars would circle through and pick up each kid as their name was called—“Jonathan Smith!”—and then Ernesto’s mom would pull up. And so, the last day he was at camp, I put him in his car. He spoke Spanish more than English. I didn’t speak Spanish at the time. I do now. As I was clicking him into his car, he said, “te quiero.” It means “I love you.” It still brings tears to my eyes, because whatever level he was at, whatever the condition actually brought, he was still Ernesto. The gift to me, the beauty—pardon my language—was that I had been a dumbass on some level, where I wasn’t being kind to him no matter what the case, because that’s the job of a counselor. The gift, I think, from him to me was recognizing that—maybe he didn’t even know. Then we kind of moved beyond it, and it just became John and Ernesto. And the fact that he said “I love you” at the end of that stays with me to this day.
Brandon: That’s amazing. Yeah, that is truly beautiful and profound. It’s one of these things that, as I was reading your book, Heartificial Intelligence—the value of our humanity, the value of suffering, the value of even living with disability, with illness, and recognizing that all of those aspects of the fragility of our condition still have something beautiful and worth treasuring about them—all of that, I think, is really under threat, right? I’m curious whether you have a sense of what allowed you to recognize that beauty at a young age. Because there may be many kids for whom suffering and disability are things to write off. Especially, there’s a kind of machismo among young men that doesn’t want to recognize and see anything valuable about weakness.
John: Well, I think the word ‘weakness’ is an interesting word, you know. I really appreciate you reading the book. Seriously, Brandon, thank you. Because I don’t know all of the reasons. I wrote it in 2015. It came out in 2016. I still never had someone come up and be like, “Hey, AI ethics. Write about it.”
Brandon: Right.
John: My dad was a psychiatrist. He passed away in 2011. My mom was a minister, and I was an actor for years. So studying the human condition is a pretty big part of whatever, who I am. Then I went through a divorce and COVID. I think, at least for me now, at a very deep level—and by 2016, I had already lost my dad in 2011—weakness is an interesting word. More and more, I think of a quote from a philosopher named Karl Rahner, whom I read about in an encyclical from Pope Francis about love. Karl Rahner’s quote is—I’m just making sure I’m correct; let’s see; I’m going to get it wrong, so I will send you the correct version—it’s basically: the inmost core of reality is love.
I bring that up because I think, for me—I’ll even say I know; I’ll take that risk with my friend Brandon—all humans are more interested in being loved than they are in being smart. In that sense, weakness, when it comes to artificial intelligence—there are all these statements that are really tedious to me, where people are like, “Well, humans make mistakes. Why shouldn’t we trust machines, because they don’t make mistakes?” When, of course, A, they do. Secondly, humans designed the machines. But without being negative toward the potential of the machines, the algorithms, the outputs, the point is to question why. Why are we doing these things? And if the logic is that being physically weak in any way, or being told you’re weak because you’re not as smart as whatever, is the world we live in—which it kind of is, from a key-performance-indicator side of things—that’s a core reason I think I wrote the book, and have been writing similar things since. It’s to defend what I think is not just a tree-hugger, bleeding-heart side of things, but the position of a person who, at least in my case, found that losing my dad and my divorce were two of the fundamentally hardest things I’ve ever gone through. And when you get broken by whatever it is that breaks you, that’s where you really identify what it is, or who it is, that brings you comfort and peace. I love books, I love information, I love AI, I love tools. It was humans loving me that kept me sane. It was my recognition that I’m not only not interested in being perfect—I don’t know what that means. But I figure if I can wake up every day and try to love myself or other people or also nature better, then that’s a good day.
Brandon: Yeah, thanks, John. You talk a bit about — I mean, you mentioned you don’t quite know why you wrote the book. But your career trajectory has been quite unusual, right? You’ve been an actor, a musician, a journalist, and then an expert in positive psychology—I think your first book was on that topic—and now tech policy. Could you walk us through, just very briefly, how you ended up studying AI systems and AI tech policy?
John: Sure. It’s kind of you to call me an expert. I would say I’m a person who is fascinated with it. In my 2014 book Hacking Happiness and my 2016 book Heartificial Intelligence, I do talk a lot about positive psychology and quote from heroes of mine in the space—the Martin Seligmans of the world. I always mispronounce Mihaly—
Brandon: Csikszentmihalyi, yes.
John: There it is. Thank you—who wrote Flow. Barbara Fredrickson. So there are the titans of positive psychology; I’ve learned their stuff and incorporated it into my work. It’d be lovely to be called an expert. Anyway, the point being, I think it was my desire to be a minister in high school—I don’t know why, I mean, if we’re going that far back. Because I think the nature of your work, which I really appreciate, is tender. As a person of faith—meaning, I believe in Jesus, in the sense that you love other people versus judge and condemn them—I really learned about it from my parents.
My dad, although he was a psychiatrist, I recognize when you say someone has anger issues, as people have said about me, it feels very condemnatory just because it’s very vague. It’s like everyone has anger issues. But what you’re saying is, hey, sometimes it manifests in ways that the passion or intensity might throw people. That’s helpful. That’s a useful critique. But I bring that up because, as a kid, I just knew that my dad, when he was home, he kind of watched what we were doing. He was never violent towards me or something, but he spanked and yelled. A lot of times, also, when he came home, there was a sort of like, “Dad got home from work.” It’s quite common amongst humans. But it’s when my mom accepted Christ — I’ll use that term vaguely because that’s not what your show is about. Vaguely, in the sense that she went from just holding a book that she called the Bible and going to church—which we did at a Methodist church in Massachusetts—to… I saw her. She was already an amazing, wonderful human, but her demeanor changed.
Six months later, my dad really hurt his neck. He was sitting with this horrible medieval contraption—this was back in the ’70s. My mom would put a stack of books next to him, the top of which was a book of the Psalms, the Jewish scripture. My dad was very upset one day, leaning into the closet, looking into the dark—you get the metaphor. He picked up this book of the Psalms and started reading. He said he felt his heart change. For the next year or two, I actually felt that he transformed who he was. He used words like “Jesus” and “God” and whatever. But I saw him change his demeanor toward me and others. That meant that when I was 13, I accepted Christ. That’s why I’m giving you all this background. In high school, I did what a lot of people do, I think, with any new faith, any new thing you’re excited about, which is proselytize. I talked too much and didn’t listen. I thought I could convince someone with my words, or by proving things with historical accuracy—a lot of which, especially around New Testament scripture, I learned about in college. Really exciting stuff when you really read any historical document.
Anyway, then I got to college. The college was what’s called Brethren in Christ—very conservative compared to my background—where we couldn’t drink, dance, or smoke. There, I sort of had a wonderful—I’m taking this long to talk about it because my “faith” is, A, oftentimes wildly hypocritical, because I judge and don’t love people well. And secondly, it came from a high school experience where, in a secular setting, I was the Christian geek. Then I went to a place that was hyper-focused on acts—let’s call it “letter of the law” versus “spirit of the law.” I’m being judgmental. There, when people lived their faith, it really inspired me. That’s then what launched me into my acting career, because my acting teacher at that college was this kind of crazy liberal guy. Then I went to New York City. From that point on—I can skip the details, so ask follow-ups as needed. But really, I think what I was gifted with was parents who demonstrated the life that you try to live—loving others. At least my dad, being a psychiatrist, listened; that was his job—50,000 hours of listening to people. And in observing life, I had a sort of natural empathy that made it easier to scrutinize the outputs of our lives. Then I got into things like writing and marketing and PR, which are kind of outputs of that. Then, when my dad died, came the positive psychology side—Hacking Happiness is sort of an homage to him. Then Heartificial Intelligence probably came from my mom and ministry. And now the work at IEEE, where I work now—AI and ethics, and now a focus on sustainability—continues to evolve. Although the last couple of years, with GenAI, it’s become a lot more challenging.
Brandon: How so?
John: This is not about IEEE where I work, so I’m going to put that aside.
Brandon: Sure.
John: I’ll tell you what, Brandon. I think more and more — I’m just going to say it. I think this is the first place I’ve said it to a person versus writing it. I think all use of GenAI, any GenAI, is irresponsible. Now, I’m going to qualify that. The outputs that you or others may use it for—I’m not going to tell Brandon—I can talk about you in the third person—I’m not going to tell you, “You used ChatGPT to rewrite something, and when you look at it, you feel good about it.” That’s your subjective truth. I honor that. What I know, in my experience—especially with data, from my 2014 book that focused on data—is this: humans, certainly Americans, don’t have access to their data. If you know books like Shoshana Zuboff’s seminal 2019 book, The Age of Surveillance Capitalism, that brought so much clarity. The first chapter of that book is monumental in its paradigmatic level of understanding—how, going back to 1776, the nature of consumerism started to form the basis of the surveillance economy. All these big words just mean we have not been trained as humans to recognize how precious our data is. Because it reveals who we are, but it’s not all of who we are. And when other people take it and kind of feed it back to us—GenAI has accelerated so many of the worst parts of these tools.
Fundamentally, I’m still in the Screen Actors Guild. As a journalist and author, I wrote three books, many of which apparently now—with no protections of mine in place—are being used, subsumed by different tools because they’re deemed fair use, which of course is a ludicrous use of that term. With intellectual property, especially for a lot of GenAI designers and creators, the logic of “we need more data for our systems” has nothing to do with getting permission from you or others. I don’t understand, Brandon, why humans of many types don’t recognize this. Actors in the Screen Actors Guild are like, “I make money from this face. I want to protect it,” and then some people go, “Well, all things must change. Don’t hinder any innovation.” And I’m like, it’s a face. You have a face too—I’m fighting for you.
Basically, the use of GenAI tools—and this is before we get into energy and water—largely because they are not narrow, testable tools, I think is irresponsible. So I’m going to keep doubling down on that, because I’ve been talking about AGI being ludicrous and essentially occult for years. With GenAI, it’s more that if someone can’t say how they’re being responsible about it in a way that satisfies not just me but all the people I respect so much in the space—hundreds of them—then my answer is, “Look, I get that they’re cool. I use ChatGPT sometimes—once in a while, not that often.” I get the allure. Especially, in a spiritual sense, with what is produced, it’s harder and harder, I think, for people to say, “Oh, those are the words that I created,” versus, “This is aggregated—really just slop, a morass, especially from synthetic sources—where no one knows how to cite an original author.” By using these tools unwittingly, out of ignorance, not necessarily by design, people don’t recognize that you are still mitigating, lessening, and harming all human creativity. Because anytime you use these tools, whatever the result is, you can’t identify where it came from. More and more, you start to go, “Did I write that?” I read my books from years ago and I’m like, “Oh, that’s pretty cool. I guess I wrote that.” I know I did.
Anyway, that’s a long answer. There are a lot of other reasons. But basically, there’s the leadership of the companies, too. They always talk about when AGI is going to come, and they basically remind people pretty overtly that they are not happy with humanity as it stands—and yet large parts of society are giving them so much power to do exactly that.
Brandon: Yeah, there’s a lot more I want to double-click on in these themes. Let me ask you some questions about some of the issues that you’ve raised. I mean, even 10 years ago, you were writing about just this sort of complex, seductive allure of AI systems. One of the risks you pointed out was that our desire for introspection, our capacity for independent thinking, our ability to even appreciate the benefits we already possess were at risk. Could you say a bit about how it is that AI threatens—especially when it comes to these questions of faith and spirituality—our capacity for introspection, for even just reflecting on who we are, and creeps into our basic sense of humanity? How is that threatened by these systems?
John: I think most of it has to do with agency, which I’ve talked about at different times. I think it’s a really challenging subject, because having agency around the concept of AGI is a challenge. The example I’ve been giving, at least to myself and some others recently, is the concept of theater. As an actor, I got very used to performing, and I was very conscious of when the lights were going down. Because as a professional, your job leads up to that moment. You’re already working. And when you’re backstage, the sound of humans watching something is really interesting.
I only did one Broadway show, but in that Broadway show, I played harmonica. I would roll across the Richard Rodgers Theatre stage on this wooden platform on wheels. Playing harmonica while bouncing was really challenging. But the thing that was just exhilarating was looking out and seeing—I think it was 1,200 or 1,600 people—all looking at these dancers where the spotlight was. I knew that no one would be looking at me. Even though the eye senses motion, they were looking where the spotlights made them look. So I could just watch these people unencumbered, just staring at a show. I bring all that up to say: in a theater or a movie theater, when the lights go down, no one gets on the microphone and says, “Okay, everybody, don’t be freaked out. The lights going down are a symbol that you’re about to see a show.” It’s been the invitation, for millennia, to catharsis. This is not real. We have that agency. It’s a cultural phenomenon. And even in cultures or places where they don’t lower the lights, when a show starts, or you’re about to read a book, there’s some kind of inhalation—spiritual, I say, though I don’t necessarily mean religious in one particular faith, but the sort of conscious attunement to nature or to something outside yourself. Music is very similar. Because we know those things, we don’t have to say them anymore.
Now, in my field, when you see a white page on a screen and there’s a rectangular box that says, “How may I help you today?”, I can list literally about seven things immediately that are not engendering agency. They’re actually manipulating. For instance, on ChatGPT you have to scroll down to the very bottom, below the fold—you don’t see it right away—to the line that says, “ChatGPT may make mistakes. Check your results,” which is not genuine disclosure. When you use anthropomorphism—“How may I help you today?”—there are people who are like, “Oh, we’re used to these tools.” I’m like, we aren’t. We aren’t. I mean, if you’re 56, people my age, maybe you are, because of your experience and your age. Kids aren’t. Young people aren’t. A lot of people aren’t. They just see, “How may I help you today?” And so I bring all that up to speak to the spiritual side of the loss of agency. The other example I’ve been thinking of is: if you walked into a room and there were 12 people, and someone had a sign saying Buddhist, someone had a sign saying
Christian, someone had a sign saying agnostic, and someone had a sign saying Deweyan—you mentioned John Dewey; that’s because of my wife—whatever: words that were symbols you might find attractive or not attractive. But you stood there, and that choice to look around—I’m deeming that agency in modernity. Whereas right now, there’s a legal concept called a contract of adhesion, which means one can’t operate except in the world into which one is given or invited. And so, to say to someone, “Well, you don’t have to use these tools,” is like saying, “You don’t have to use the Internet.” It’s not just that it’s unrealistic and inaccurate. It’s manipulative, and it’s part of a larger design. Giving someone tools—many of which I’m still trying to create; there’s a standard at IEEE called 7012, and I can tell you about it in a minute—is not me trying to tell Brandon or anyone else, “Here’s how you should feel.” It’s essentially saying, “Hey, when you go to that web page for the first time, that white page, those designers and that company have the opportunity to offer a first-time onboarding structure. Boom.” First time at ChatGPT, you know these things. It’s very simple. And to say they didn’t know is absolute—if I can swear on your show—absolute horseshit. There’s so much from Stanford’s behavioral design work and whatever else—it’s design 101.
When you know what works in terms of manipulation, and you use it to get your tool out in the world, and then later go, “Oh, I didn’t know,” that’s either ignorance at the level of massive irresponsibility, or it’s pre-aware obfuscation by design. It’s the type of thing I’ve been fighting, or trying to remedy, for years. Sherry Turkle has said this—another hero of mine, who wrote the book Alone Together. She’s one of the leaders in the space of awareness and understanding how to use these tools well. Once you tell someone, “Hey, here are the things we disclose: A, B, and C—are you aware of the anthropomorphism?”—however you describe it so they get it—you give them agency. In that example, you come back the second time and you know those things. She has a great quote about loving—at the time she wrote the book, it was about loving robots. She used the phrase, “We’re ready for the romance.” Humans like being manipulated—or I should say, they like being told stories, with morals. But if you don’t give them the chance to even have agency, then eventually, from the business standpoint, you get the lowest common denominator: you’re harvesting their information, and they’re not going to be useful anymore. Then, more importantly, from the larger way that I live my life: every person has worth. Every person should be given the human right, the legal right, to make their own decisions. Right now, the answer is: they absolutely aren’t.
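To make the design point concrete: the genuine disclosure John describes is essentially a gate that runs before the first prompt, not a disclaimer below the fold. Here is a minimal sketch of that flow in Python—hypothetical function and message names, not an actual ChatGPT feature and not the IEEE 7012 standard itself:

```python
# Sketch of a first-run disclosure gate: the user sees plain-language
# disclosures and acknowledges them before the chat box accepts a prompt.
# All names and wording are hypothetical, for illustration only.

DISCLOSURES = [
    "This system is software, not a person; first-person phrasing is a design choice.",
    "Outputs can be wrong; verify anything important.",
    "Your prompts may be stored and reused; you can opt out below.",
]

def first_run_gate(profile: dict) -> bool:
    """Show disclosures once, up front, and record informed acknowledgment."""
    if profile.get("acknowledged_disclosures"):
        return True  # returning user: agency was established on the first visit
    for line in DISCLOSURES:
        print(line)
    answer = input("Type 'I understand' to continue: ")
    if answer.strip().lower() == "i understand":
        profile["acknowledged_disclosures"] = True
        return True
    return False  # no acknowledgment, no chat


if __name__ == "__main__":
    profile = {}
    if first_run_gate(profile):
        print("Chat enabled.")
```

The design choice is the ordering: disclosure precedes interaction, so the second visit—Turkle’s point about agency—happens with the user already knowing what the box is.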
Brandon: Yeah. I mean, is the core issue here the anthropomorphizing? I mean, if you had LLMs that, say, gave you prompts in the third person—“Bot 365 is ready for your questions”—as opposed to “How can I help you?”, does that take away the problem? Part of my understanding is that the anthropomorphizing is helpful for creating a sense of attachment to this entity, right? If you think you can build a relationship with it, it becomes addictive in the same way that Facebook and all these other things are. I mean, is that part of the issue—that there’s this illusion created by the first person?
John: Yeah, I mean, that’s kind of the core reason. Your use of the word ‘entity’ intrigues me because I like to use the term ‘systems.’ Now, that said, I try to avoid telling other people how they feel because that’s not really relevant. Meaning, I might have misinterpreted you or anyone else, but they can just tell me. Because there’s a lot of people who believe that algorithms now or the systems comprising the algorithms are sentient or alive. And in the same way I believe in Jesus, I’m not here to judge that.
However, the disclosure around the third person, when not given—I know this as a journalist—initially, it used to irritate me to disclose certain things. Because I’d feel like, oh, I’m going to mitigate the effect. I never really wrote anything where it was like, I work for NBC, and I have to disclose that because I’m writing about NBC and they’re paying me. But you disclose things, and you’re like, “Ah.” It’s like a magician showing their trick. I’ve got to disclose it. I get it. I get that concern. Valid. But then, when you don’t disclose something, and you find out from a reader that not only do they feel fooled, but they think there are legal issues or whatever else—it’s hard. Disclosure is hard. But in this case, it’s not. It’s not. Everybody knows. I mean, when ChatGPT first came out, I was the ethicist going, “Stop using ‘I.’ Stop using ‘I.’” It hasn’t really changed. And what happens is, you’ll get the language—like you just said, the tool uses the third person in an effort at distance or whatever—but usually, within two prompts, it will say “me” or “I” or “we” again. That’s in English. I only read a little bit of Spanish, so I have no idea what the nuances are there. But even the phrase “natural language” is misleading. The reason it’s such a big deal is because when I read—I don’t know, I can’t think of a good analogy—I get lost in books. I read different writing. I know I haven’t written it, but it feels like I could have. But there’s that separation: now I’m reading something by Brandon, or I’m reading this book about Marshall McLuhan, or whatever it is. I’m really into it, but I know the difference. When I write something myself—like I said, I read stuff I wrote years ago—I may forget that I’ve written it, but I can trust that I wrote it. And I see the citations where I’m quoting someone else, because I don’t remember. My friends are like, yeah, you’re probably copying other writers inherently, the same way a musician copies B.B. King’s licks or whatever else. I think there’s this homage logic. But also, I can’t copy Emerson. If I quote him directly without citing him, that’s called theft, you know?
So all that is to say, the anthropomorphism is just one of the tools, but it is kind of the biggest one. Because I think it’s a spiritual issue. I don’t mean Jesus, or Allah, or whatever. I mean, you go to that white page. It’s very ritualistic, very stage-like, like a proscenium. And through all of modernity, you and I grew up through pre-internet days and all of that—all these signals, going like this, picking up this thing. All these signals became such a part of who we are. Now, when a kid, a young person, opens a screen, they just see that box, and “How may I help you today?” is the entrance. As for anthropomorphism, anybody saying, “Oh, it’s natural to anthropomorphize”—it is. But from a design standpoint, when that’s known, not disclosing it is an overt tool that is harmful, you know?
Brandon: Yeah. Let’s talk about values. I mean, that’s one of the key points you make in your book: that it’s really critical to explicitly codify our own values to shape both AI systems and our flourishing. I mean, there’s an approach to values clarification which is, in my sense, a way to simply track your individual preferences, whatever they might be. That implies a certain kind of relativism, right? You’ve got your values; I’ve got mine. But that doesn’t seem to be the kind of thing you’re talking about. My sense is you’re borrowing your friend Constantine Ogdensburg’s theory of values dissonance, which suggests that unhappiness results from not living up to our own values. Could you say a little bit about why it matters that we recognize the values at play in our own lives, and why those values need to be codified into our AI systems—not as an afterthought, but in their design?
John: Sure. I’m glad you mentioned his name, Constantine. I haven’t seen him for years, but he was always complimentary to me. He was one of the geeks I interviewed for my 2014 book about the quantified self. A lot of times, people are like, “If you scrutinize yourself too much with all these different tools, you lose the beauty of your life,” and all that. I find, more and more, it’s a test. It takes a certain amount of time. It’s a way to recognize what we care about. So in his case, if memory serves, he wanted to spend more time walking his dog. He said, “I find joy in time with my partner, walking my dog, being in nature.” He figured that out by taking—I think it was a month, maybe longer—and not just journaling, but really scrutinizing everything he did. This is the aspect of the advertising regime around our actions, the surveillance economy, that, when shared with a user in a positive way, is beautifully illuminating. Hey—sleep app, relationship app, whatever it is, sentiment analysis, emotional awareness, looking at your facial cues—all these things that we just don’t see because we’re not built that way. When they’re aggregated with insights, they’re wildly helpful. That’s what he taught me. Then the values work that’s in the book—some of it is pretty simple, like spending time with your family. I joke about this a lot, but it’s like, no one I know comes back from a weekend and, when asked at work, “How was your weekend?”, says, “Well, great. I was more efficient in my time with my kids. Three weeks ago, I spent four hours with them. Last week, I only spent an hour. But I maximized my love time with them,” right?
I’m glad you laugh, because it’s sort of supposed to be ludicrous, but it also shows—not from you, Brandon—that we, as humans, are not supposed to measure those things. Yet caregiving is the main thing left out of GDP, gross domestic product. Caregiving, pragmatically, means acknowledging women, children, and nature. That’s one of the major reasons we’re in the Anthropocene—why our planet is suffering so much—because we don’t measure caregiving. I’ve had a lot of people over the years say, “Well, if you measure caregiving, it’s going to harm it and mitigate it.” I think it’s the opposite. First of all, there’s the isolation from COVID, extending now with these tools—a lot of these chatbots, et cetera; isolation tends to increase with heavy use of certain chatbots, noble as the aspirations may be. Back to your values question: for me, at least, one of the hardest things about values is also asking, does it work?
I wrote a book on measuring your values and then got divorced. Now, I’m not going to talk about my divorce or my ex or anything like that. But I certainly wondered, “Do I have credibility talking about emotions or whatever else?” The short answer is, I don’t know. I think credibility has to do with the person looking at me. I can’t make myself be credible to someone. But I will mention that, in one sense, I wish I hadn’t gone through the divorce, mainly because of my kids. But then I wouldn’t have met the love of my life, Gabrielle, who I’m married to now, and I wouldn’t have gone through an experience that I would wish on nobody. Divorce—it’s interesting, like when you watch TV shows. I got divorced twice. Everyone’s journey in pain is unique and different. But at least for me, categorically, if someone asked, “Would you prefer to get shot?” Yes. “Do you want to lose a couple of fingers?” Yeah. Because the pain is so lasting, and there were so many aspects of my values that I then had in question. I wrote a book on values, and I wondered: did I do something wrong? Because I was in a situation where it seemed like I wasn’t doing the right stuff.
Anyway, now I’m at a place where I come back to that quote, “The inmost core of reality is love.” Because, at least for me, going through that tested my ideas around values and tracking values—useful as I still think they are. The harder value I ultimately had to work through was my faith. Meaning, do I think that God—in my case, Jesus—is real in a way that held through that experience? Do I feel that my faith is fundamentally what kept me from coming unhinged? The answer is yes. So today—hopefully without sounding like I’m proselytizing—I’d share that an examination of values is where one sees that their faith—capital F, small f, whatever—is how, when they get out of bed in the morning, they recognize they can keep going. That, again, goes back to love for me.
The final point I’ll make in this answer is: I’ve posted a diagram on LinkedIn about that statement, “The inmost core of reality is love.” Because as a geek, I’m like, is that the web reality that you and I are on now? Hypertext? Is it virtual reality and augmented reality, which I’ve written about a lot? Is it the spatial web? Is it a new set of protocols? Is it GenAI? Is it data? Is it our dreams? I think about dreams a lot. Being 56, you wake up and you’re like, “Oh, I had a dream.” It wasn’t real, but it was. Something happened in your brain. You woke up, and it stayed with you. So that’s a reality. Death, life, spirit. I love that Karl Rahner said this. If, for every one of those realities, the inmost core—did I phrase that right?—is love, then seeking that love as a value is probably the core of what I’m trying to do. Because I guarantee you—John, especially in the last couple of years, you’re not the guy you want to be. How do I live by following my values? Unless it means finding someone as amazing as my best friend and wife, and also leaning on her, sometimes too much, in terms of love. But if that helps. I really appreciate you asking the question, because a big part of the book—which is definitely still the same, and I want to make sure to say this to your viewers and listeners—is this: I know. I won’t just say I believe. I know that every single person in the world has worth, inherently, because you breathe. And so, asking “What are my values?”—if no one has asked you to ask that of yourself, you are worth the time. The book has some examples of values—things like family, work, et cetera—where you can start to ask yourself: What is my purpose in the world? Am I living by these things that I think are bringing me value? And if you test and you know that they are, amen. And if you test and you realize, I’m stressing myself out doing 70-hour work weeks, and I think I’m losing time with my family—which might be harming me ultimately—then my answer, especially from my position, is: take the time you need now.
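The month of self-scrutiny John credits to Constantine, and the values test he describes here, boil down to a simple comparison: log how you actually spend your time, tag each entry with the value it serves, and hold the totals up against your stated priorities. A toy sketch of that dissonance check—the categories, weights, and hours are invented for illustration, not taken from either of John’s books:

```python
# Toy values-dissonance check: compare stated value priorities with how
# logged hours are actually spent. All categories and numbers are made up.

stated_priorities = {"family": 0.4, "work": 0.3, "health": 0.2, "nature": 0.1}

logged_hours = [
    ("work", 52), ("family", 6), ("health", 3), ("work", 10), ("nature", 1),
]

def dissonance(priorities: dict, log: list) -> dict:
    total = sum(hours for _, hours in log)
    actual = {category: 0.0 for category in priorities}
    for category, hours in log:
        actual[category] += hours / total
    # Positive gap: a value you profess more than you live it.
    return {k: round(priorities[k] - actual[k], 2) for k in priorities}

print(dissonance(stated_priorities, logged_hours))
# -> {'family': 0.32, 'work': -0.56, 'health': 0.16, 'nature': 0.09}
```

A large positive gap on “family” is exactly the 70-hour-work-week signal John says is worth testing for.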
Brandon: How do you see these values? I mean, once we recognize what they are, I suppose I have a couple of questions. One of them is whether there are certain objective values, let’s say, for lack of a better word. Is love perhaps a core value that all human beings ought to make a priority? Or can some people say, “Look, my value is to minimize discomfort or something along those lines, or to maximize technological progress,” and that’s okay, right? There are some people, I think, who would say that something along those lines is more valuable to them. Maybe receiving love from another human being is too painful, or not as seductive; it doesn’t have that attraction, that power, right? And so some people might value power in some form or other. How do we sort through that, especially in the development of these AI systems? Do you think it’s possible? I mean, my sense is you’re saying that you can’t build in values or ethics post hoc. There are already values baked into these systems, and you have to recognize what those are and build them in right from the get-go. Could you speak to that as well, and to what love might have to do with the creation of GenAI systems and so on?
John: Great question. I mean, the systems—the ones that I really feature in my last two books—are economic systems. There are paradigms that, like the lights going down in a theater, I didn’t really know the meaning of until someone told me about Gross National Happiness from Bhutan. That stemmed, apparently, from a speech by Robert Kennedy not too long before he died, where he talked about the things we measure and the things we don’t. Beautiful speech. I forget what university he was at. Kansas? He said, “We’ll measure advertising, but we don’t measure the time with our kids.” I’m paraphrasing, but it’s a beautiful speech, kind of what I just said a minute ago. The guy made a great point: what you measure matters, and so does what you don’t.
In the States, how much do we pay teachers? How do teachers feel right now about GenAI in their classrooms? Were they given a certain amount of money to test these tools? Were they given instructions on how to test these tools? Are they being kept in schools as humans teaching, or are they being compared to standardized tests or other things, which may have value? But where I don’t think GenAI was loving, by any stretch, was in how it was introduced. A lot of technological tools just came into your consciousness, and all of a sudden they started invading—or if invading is too strong a word, changing—things. Anyway, economic systems are the biggie.
Then there’s the paradigm of DEI, which our current administration is challenging—this is a tough subject to talk about, I recognize. But it was only in the last couple of years that I even started to understand what white supremacy is. Obviously, I’m speaking for myself here—all this stuff is John, not IEEE where I work, by the way. I learned from a lot of people in my work, from countries not in the West, how dominant Western thinking and power are—Silicon Valley dominating a lot of tech narratives and regulation and all that. The EU AI Act is fantastic. But the shrug—that shrug wasn’t about the EU AI Act. The shrug was: worst-case scenario, Meta gets fined a billion dollars, and Zuckerberg says, “Eh.” He says he doesn’t care about the EU AI Act. So the power structures behind a lot of these tools—that’s the part where I get very depressed and sad a lot. Because the systems are sort of like, “Hey, this gets introduced. You have no choice.” That’s why agency is such a big deal to me. And by the way, I’m not going to stop using Google. I don’t really use Facebook—I’ve had these accounts for years—but I’m not going to stop using tools. If I’m given agency and permission—especially having worked in advertising and PR—that gives a voice back to a consumer or a citizen in ways that we haven’t had for the past 15 years of Silicon Valley.
So with all these systems—I will just say it—they’re not built with love. That’s not how they’re designed. I worked in PR. It’s not a joke, but I say it a lot: no marketing funnel ends in abstinence. Period. You don’t take a marketing class that ends that way. Say one of my clients used to be Gillette. Brandon, you’re obviously a very well-groomed, put-together guy, so I’d be like, “Oh, I’m going to go after him. He’s an influencer. What’s the tool? Hey, try out my razor.” These are not evil things. I want people to use this stuff, right? But P&G, which owns them, is brilliant with these ideas: find someone who could use your product. Do they use the product, yes or no? If no, then it’s about creating awareness. If yes, do they like it? If they don’t, you send them free stuff so they have to try it. Then you want them to recommend it to a friend, right? This becomes formulaic for the entire undergirding of the system, which is advertising. Google is still an advertising company—that is how they make most of their money. They’re not a search company.
So how the design of the tools could turn toward love, in my opinion, is first of all, fundamentally, about data. People tend to forget that AI systems are built on data—human data. Imagine if, when you went to that page—I’ve already kind of explained this—there was genuine disclosure from these companies. “Hey, welcome. We’re the designers of OpenAI. These tools are really powerful. We know, based on our experience”—they wouldn’t necessarily have to cite the Stanford professor who teaches behavioral design—“we think this is a really cool way for you to learn about stuff. And when you prompt, you’re going to ask questions. You’re going to learn a whole new paradigm of how to get back words. When those words are put together, we can’t guarantee it, but we’re pretty sure you’re going to be mesmerized. It’s going to feel like magic. But it’s not magic.” Then, at different points, if they had that tool and regularly didn’t just give me blog posts that no one really reads—except geeks like me and the tech outlets I used to write for—boom: “Hey, user. We’re thinking we want to get more data, because a large language model needs a lot of data. We’re maybe going to go after thousands and thousands of books by authors. Do you think we should try to reach out to all those authors and get their permission, and maybe even give them some money? What do you think?” And I answer. When they come back, they can tell me what the survey of their users said. But then, are they going to use that as justification? They probably might. You asked this before, and I know we’re getting near the end of the hour, so I’m trying to stay positive and helpful and pragmatic in my responses.
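The funnel John sketches from his PR years is, in effect, a small decision tree—awareness, trial, preference, advocacy—with no branch that ends in “stop marketing to this person.” A minimal rendering of that logic, with invented state and action names, just to show the shape:

```python
# The classic marketing funnel as a tiny decision tree, per the logic John
# describes: every answer routes to another marketing action, and no branch
# ends in abstinence. States and actions here are illustrative inventions.
from typing import Optional

def next_action(uses_product: bool,
                likes_it: Optional[bool],
                recommends: Optional[bool]) -> str:
    if not uses_product:
        return "create awareness"      # ads, influencers, PR outreach
    if likes_it is False:
        return "send free samples"     # try to convert the detractor
    if recommends is not True:
        return "prompt a referral"     # turn the user into an advocate
    return "upsell and retain"         # the funnel never returns "done"

print(next_action(uses_product=False, likes_it=None, recommends=None))
# -> create awareness
```

The structural point is that every return value is another marketing action—which is exactly what “no marketing funnel ends in abstinence” means.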
Brandon: Sure.
John: I think I mentioned in the book C.S. Lewis, who’s one of my heroes and who started off as a massive skeptic of Christianity. I love the skeptics who convert. By the way, Marshall McLuhan converted to Catholicism—one of my favorites in the world. But C.S. Lewis says it better. There’s a term I’d call “moral absolutism”—I might be misquoting him. But I think he pointed to the example of getting a seat on a bus, if memory serves. If I were walking toward an open seat at the end of the bus and someone 10 feet ahead of me sat down in it, I’d be disappointed, but I wouldn’t be incredulous. If I was about to sit in that seat—my butt hovering over it—and someone shoved me out of the way, then, beyond the physical pain of being shoved, why do I feel incredulous? It was my seat, right? So there’s a kind of genetic-level thing there.
But working in the AI ethics space, I believe there are moral absolutes—around children, for instance. Child trafficking, certainly, and sexual exploitation. And giving parents or caregivers a way—we have a thing at IEEE that I am very proud of, from early on, called age-appropriate design. That really is the idea of a wonderful, amazing human in the UK, Baroness Beeban Kidron. It’s basically about creating agency for people taking care of kids and saying, “Look, whatever the age is—16, 15, 14—what’s the nature of how kids or young people are approaching these tools? Can we empower the people teaching kids for the first time?” My kids are in their 20s now—one is almost 20; I don’t want to give their ages away, but my kids are that age. So it’s completely different for people who have kids now. All that is to say, if someone says to me, “But there’s this culture where it’s okay to beat children”—I’m using an extreme example, but I like to: moral absolutism. Ultimately, the value of taking care of kids, of taking care of nature—those are moral absolutes, and they’re being deprioritized. We have a paper at IEEE called Prioritizing People and Planet as the Metrics for Responsible AI. People can Google that. I’m proud of that work. But anyway, I’ll wrap up there. Thank you.
Brandon: Well, let me ask you a couple more questions, if you don’t mind. One is on the dystopian theme running through your book—it seems like the first half is somewhat dystopian and the second half a lot more positive and hopeful. On the dystopian side, which builds on a couple of things you touched on in this recent answer, you talk about a couple of scenarios. In one, you have this imagined scenario of your daughter dating a robot and the accusation of something like flesh-ism, where it’s possible to imagine the difficulty of human relationships leading us to prefer a relationship with a machine of some sort. Similarly, there are the concerns about creating AI representations of deceased loved ones—a child or a parent who passes away—why not just create a digital avatar to let that person live on, right? Both of these, it seems, have a certain kind of beauty to them. They’re seductive, anyway. Could you speak to what you see as problematic at the heart of these, and perhaps how they are not fundamentally in accord with love, even though they might seem to be?
John: Well, I think, first, it’s grief. As an American—maybe as a guy, and as an American—grief, for me, growing up (again, I’m 56) came with plenty of stupid movies, whatever else. Like: guys are strong. So it’s not just about not crying. It’s about avoidance of grief, and of the rituals surrounding grief, or the lack of them. You still see movies that are like, “Oh, we’re going to bury grandpa and take care of the funeral”—American grief in general is very dismissive and fast. For a couple of weeks everyone is really sad—I’m being generally hyperbolic—then the person’s gone, and it’s left to the people who have to deal with it. When my dad died, I mourned for an entire year. I wasn’t sure what to call that at the time. So I think, certainly, one thing is just this: if you ignore grief, you miss a lot of the human experience that most people face all the time. And if you bury it, I think it’s going to be harmful no matter what. Meaning: that’s what taking on a parent in an AI form can do.
I forget the title, but there’s a book about a guy—it came out years ago—who did that with his dad. It was a very helpful book, because ultimately what he realized in it is what I think most people will come to realize. The New York Times covered this recently with another guy who filmed his father. The AI version of your loved one is obviously not going to be them. So you will—slash, we will—face the reality of a whole new thing, which is: this entity—I’ll use your word from earlier, and I think it’s appropriate in this regard—this new thing, not really a creature, is mimicking aspects of a person I love. But because of hallucinations and errors, combined with synthetic data or whatever else, it’s just not him or her. So are you avoiding the grief that you’ll need to face anyway? Versus, like—I have recordings of my dad, where I filmed him telling stories about five months before he died. They’re very hard to watch. But now I’m at the point, since he’s been gone for over 10 years, where I can watch and have sort of a smile along with the tears. There’s that. Someone can say, “It’ll work for me,” or, “It works for me.” I read people saying, “I’m in love with this chatbot. I have a version of my dad.” That’s where I’m really challenged. Because it’s easy for me to say, “No, it isn’t him.” But they say it is, and I can’t deny someone their subjective truth.
Then there’s the cultural side—Japan, animism, and different indigenous traditions—where there are, to your point, very beautiful, loving, spiritual aspects to things like this. It’s just that, A, it’s so new. And B, by and large, it all comes within the paradigm of surveillance capitalism, which is sort of like white supremacy—different in a lot of ways, but not in others. A power structure that any of these experiences still happen within. So with that person who’s like, “I’m experiencing my dad. I love it,” I feel like a jerk. Because I’m like, “Hold on. You feel what you feel. But let me tell you about all the other things.” And I’m like, is that going to make him feel better about losing his dad? But there is a possibility—a possibility of data agency, as I mentioned, where we could experience these things differently. Then there’s the value piece I didn’t mention: the TESCREAL bundle paper. The reason it’s eugenics is that a lot of the leaders of GenAI quoted there are utilitarians, whose logic and belief as an ideology is, “We are only made up of our consciousness and our cognitive selves. Let’s put ourselves on machines and go into space.” Or we’ll be on Mars, you know. But this is while they dig their bunkers, and Zuckerberg has his massive underground compound in Hawaii. I think that when utilitarianism says, “We think the future of the human race is X,” while they know that their actions are accelerating the larger loss of life for the planet and the majority of people today, then, I’m sorry, that’s immoral. There’s no way to justify—from a deontological, certainly a virtue-ethics standpoint—the loss of 8 billion or more people on the planet with “don’t hinder innovation.” That’s eugenics. Let’s call it what it is.
Brandon: Yeah, and it seems like the extension of that logic, too, is that at some point it might come to be seen as irresponsible if you don’t upload yourself into an AI mind clone or something, right? The way the logic of these technologies develops brings with it a kind of value system, which, I think you’re right to say, has a deep-seated eugenicist bent that a lot of us aren’t seeing. Could you speak, maybe very briefly, about the solutions you’ve proposed? You mentioned virtue ethics, but you also draw a lot on what you call the GAP solution: gratitude, altruism, purpose. How might cultivating virtues actually help us better live out this future driven by AGI? And could you maybe even add a word about what all of this means for people of faith, for faith communities—what innovations might be needed in churches or other faith communities as they try to respond to the development of AI systems?
John: Well, first—I wrote about this on LinkedIn—AGI is a faith-based belief. I like saying that whenever I can, because technically, it’s speculation. There’s no agreement, whether it’s Hinton, who’s quoted all the time, or Zuckerberg, whoever. AGI is an idea—superintelligence, ever since Nick Bostrom’s book came out, right? Because what’s going to happen? There’s no pragmatic explanation. Like, hey, AGI arrives two years from now—Altman changes the date all the time—but what does it mean? You and I wake up, and we get a text? “Hey, I’m Steve. I’m AGI.” Then the other thing that’s overtly harmful—and seriously, I don’t say this as a joke; Brandon, you’re a good person, and if this resonates, then I think it’s true—for your viewers: watch the messages that come with those letters. The medium is the message—this is why I love McLuhan so much. What are the messages when those three letters, AGI, come up? Guaranteed, in English, the words you’ll see: race, competition, when we’ll lose jobs. And then—I’m going to swear again—the absolute horseshit of what AGI will do for most people with jobs. Newsflash: the tech industry is firing more people because of AI as of late. And outside of whatever severance packages they might get, there’s no long-term guarantee—at least in the States—of health insurance or money.
I think that’s a horrible message. Having been an actor who lost jobs a lot, and then having been let go a couple of times—I’ll talk about myself: you lose your job, you’re going paycheck to paycheck, then debt builds up and all that type of stuff. And there’s no knowing. It’s not like, “Hey, hold on. Wait a second, buddy. Here she is, Sheila—AGI Sheila. She’s going to be there for you, buddy. She’s going to write you a check.” Because then we talk about universal basic income, which I’ve written about for years. These are all solutions that are just words. They’re interesting, but I’ve been going to conferences on these words for 11 years or so. When you lose a job, when you lose a marriage, who is there to help? Who is there to give money? A lot of times, it’s faith-based institutions. By the way, Alcoholics Anonymous is another wonderful faith-based institution—a lot of people think it’s about God and Jesus; it’s really not. I’ve been to a couple of AA meetings. I have never felt—well, a couple of times—a community of strangers more than when I thought, maybe I’m having a couple too many glasses of wine dealing with my divorce.
I’m just trying to say these things to be real, Brandon, because I appreciate your work on beauty. But gratitude, altruism, and purpose—the GAP thing. Gratitude is really hard, especially when one is going through something really hard. In my case, not to dwell on it, but the divorce was so isolating. Normally, I’m very grateful for my kids and such. But I recognize how hard it is to be a parent when you want to be needy and have your kids do work they’re not supposed to do as kids—they’re supposed to be your kids, not your therapist. So the gratitude there was for friends, for my mom. Leaning on gratitude keeps you in the moment. That’s the thing about a lot of faith-based practices: Buddhism, meditation—swimming, for instance. When I was really suffering, swimming became something. I had read this wonderful book—her name will come to me; the title is something like When Things Fall Apart; I forget her name. She talked about Buddhist meditation and breathing. And, for me, swimming. That’s all you have to do. The pain didn’t go away. It’s just something you start to recognize.
Altruism—a lot of this stuff seems kind of selfish, because it sort of is, but it’s self-oriented in the sense of healing. When you help someone else—in my experience, but the science also says this—you kind of forget about yourself. You have these blissful moments where you’re outside the pain, right? The thing about grief and pain is that you can’t get away from your own stuff—oh, the divorce, whatever it is, right? Then you help someone else, and for those blissful moments you see some kind of connection. Maybe you’ll help them; maybe you won’t. Maybe they’re like, “I didn’t need the clothes,” or, “Stop bugging me. I’m fine,” whatever it is. But you’re trying. And in those moments, people can see: for whatever reason, this human is reaching out to me, and this beautiful moment of electric consciousness happens. I do it a lot when I travel. Stuff as simple as, “Do you want to get in line in front of me?” In modernity—like on an airplane—people react like you’re doing something wrong. “What? Okay. Thanks. Sure. How are you doing?” I love it when you’re on the phone with a human and you know it’s a human. “Hi, this is Wells Fargo”—my bank—or USAA Insurance, whatever. “You’re being recorded. This is Sheila,” whatever it is. And you’re like, “Hey, Sheila. How are you doing?” That is altruism. You wonder why? Most people don’t ask anybody in those jobs how they’re doing. And it feels great. Sure, people are like, “Are you doing it to feel better?” Yeah, because I want them to do it to me. But also, whether or not, it’s still—
Brandon: There is a real connection. It is a recognition of the value of the other.
John: Yeah, and then the purpose is kind of what we're talking about here. Is there a reason for living? Some days, I don't feel that way. But most days, at some point, I'm like, I'm so blessed to have friends like Brandon. I'm doing amazing work, I have my kids, whatever else. That GAP logic is helpful because the gratitude gives you a sense of recognition of your own worth. The altruism keeps you focused on others. And usually, by that point, the purpose is pretty evident. Where you can pursue work that brings you joy, wonderful.
Brandon: Could you say maybe one more word on the theme of what faith communities could do? I think this is another area where we have to really ask, "Who is really in the best position to shape the way we approach the development of these systems and the way we live with them?" There is a danger that even faith communities will simply chase after the latest fad so that they don't feel left behind. But is there an innovative role they could play in a society that becomes increasingly automated and driven by some of the logics we've talked about?
John: Definitely. Well, it's been maybe a year now. Wow, time flies fast. I joined as a volunteer expert with a group called AI and Faith, which I'd recommend your folks check out. I can introduce you to some of the people there. The guy who started it, his name will come to me. David? It'll come in a minute. But he started it years ago, and what I really appreciate is that he's not focused on one faith. It's not a Christian organization or a Jewish organization. There are a lot of opinions, including from transhumanists. By the way, that's a very general label; it's like saying "Christians." It's such a broad group of people. I learn a lot from transhumanist friends. Like any group, it's the ones who cross lines and force things politically that are the problem. But most of them are like, hold on.
I worked at a church for years when I was an actor, Trinity Baptist Church on the Upper East Side in New York. You learn a lot about faith-based institutions when you see how the sausage is made, as it were. Running a church means trying to get people to come. There are things like tithing, and there are a lot of issues around tithing when you ask for money: well, you've got to keep the lights on, and there's charity, but what does that mean? Then when people come, how do they register? You get their data. And then there's the seasonal side of things. It's like being an actor: you work when everyone else doesn't. In faith-based institutions, Sundays or Saturday nights or Fridays, you're working. It's work. So first of all, a lot of times "faith" and "community" are interchangeable words when you say blank-based institutions. And wherever there's an opportunity to have people come together around a shared, positive communal offering, I think that's a fantastic community to go to and ask, "How are you using these tools?"
What's interesting is, I did go to an event in Dallas, a very helpful event with a lot of amazing people, run by a group called Missional AI. That is a Christian-based organization. And I'll be honest: as an ethicist type, I was freaked out. It's not about the people there, and this is not about Christianity. But I can't tell you how shocked I was at how many people put up screens that essentially said, "Hi, I'm so-and-so," except it was, "Hi, I'm Abraham," or, "I am Jesus." Basically, they didn't know about data, or disclosure, or whatever else. So in the different groups I'm involved in, that's my first message. Listen: it's hard enough for me when people don't understand anthropomorphism in a general-purpose tool they use for everything. When someone's first introduction, not just to a church but to the potential of any faith, is "Hi, I'm blank," then all the scary stuff that could be there comes along with it.
But then the other part is, someone might be led to think that the tools themselves are spiritual. And the people proffering those tools may believe it; that's a different conversation. But where there's a sort of ignorance, not out of stupidity, but just, "Hey, ChatGPT gives us this wrapper. We can build this tool off of it," I'm like, I sat there. I sat for four or five years, not even that long, as an office manager. People came in off the street, needing money, needing food, coming in on the weekend, in a city with millions of dollars on Wall Street, right? Connecting with humanity like that is uncomfortable.
Then there's also a reality that I believe, having been in the biz and having proselytized in high school: I recognized why that can be "effective" but is not genuine. Ultimately, you share tools around what someone can believe. You share scripture, if you want to call it scripture, depending on the edition. You share whatever. But ultimately, I believe, it's these communities coming from a place of love, where part of the love is, "Hey, we know eventually you're going to make your own decision." Otherwise it's kind of a cult. When I first came to New York, I remember it was tempting. Somewhere along the way as an actor, a very attractive woman, Christine, I think her name was, said, "Hey, do you want to come to a party?" She named a church. I won't name the church because it's still pretty well known. I'm like, "Sure, cute girl. I'm doing great. I just came to the city." I went to the party, and it was essentially the cultish, evangelical side of Christianity, where people were like, "We want you to get baptized." And I'm like, "I was baptized." They were like, "We've got a bathtub filled with water. You're going to go get baptized." Weird music playing, wine-like drinks, but not wine, being passed around. It was all this stuff where I'm like, "Hold on. I believe in the guy. I went to college." They were like, "Hmm."
Anyway, all that is to say: when community is an invitation, that blissful, blessed moment happens. AA is one of the places I felt this the most. It wasn't a "we have a solution for you because you're broken." The thing that felt so good about AA, though "good" is not the right term, "healing" maybe, was that it was a given, and this is what I take from reading New Testament scripture too, that we're not broken in the sense of being less than. When we make mistakes, it doesn't mean we're worthless or evil, even if we do "evil" or sinful things. A phrase like "All have sinned and fall short of the glory of God," you can take that into, oh, flagellation and whatever else. I'm not trying to mock any religion or faith-based belief. But it's the sense I had when I mentioned that story of Jesus and the woman at the well. Most of what I take from that is a sign that, no matter what we do, we are loved. I'm going to use a father term, though it doesn't need a gender: Yahweh, Daddy, Abba. There's this entity in the universe that loves us so much that we have free will, and with it the opportunity to love others as God loves us. That's my belief. Perfection is a strange term to use anyway, unless one is perfected, as it were, in the act of trying to love, and in recognizing the need for love, where usually one person is more in need of that love and the other is more able to give it.
And so this is what I love about AI and Faith and other organizations: they're trying to bring these conversations into work settings. Not to proselytize for a specific faith, as it were, but to say that if conversations and ideas from around the world, Ubuntu ethics, whatever, are not brought into business settings, then a lot of the issues facing not just GenAI but the GDP thing won't change. Because it's usually the faith-based institutions that are saying, "What about caregiving? What about love? What about community? If we aren't measuring those things in our day jobs, maybe it's time to change." So faith-based groups, I think, may have the biggest standing to say, "Hey, while we're talking about AGI and our consciousness being put into boxes and going off, can we also talk about Allah? Can we also talk about Buddha?" And if the answer is no, no, no, then that's a really good indication that this conversation is not going to bring innovation to humanity; it's going to do the same stuff we talk about all the time. That especially goes for indigenous cultures, for women, for people who too often aren't at tables like these.
Brandon: Right. Thanks, John. Where could we direct our viewers and listeners to your work, to IEEE’s work on this?
John: Well, thank you so much. I'm pretty active on LinkedIn, for all of its problems and positives. I use my middle initial, C, so: John C. Havens, as you have it written here. For IEEE, if you Google "IEEE" (I with three E's) and then the letters "AIS," you'll come to our main page, which has a lot about our AI and ethics work. A big compendium called Ethically Aligned Design is one of the core things I helped drive; about 800 people took part in that over the course of three years. The page also lists the standards we work on. The other project I've been working on the last few years is called Planet Positive 2030, and if you Google "IEEE Planet Positive 2030," you'll find it. Those are my two main areas of work at IEEE. Then, thank you, my last three books are on Amazon. If you type my name, Heartificial Intelligence, Hacking Happiness, and Tactical Transparency are my three traditionally published books.
Brandon: Great. Excellent. Thanks, John. It’s been a delight. I’m very grateful for your time, for your wisdom.
John: Well, thank you. And I'll end by saying what I started with: I really appreciate your work on beauty. You're taking such a beautiful, unique angle on a word that, at least for me, is oftentimes put in too small a box. So thank you for expanding the paradigm of that idea of beauty with your work.
Brandon: Well, glad that it resonates. That’s the goal. Yeah, thanks, John.