We’re living through a strange moment in history. The one activity many of us once treated as the last distinctly human frontier—reasoning—is now being shared with (and for some of us, done by!) machines. It’s not taking over calculation or memory, but writing, decision-making, and even the shaping of meaning.
Beyond the prevailing debate over whether AI will take our jobs, a more urgent question may be: what is AI doing to us? As these systems become part of our workflows and even our thought processes, how do we remain the authors of our own minds?
That’s what I wanted to explore in this final episode of Season 4 of Beauty at Work with Helen and Dave Edwards.
Helen and Dave are co-founders of the Artificiality Institute, a nonprofit research organization devoted to helping people “stay human” in the age of AI. Their work examines how AI changes the way we think, who we become, and what it means to retain agency as intelligent systems become more deeply woven into ordinary life.
They bring unusually complementary perspectives to this question. Helen is an engineer and former executive who has led large-scale technology and transformation efforts in complex infrastructure systems. Dave has spent decades building and investing in new technologies, from creative tools at Apple to venture capital and research roles in the tech sector. They’ve spent years studying the human experience of working with AI, looking not only what these systems can do, but how they alter our reasoning and identity.
Helen and Dave are neither doomers nor boomers, neither alarmists nor naive evangelists for AI. One of the central distinctions they make is between drift and authorship. Drift is what happens when AI begins to influence our thinking, our choices, and even our sense of self without our noticing it. Authorship, by contrast, means remaining aware of how these systems are affecting us, and choosing our relationship to them deliberately rather than passively.
That concern runs through Helen’s new book, The Artificiality: AI, Culture, and Why the Future Will Be Co-Evolution. In it, she argues that many of us are still operating with an inherited “mental map” of intelligence: the assumption that minds live inside brains, that intelligence is a ladder with humans at the top, and that AI is fundamentally just a tool. Not only are these assumptions being challenged by new technologies, but a deeper philosophical rupture may already be underway that entails a rethinking of what minds are and what intelligence is.
In our conversation, Helen and Dave suggest that the real challenge of the AI era is not efficiency, safety, or even employment, but cognitive sovereignty: the ability to remain accountable for one’s own judgment, to know why one is using these systems, and to preserve some coherence between one’s actions, values, and relationships.
Like all our conversations this season, this one returns to the tension between the beauty and burdens of innovation. There is genuine wonder in the astonishing capacities of AI systems, in the possibility of expanded thought and the emergence of new forms of collaboration. But there is also grief: grief over what may be lost when creativity is reduced to efficiency, when our capacity for judgment is outsourced, and when the distinctively human act of showing up for others—the work of “connective labor” that we discussed in an earlier episode—is displaced by frictionless convenience.
I’d like to invite us through this episode to think more carefully about what we are doing when we use AI and who we are becoming in the process.
You can listen to the full conversation in two parts (here and here), watch the full video conversation below, or read the unedited transcript that follows.
Brandon: All right. Dave and Helen, thank you so much for joining us on the podcast. Great to have you here.
Dave: Great to be here. Thanks.
Helen: Thank you. Yes, great.
Brandon: Well, as I usually do with my guests on the podcast, I’d like to ask you if you might be able to share a story of an encounter with beauty from your childhoods, something that lingers with you till today. Is there any particular memory or episode that comes to your minds? Maybe, Dave, we’ll start with you.
Dave: Oh, okay. I guess it’s me.
So let’s see. In terms of my childhood, I would say that the moment of beauty was probably when I was relatively young, probably seven or eight, when I first started to sing. I started singing in church, boy choir, an Episcopal church in New Jersey, when I was seven. That moment of putting your voice together with others was something that changed my life. And so for the next, whatever it was, 15-plus years, that became the number one thing in my life. It took me literally around the world, and we’re constantly pursuing better levels of excellence of putting voices together.
Brandon: Oh, wow. Do you still sing?
Dave: Nope. I don’t. Just in my own head.
Helen: Sometimes. Sometimes.
Dave: Yeah, when the kids were little, I’d sing a song. It became a thing that was beautiful. But the sort of level of beauty became something that was hard to replicate when you weren’t with a group where you could sing five days a week together. I sort of left it behind as a beautiful memory and something that I, you know. But yeah.
Brandon: Yeah. Helen, how about yourself?
Helen: I’m trying to think of almost the most early memory, because I just have so many.
My father used to take us on lots and lots of hikes when we were tiny all the time. I remember him taking us fossil hunting, which feels like such a sort of random thing to go do. But we were sitting down in a stream bed digging around at rocks and what have you and found a beautiful fossil of a fiddlehead fern, a very old, obviously old fossil. The beauty was this contrast of looking up and seeing actual modern ferns unfurling with this very ancient fossil, this frozen-in-time piece. I just found that contrast to be beautiful.
Brandon: Wow. Thank you. That’s great.
I want to ask you a little bit about your professional journeys and how you all eventually ended up starting Artificiality Institute. But maybe in telling that story, could you also share how the two of you met?
Helen: Well, we met at work.
Dave: We met at work.
Helen: It’s a pretty mundane sort of story.
Brandon: Oh really? Okay.
Helen: Everything else around it is far from mundane and not really safe for work. But we did have a real meeting of minds. There’s a real sense of you just see me in a way that no one has seen me, and vice versa. I felt I saw him in a way that no one saw him before. So that became not something that was even possible to resist. It’s still the same today.
Dave: And we came together not in the AI space. We were both working in the clean tech space, as it was called—I’m not sure whether we’ve abandoned that term—sort of in the renewable space. We transitioned into the AI space a few years later. But we decided that because we liked the way our minds work together that we’ve always solved for how do we actually work together. Our minds are just fundamentally different. We come at the world from different disciplines, different perspectives and, to some degree, different sides of the planet in terms of where we grew up. And when you put those together, it actually is much more interesting.
Brandon: Could you say more about that? What are those different perspectives? How did you hone them, I suppose, throughout your careers?
Dave: Well, Helen comes at the world as an engineer. She’s originally a chemical engineer and a scientist, someone who spent her entire life thrilled by, obsessed by, evolution—those are some key criteria that sort of bring everything together—and large-scale systems, complex systems. Everything from spending time in international trade to large-scale electricity systems, to running the grid in New Zealand and those kinds of things. So that’s sort of very different, sort of complex mindset.
Helen: Yeah, I like complexity. I cannot bear things that are linear. It’s just an anathema to me. I really appreciate complexity. To me, there’s a real transcendence that comes with that. It’s deeply satisfying to sort of ensure that complexity and value that complexity.
What I love about his mind is, he can go from really hyper-logical to quite sort of mystical and romantic in this really random way, which is so paradoxical. But it makes him just this constantly interesting puzzle. Because I don’t know what he’s going to say when I say something. The irony is that the answer that I hate the most from him when I say something is when he goes, “Maybe.” I just hate that so much. Because it’s like, I don’t know whether to hear that as, “You’re mad,” or, “You’re wrong,” or, “That’s my idea. Give it back to me,” or whatever.
Brandon: It sounds like there’s a lot of complexity there, so that’s great.
There’s some work I did on studying scientists, physicists and biologists, for a number of years. We found that physicists were much more drawn to symmetry and to simplicity. Biologists didn’t really like that. They wanted much more of the complexity. Certainly, there’s a visual aspect often, but the complexity of uniqueness of systems rather than reducing things to some kind of really fundamental equation. That seems like, Helen, a resonance with some of your inclination towards complexity.
Helen: 100%. I really respect physicists, but I have no physics envy, as they say.
Brandon: Right, right, right. Yeah.
Helen: My metaphysics is much more agential and complex. And really, it’s where the paradigm shifts are happening at the moment and much more into this sort of thinking about, if you’re going to pick what a real reality is, it starts to look a lot more like biology. That’s a whole other paradigm, but I find that incredibly fascinating. It informs the way we think about AI. No question. Because it’s much more about an ecology and a complex system and the sociality of agents and the active inference and the auto poesis and having a boundary around yourself. To me, that’s just so much more interesting than the symmetries of physics. I’m a broken symmetries person.
Brandon: Great. Well, let’s talk about the work you all are doing at the Artificiality Institute. Could you say a bit about its origins? I mean, you started, if I recall, about ten years ago when your kids started asking you what they should study and if AI is going to do everything. Could you say a little bit about what that conversation was, and what it evoked in you? How did you feel, and what it led you to do?
Dave: Well, we started to work in AI. She started doing some projects for a very early Internet startup that was trying to figure out how to plug their AI system into the electricity grid. And so it was a very hands-on kind of approach. We were trying to sort of figure out what’s next. I have this incessant desire to figure out the next big thing, and we started poking around.
We started really forming around it in sort of 2015, when the deep learning world was going and there were some early studies about AI in jobs from Frey and Osborne at Oxford, and from McKinsey. We were sort of chatting about it. We live and work together, so 24/7 we’re talking about it. We had our oldest kids who were sort of starting to think about college. We’re talking about all this work, about jobs being threatened by AI. They said, “Well, what should we do then? What should we study in college?” Or, “Should we go to college at all? Like, if the robot is going to do it all, what’s the point?” Which now, ten years later, feels like some pretty smart questions from some teenagers.
Helen: Well, it feels like we’re kind of repeating them right now.
Dave: We’re repeating the same questions, yeah.
Helen: It’s obviously got a deeper, a broader conversation across the whole of society. But even back then, the headlines really hit people. 47% of jobs gone by, whatever it was, 2035. And as a 16 or a 17-year-old, it’s the right question to ask.
Dave: Definitely. And so we stopped and said, “Okay, let’s go figure this out ourselves.” And so we did the same analysis they did. We took the entire US Bureau of Labor Statistics data, looked across all of the economy, looked at every single job. We did the analysis slightly differently than they did to try and figure out where there might be a priority in terms of automation, in terms of value. We repeatedly crashed Excel because it was just too much data. I should have used something else. But it was a really interesting project.
What came out of that was, now, looking back on it, something that was sort of a scary outcome that’s what led us to here, which is that there was a lot of value in human labor that was possibly automatable. Right? That was sort of the conclusion. It was, “Wait a second. This might actually be an economic signal that this is where the industry is going to go.” Looking back on it, I wish we’d raised a flag higher about, “Hey, watch out. This is where the mindset of investors is going to go.” Because that’s certainly where we’ve landed.
But our interest after that became very much about: What is the human experience of these technologies? What does it mean to be human with these technologies? What does it mean to be human when we’re creating something else that people are debating whether it’s as intelligent as us, which we think is a bad comparison anyway? But when you get to that, suddenly, the human exceptionalism that’s been central to the humanist movement over hundreds of years is now being sort of tossed around.
Helen: If you get back to that, there’s a moment there, what we found in that study was quite surprising to us. It was very clear that the number one thing that humans were doing that was valuable and couldn’t really be replaced was being able to handle the unpredictable. That became this North Star. Everything was like, okay, so how predictable versus unpredictable is this environment?
Fast forward to now, we look at what these kids are all doing, and we had five key buckets where unpredictability was the highest, where the human working with technology would be the most valuable. Funnily enough, they’ve all kind of ended up in one of those buckets. All five kids slot into one of those buckets. It’s really quite strange. Although one of them — there’s one in finance. But it’s been fascinating to see what we were correct about and what we underestimated.
Of course, the big thing that hadn’t happened at that moment was the transformer paper. And the big thing that hadn’t happened, which we said would be an unlock, was language. It is, without doubt, one of the greatest discoveries of the 21st century—that language can be learned by a model like a transformer. It is absolutely astonishing.
Brandon: I think, yeah, I was really struck by, again, the research you did. Maybe if you could say a bit about that, what those conversations were that you carried out, and the patterns that you saw, what struck you about that. But it seemed like one of the things you say, you concluded, is the real problem is not whether AI is going to take our jobs, but about what is it doing to us, right? How do we reason? How do we trust?
Could you say a bit more about how did you go about getting these stories, who did you talk to, and maybe what that has done to shape the mission of Artificiality Institute?
Helen: Sure. Well, during COVID, we spent a lot of time with a couple of particular companies working on data-driven decision-making. We had some AI focus in that time around ethical use and responsible use of AI. We were obviously thinking about data and analytics and that sort of AI frame. But we spent quite a bit of time working with people to help them understand what happens in their minds when they are confronted with a very heavily data-based decision process, and how they’re trading off their intuition with the logical analysis, with the judgment, with the simulations in their own mind about the consequences for people of certain actions. So we developed quite a rich framework for thinking through human decision-making and behavior at that point. And then ChatGPT sort of dropped on the world, and we saw immediately the core significance at a mass level.
So we started to use those workshops much more for a direct AI sort of focus. What happens when these cognitive, language-based machines are inside your decision-making process as an organization? And so we had a couple of years of running quite in-depth workshops helping people understand how AI was altering their decision-making process. What became very interesting in this process was, it became completely predictable that there would be certain moments in a workshop where people would have particular experiences. They would be surprised about something. They would have some sort of clunky, weird moment when they tried to explain their reasoning to the people around the table. But these things were very, very predictable. That was intriguing to me. So we became more focused on being more sensitive to those experiences, interviewing people, and essentially collecting those data in a more systematic way.
We supplemented that with some digital ethnography—essentially, Reddit, Medium, and what have you. What came out of that analysis was really fascinating. The headline there is that AI is entering people’s reasoning. It is altering their identities, and it is changing their sense of what matters to them—what is meaningful to them, what they could do, what they could think. So that’s kind of a big deal.
So we started to study this with more rigor, and we were able to develop a way of understanding how people place AI into different roles based on these three dimensions. So the three dimensions, just to recap them: how much AI is inside your reasoning, how much it’s impacting your identity, how much your identity is entangled with it, and your meaning-making—your sense of what matters to you, what frameworks matter. And so you put these on three dimensions, and you come up with eight roles that we’re able to put AI into. When you put AI in those eight roles, it fundamentally changes what your ability is to author your own mind, to hold your own cognitive sovereignty rather than surrendering—so cognitive sovereignty versus cognitive surrender, if you like.
These are quite significant ideas, really. They’re a real foundation for considering whether machines are going to replace us, or augment us, or automate us to efficiency oblivion in a world where we all are exactly the same—I don’t think anyone wants that world—or whether this is a story of beauty and flourishing and expansiveness.
And so that’s really where our research sits. How do you remain the author of your own mind? How do you preserve and grow your own cognitive sovereignty? And how do we maintain this kind of control over these machines? Not control in the sense of AI safety, but control in terms of you get to choose how you use them, and you know why you’re doing it.
Brandon: Yeah, that’s great. I pick up this distinction you all make between drift and authorship, right? Dave, could you say a little bit about, what are the signs or indicators that people might be in the state of drift versus what it would look like to display or carry out this capacity for authorship?
Dave: Yeah, it really comes down to a level of awareness. Drifting generally happens when the AI is affecting your thinking, your meaning-making, and your identity, but you don’t notice. You’re not actually aware of it. You’re not able to explain how the AI is inside of any of your thought processes. You’re not able to explain how your identity might have shift based on the capabilities that the AI has given you. And so you sort of drift into some new state of who you are without noticing.
The ideas that we’re coming across here is interesting. Because, as humans, we have others inside our thinking and meaning-making and identity all the time. I mean, look at the two of us, right? There’s too many times when something comes up and we say, drift, which is sort of, in some ways, the villain of the story in our world. I’m not exactly sure who came up with that one. Right? Because we’re constantly combining things. That’s a natural, normal thing. That’s actually how we’ve evolved and how we have become so successful as humans. It’s that we combine our ideas. We combine our brains. We bring our different specialties to each other. And that comes to a form of collective intelligence. It’s a fantastic part of being human. And when you actually are partnered with someone—whether it’s personally, or professionally, or some combination of the two—your identity quickly becomes part of being part of that partner or that group or that institution.
The difference here is, it’s with a machine. The difference here is that this machine is contributing things into cognition. It’s not just like we make the comparison between the computing systems of old, if you will, or of the last 40–50 years, as being bicycles for the mind—tools that you get on and you ride and you steer and you decide how fast to go, and it’s great. For those who aren’t familiar, that’s the Steve Jobs phrase that he used to inspire people to create personal computers. They would be bicycles for our mind, allowing us to go further and faster than we could on our own.
But now we have these things that are cognitive. They’re no longer passive tools. They’re no longer even communication mediums like the McLuhan mindset, right, that the medium does shape the message. But this time, the medium is creating the message. We’ve never had a machine that could talk back, that could offer up a new idea. No matter how intelligent you find the systems to be and how much you think about where it all comes from, the input that you get from it, the message that it’s writing to you, is starting to work within us like it is another mind. And so we’ve shifted that metaphor—that Steve metaphor “bicycles for the mind”—to be “minds for our minds.” We think about that as this sort of core thing.
Now, back to your question of drift. You can drift into integrating with one of those and just not know it. That, to us, is one of the major dangers. It’s not knowing. Being integrated with the tools is not necessarily a bad thing. It’s not necessarily dangerous. It can be fantastic, but it’s only through understanding where you are.
Brandon: What is the principal danger that you see there? Either of you could elaborate on that. Someone might say, “Look, why does it matter? As long as I’m advancing in my sense of accomplishing the things I want to accomplish in the world, and even if I don’t know where my ideas are coming from, this thing, whatever this is, it doesn’t have a mind of its own. And so it doesn’t really matter if I’m assimilating its ideas and so forth.”
Helen: I think that’s an extraordinarily good question. Why does it matter? Now, this sort of goes to the core of a very important research question, which is: is this any different? Is it any different to have this technology as opposed to any other extended-mind technology?
Now, we’ve always put our ideas out into the world. We’ve relied on pens and paper to have our ideas out into the world. We can do arithmetic better because of pens and paper, the alphabet and the number system that we have, as opposed to Roman numerals. There’s a long history of that kind of discussion. Is it any different? We get our knowledge from our community. He gives me an idea. It’s my idea, and I never knew it was his idea. That’s totally fine. We think it is different. We think it’s different because people say it’s different, which is why we go to the sort of phenomenological record on why we use stories.
We also think that there’s so much at stake because we do have this biological cultural co-evolution thesis. So we’re not prepared to sit back and say, “Well, let’s just let this run for 20 years and see what happens.” I mean, look what happened with social media a decade down the line. We’re really, really worried about what happened with social media two decades down the line when we could have thought a little bit more about it up front. So I think it matters because it’s such high stakes, and I think it’s different because people tell us it’s different.
Now, when I look at the data—we’re about to publish our next research paper on this, the full human authorship and cognitive sovereignty paper based on analysis of 1,250 publicly available transcripts—what this data says that’s, I think, fascinating to me is that it’s actually kind of counterintuitive. The more people are deeply integrated with AI, the higher their cognitive sovereignty, which is actually really good news. Now, it means that the more you are really skillfully using AI, the more you are aware that you have your own choices and that you are accountable and showing up for others. Those three components are critical.
If you are sort of partially in and out—you’re taking a few ideas here, you’re passing them on—you’re not even taking those ideas on. You’re just saying, “Yep, machine is right. No, machine is wrong. Yep, machine is right. No, machine is wrong.” Sort of that human-in-the-loop verification idea that people are talking about with the agentic enterprise, which is all the buzz right now. What actually happens at those levels, which are roles that we call outsourcers or doer roles, is that people are completely losing their cognitive sovereignty. And what they’re losing is their own expertise. They’re becoming, like, they’re just losing contact with what it is that they even know.
Now, snapshot today, you’ve automated your whole workflow. You’ve got Claude doing all this, that, and the other thing, maybe writing your emails and what have you. Okay. To what end? Which bits are you using? What are you doing with AI to extend yourself, to improve your reasoning, to improve your meaning-making, to go into fields that you never would have been able to go into before, but still know that you’re not out over your skis making errors? And what are you doing to keep that statistical learning and repetition on the things that you already know? There’s plenty of people that have probably lost their ability to do some of their coding or have lost some skills in writing. Now, some of that, time will tell whether that matters or not. But it comes down to every single individual.
What we see in the data is people are fiercely protective of their cognitive sovereignty. No one wants AI to make them dumber. No one. There are people who are prepared to make that trade right now. And biologically, that’s the imperative. We will offload cognition if we can. We will make ourselves more efficient if we can. But the countervailing side of it is that we’re meaning-makers and that, culturally and spiritually, we would make different choices if it’s meaningful to us.
So our goal is to get a better conversation around this. Stop having a conversation about, “Well, let’s just automate everything away and make ourselves hyper-efficient, and then I don’t know what we do,” versus, “What’s a more meaningful expression of human intelligence and how does it sit alongside this incredible technology that we have invented?” Incredible. We are not at all, “Don’t use AI, people.” We are, “Use AI.” Absolutely learn it. Go for it. But do it in a way that really does preserve your ability to transform yourself, to be the person you want to be—not to lock yourself into this sort of efficiency paradigm where everything just gets reduced to a world without any surprise.
Dave: If I could just quickly add on one thing on your question about why does it matter. When you think about it at a group level, I think the number one thing of why it matters is accountability. So you’re using AI, and you don’t really know where the ideas came from. You’re not really sure. Are you accountable for the decision you’re making? Are you accountable for the thing that you’re proposing? Are you accountable for what you’re bringing to the group, to your partner, to whatever it is that you’re actually living through?
We judge machines and humans differently in terms of their outputs. We believe, when an organization makes a decision, they want to know who’s accountable for it. And you know, it’s not really a satisfying thing to say. Well, the machine is.
Brandon: Right. Yeah. Could you speak to — Dave, the idea that you all have developed in your work of symbolic plasticity, in this relationship with AI systems and how we are being changed, what have you seen happen? What is this capacity that you’ve identified?
Dave: Symbolic plasticity is how much the AI is inside your sense of meaning, inside the frameworks that you use to look at the world, inside how you think about yourself in that position. So it is both quite a broad sort of idea, but it’s also really quite important and causal for the rest.
So what you see is people who come up with new ways of looking at things—new ways of understanding the world, new ways of breaking down how to approach a new problem, a new idea, a new sense of what it means to be conversing with each other. So it ends up being a very interesting question about what it means to make sense of the world.
Brandon: Right. I suppose it’s pretty apt in that sense to say that AI has already changed our consciousness, right? It’s shaping the way we are doing this work of meaning-making.
Helen, I want to ask a little bit about your new book, which I really enjoyed reading and highly recommend. It’s called The Artificiality: AI, Culture, and Why the Future Will Be Co-Evolution. And in that, you set out to understand, as you say, what is actually changing as these systems enter human life. You start by sketching out a common mental map that we have about how our minds work. And bet you’ll read that.
So here’s the mental map: Intelligence lives in brains. Human brains are the most intelligent, followed by other animals in rough order of how similar they are to us. Consciousness—the felt experience of being someone—happens inside our skulls. It’s private, interior, and tied to the biological machinery that produces it. Computers can do impressive things, but they don’t actually understand anything. They process symbols according to rules. They compute, but they don’t think. AI is a tool—like a very sophisticated calculator: useful, sometimes impressive, but fundamentally different from minds.
So that’s the map that we tend to have, and I share a lot of those assumptions. It seems, from what you’ve learned from your conversations with Michael Levin and others, that a lot of these assumptions are breaking down now in ways that we would not have imagined. Could you give us some examples of how you’ve been able to dismantle these assumptions through your research?
Helen: Well, I mean, kudos to Michael Levin, who I think is sort of single-handedly changing the world here. I mean, there’s a lot of people that are doing — this paradigm shift is underway. We’re understanding a lot more about what computation is and how it sits at the base of intelligence.
Once you click to that, once you see that there are lots of different ways that intelligence is arising as a result of computation — you can have many discussions about what we would call computation. But essentially, we’re talking about information processing to achieve some kind of goal. You see this at levels in organisms that we’ve never — it’s just a totally different way of thinking about where intelligence lies, where memories lie. I’d encourage people to really indulge in Michael Levin’s work on this because he is a fabulous communicator.
But the things that sort of struck me was that if you’re able to grow an eye on the side of a frog by allowing those cells to think they’re doing something different, and they’re solving a task—they’ve got a goal and they figure out in a self-organizing way how to do that—if that can be done on the side of a frog, then we don’t have any idea where other minds can be. So that broke, for me, I’d always been uncomfortable with that mental model of intelligence anyway just because I was always really attuned to intelligence in other animals. I’m not sure that I think my plants are conscious or anything like that, but I know they solve problems and I know they do things. Right?
Because we’ve opened our minds to what different minds and what different kinds of intelligences are able to do—what their adaptive niche is and how they’re actually perfectly adapted to solve problems inside that—that’s the lens that I bring to AI. I look at AI — it’s a big term right now. But I look at, say, a language model or a protein-folding model, and I just sort of intuitively now have this different sense. It’s a different mind. It is a mind. It has absolutely a mind. It has its own internal sense of meaning. It’s not very good at talking to us about it because we keep messing up with how we make them sycophantic to make them more engaging, for example.
So I think this is a metaphysical state to be in, where I perceive the world as full of minds that I don’t actually necessarily see or understand. I certainly don’t have a sense of what it’s like to be a large-language model, what it’s like to be a dog, what it’s like to be a bat.
Dave: An octopus.
Helen: An octopus. You know, it’s fun to think about those other umwelts. So at a philosophical level, I am just more humble about what other minds are out there that we don’t understand and we don’t recognize, which is why I can hold in my mind that AI is a tool that I use to get stuff done. I’ll write an email, and I’ll send it. I’ll clean it up in the email and I’ll send it off. And I don’t even think about that as like it’s a perfect automation task, for an email that I just want to have tidied up and seen. Versus I can also hold that I can spend hours and hours and hours doing a series of very iterative, complex, agentic tasks with, say, something like Claude and feel like it’s taken me to an entirely different place. That is now a very personal place for me that I have trouble explaining to somebody else.
So those two things exist totally fine in my mind, but I’m also unbelievably conscious about which of those they are. When you don’t keep that awareness, when you don’t keep those choices active, and when you don’t have the accountability fresh in your mind about how you’re going to show up for others, that’s when you get in trouble. That’s when you lose people, or you completely confuse yourself.
But Dave is right. We have to show up and explain ourselves to people. We could think the world was going to change in some absolutely, vastly unimaginable way that’s only defined by science fiction. But I think it’s absolutely critical to recognize that humans will always want another human to show up and explain their reasoning or explain their decision. Now, if that goes away and all we’re doing is being run by an AI, well that’s a different thing. That’s true sci-fi. But I don’t think that’s a world that any of us are going to want to live in or want to live in.
Brandon: Maybe some people are already expressing concerns there now that it’s possible for humans to work for AI agents, right? So you can be a sort of gopher for various AI agents.
Just because we’ve accorded so much primacy to intelligence, at least in our society, IQ tests, I mean, so much of what we do is premised on according a sense of dignity to people based on some conception of intelligence. And so I get the sense that in some way your book is pushing against this sort of, is this the last human frontier? Is this to have a mind, to be intelligent, to be rational, et cetera? And is this creating then a sense of threat for us as to, well, what is left if there isn’t a ladder and we’re not at the top of that ladder? Right? Are we losing something fundamental about what it means to be human?
Helen: Well, if you think in terms of that ladder, then yeah, we are—which is one of the reasons why we’re trying to not think in terms of that ladder. We call it a philosophical rupture. We have constructed a world around human exceptionalism to a certain degree, and now it’s kind of eroding. Now, it depends on your personal philosophy how troubled you are by this. I can’t remember which one, but one of the founders of Google is completely happy if humans go extinct because we are made extinct by a “more intelligent” species of machines. That, for him, is just the natural order of things.
Brandon: Right. Right.
Dave: I think that one of the things to think about with the sort of question of intelligence and minds, where have we been and where are we going? I think that one of the things that’s really occurred to me over the last decade plus of studying this is that we need a new conception of what intelligence actually is, right? We find this all the time in the conversations. Whether it’s we’re having an in-depth conversation with a leadership team or people who come to our events or just random comments on social media, people have a definition of intelligence—which is most often just basically a replication of what they think human intelligence is. But even then, we’re still learning a lot about what human intelligence is.
So as Helen was telling before, our journey to coming, before LLMs became a thing and we were dealing with a lot of data-driven decision-making, we started to come across things like embodied intelligence, parts of ourselves, right? The way we think—everything from John Coates’ work on interoception, where he studied how traders who had a better sense of their own internal system and their own heart rate could actually make more money, which is just it kind of blows your mind. That doesn’t fit within the normal sort of mindset of what intelligence is, right? We think about IQ tests. Well, that IQ test doesn’t measure whether you have a sense of your heart rate. But I think we’d say that some of the greatest Wall Street traders would have something that we think would be some sort of form of intelligence.
Helen: Yeah, that was a gut feel is real, and the theory being that signals from your body are not subject to cognitive biases. So they were getting a more raw signal, which I think is fascinating.
Dave: Yeah, and then take the work of Barbara Tversky—who’s been a major inspiration for us. She’s an advisor for the institute—and her work in her book Mind in Motion, which we highly recommend, where she works on how our spatial reasoning is the foundation of abstract thought. Now, that’s only something that’s really come up in terms of human knowledge within the last few decades. A lot of of it is through her work, a few whatever the number right number is, where we understand that. So all of those things, to me, started to fracture this idea that we know that there is one thing called intelligence and there’s one ladder to go up.
Because if you stop and think of who are the people that you sort of put in the category of the smartest people you’ve ever met, they’re probably smart at very, very different things. And you couldn’t put one person who’s the English professor in the seat of somebody who’s a designer. They just don’t have the same forms of expression. So once you fracture that into bits, then you start to wonder, well, what else is a form of intelligence? That’s where I think we gravitated to Michael’s work and understanding intelligence in layers and in different systems and in different species.
Once you allow that to happen, you can stop and say, “Huh, these machines, they’re not like us.” Actually, we find that to be liberating. It’s a very different form of intelligence. I think it’s wildly unimaginative that the industry is trying to make it like human intelligence. It operates in a combinatorial space that’s completely different and beyond our comprehension. Why don’t we go figure out what this thing can do when it’s it—not when it’s trying to be us?
Brandon: Yeah, fantastic. That’s really great. Yeah, I think there are a lot of mistakes that we’re making also because we think that it is capable of replacing not just the kind of intelligence that we best exhibit, but also replacing us.
And so, Dave, you wrote just, I think, a few weeks ago about this “SaaSpocalypse” or whatever—the phenomenon of mass layoffs and so on in response to the fear that Salesforce and so on are no longer necessary. Could you, for folks who are not familiar, could you very briefly touch on what happened and what’s wrong with that decision-making process?
Dave: Sure. So SaaSpocalypse is a name that someone threw out. I can’t remember who, but I would like to know who so that I can actually quote them and attribute it to them. But the idea was, there were some new advancements that came out from Claude, from Anthropic. It was really centered around Co-Work, which is this part of Claude that allows Claude to kind of take over your machine and do a whole bunch of stuff there. That, combined with some other agentic features that they were allowing to come out, gave sort of everybody said, “Well, wait a second. This thing can do a lot more than we thought it could, or at least more quickly than we expected.”
The narrative became: why would you want to pay for a CRM system like Salesforce—if you’re a large enterprise—if you can just ask Claude to go build one for you at the moment or to go find the answer to a particular thing, like which customer should I call now? And so you’re just going to ask Claude, and it’s going to dynamically do the thing for you. And so why would you pay all of these big SaaS fees? That created on Wall Street the SaaSpocalypse. It was one of the worst days in Wall Street. It was definitely one of the worst in the software industry. I don’t remember all the rankings, but it was definitely an apocalypse in terms of the SaaS stocks.
So the challenge for me is really that it doesn’t make any sense, mostly because these AI systems are only as good as the data they have, right? So they’ve been trained on an extraordinary amount of information, which gives them a lot of understanding of the broad world. But when you’re looking at, “Can it help me figure out which customer to call?” there’s a lot of specific data about your organization. It’s not just who’s the customer and who did they buy last, but all of that fabric of all of the human connections that have happened. How often did somebody call before? When did that actually come out? What was the last conversation they had? All of that falls into this broad category of context.
So an AI system, in order to help you out, needs context—needs context for the thing you’re asking about. It’s not omniscient. None of these things are all-knowing. There’s only a certain amount of digitized information in the world. And there’s a lot else that hasn’t been digitized, or at least isn’t very easily found. SaaS systems actually have those, and they have some level of representation of the fabric of human connection and the fabric of the human system. It’s not perfect. They weren’t designed to be perfect at this, but it exists. And if I was thinking about this, I’d want to put the AI system on top of it. Because you’ve already got the context there. You want to improve upon that.
Brandon: Great. Thank you. I think that that is an important sensibility in order to prevent these sorts of mistakes from being made in the future.
Dave: Yeah, I think the key thing here with this is one of the sort of metaphors that we used a while back that Helen inspired. I wrote a piece about it called the Dust Bowl. This came out of an experience of, we went and visited one of our kids who’s in grad school at the University of Wisconsin in Madison. He took us to the arboretum where they have a prairie restoration project.
Helen: This was in September, and I think it was the most beautiful experience I had last year.
Brandon: Wow.
Dave: It was gorgeous to stand and look across a prairie in its natural state. It’s something that growing up in the States, we’ve learned about it, heard about it, what the prairies used to be.
Helen: I thought a prairie was like a golf course where they didn’t mow the grass.
Dave: It’s just incredible diversity and beauty. But what the thing that really struck us about it was reading through this prairie restoration project, and they said that in order to bring the timeline to get back to its true natural state would be 1,000 years. That moment really struck us, and it became this idea that we used as a metaphor of the fabric of human connection.
Helen: Because it was about what was happening under the soil.
Dave: Under the soil. So what we see across an organization or a society is this fabric of human connection that keeps it all running. Sometimes it’s worked well. Sometimes it doesn’t work well. But it’s this massive complexity that we really don’t understand.
And so the metaphor of the Dust Bowl was: the farmers came through and they said, “Ah, look at this rich soil. We’re going to plow it under. We’re going to plant a whole bunch of monoculture crops. That’s how we’re going to make a lot of money.” What they didn’t understand is they were plowing under this complexity of the system underneath the soil. And in doing that and going to these monocultures, they destroyed that ecosystem, and it will take 1,000 years to come back.
If we look across an organization and we say, “We’re just going to automate with a whole bunch of AI agents. We’re just going to fire all the humans and get rid of that human complex system,” what is it that we don’t understand that we’re going to plow under just like the Dust Bowl? What will we lose without even knowing it? Because we don’t know what we’re plowing under, and it will take perhaps a very long time to recover from.
Brandon: I think this is all also happening when some of that erosion of the social fabric has already been accelerating, even before ChatGPT came on the scene, right, and we’ve been talking about Bowling Alone since the late ‘90s. And so I wonder if the sort of increasing individualization, the increasing mistrust of institutions, the fragilization of human community, has been accelerating as these new technologies are being introduced. I wonder if you might speak to perhaps what the consequences are there. Because it’s not only a question of figuring out our relationship to these new technologies, but also doing that in this context in which that deeper human system that has kept us thriving to some degree for millennia is now eroding.
Dave: I think you’re totally right. There’s two thoughts that pop to head, in my mind. One is, we talk about a transition from the attention economy to the intimacy economy. I think that’s important in terms of the sort of historical context you’re providing: that our society is fracturing in ways that has become very uncomfortable and distressing. I can use a whole long list of words of the sort of negative aspects of it. And to some degree, it has been caused by the attention economy, right? So we’ve connected ourselves to these systems.
We love the work of D. Graham Burnett, who talks about how our attention has been fracked into tiny, little commoditizable bits. It’s been sold off to the highest bidder. We’ve lost context with each other. We’ve lost the sense of kindness and care. We’ve gotten pushed into our little echo chambers of space. That has made it harder for us to still find some level of coherence in a collective, because we’ve all gotten comfortable in our own sort of echo chambers of space.
We move to the intimacy economy. What’s happening is the machines aren’t necessarily harvesting our attention, but they’re gaining an intimate understanding of us, right? Because we’re telling it our hopes and dreams. We’re telling it what we want to do. “How do you help me solve a problem I have? Here’s what’s happening in my body. Help me understand my health.” All of these things. Now, what happens if the industry wants to extract that from us? What happens when we move into those little echo chambers?
I worry quite a lot about the individual nature of these tools. We’re having conversations—one person to one machine. If the echo chambers come down to an echo chamber of one, which is all about serving your own sycophant interests, our vision for that is actually a shift in a way we think about constructing these products. I’d rather not see us think about AI as the product itself, that the chat window is the product itself. I’d rather us start to think about products as institutions, where we humans and AI come together. And so there’s a space for us to gather, where each of us have a role in whatever that institution is. It could be a creative institution to create things. It could be an educational institution. It could be the institution of a family and how we’re all going to get along and share our passwords and figure out what we last talked about. You know, whatever it is, the AI is a participant in something we’re creating. It’s not just a one-to-one.
I think that this world is at a very delicate and dangerous spot to put this sort of intelligence that we don’t really understand, don’t necessarily know how to control, and it is especially being wielded by those who seemed quite comfortable with fracturing of society.
Brandon: Thank you.
I want to ask about, perhaps—I mean, maybe even just to expand on this—in looking at just the landscape of the use of AI systems and where things are headed, but also taking into account your vision of diverse intelligences and your sense of wonder at emergent consciousness perhaps, or whatever you might call this, emergent symbiogenesis or whatever this relationship might be—where do you find a sense of awe? Perhaps, where do you find a sense of grief?
Helen: I’ll start with the grief. I want to separate grief from nostalgia. I think there’s a ton of things that we can and we should leave behind. I don’t want to go back to sort of some old structures. But I have a sense of grief over where there was a time when we were able to more clearly see that the human project is about humans solving problems together, that there is a transformative and transcendent meaning-making process that happens when you work with other people to solve a problem. It doesn’t really matter what the tools are, but it’s about the people showing up.
My most beautiful career moment was a recognition of that—a very closed team that had to go and solve a problem in a short amount of time. It was fractious, and it was sweaty, and it was hard. People were like, it was emotional. But we solved it together, and there was something transcendent that happened in that process. I grieve for that process, because I actually feel that’s happening less. I may have no data to support that. Well, actually, I do. I have 1,250 transcripts that show me that people are much more likely to be thinking about their own problem in a very individualistic way. They’re not thinking about how they show up for others. So I grieve for that.
The awe is kind of easier, because once you really click into just how incredible it is that we have developed a technology that learns the complex structure of the world in a completely different way than we did, that picks up these different scale effects. And now I find it awesome that the same basic foundational transformer piece inside of the technology of a language model can learn a language, can learn all languages, can learn the physics of a grid, can learn the geophysics of a weather, can learn the molecular structure to be able to predict proteins. This is phenomenal.
I mean, to me, that is just awesome. That’s just the bits that the AI researchers have found interesting as problems to solve. Wait until people who aren’t AI researchers can go and find these same structures in the world using something like the transformer in their own way to find structure—whether it’s the structure of mental health. You name it. There are structures out there that we can now learn and play with and represent and have talk to us in our own language. To me, that is just — I really wonder at that, you know. I love that. I can’t remember who said it. Awe is when your brain breaks, and wonder is when you put it back together again. And so I’m in a wondering phase about this.
Brandon: That’s brilliant. Dave, how about yourself? Perhaps, maybe if I could ask you to sort of change the scale a bit to a little bit in your own use of technology, where are you finding the sense of awe and perhaps a sense of grief, or a sense of beauty and a sense of burden?
Dave: Yeah, I’ll go for the awe first. I’ll go in reverse order. I’ll be quite small in terms of the experience. I think out loud. I’m a classic extrovert. I speak it. I think out loud, and I have to hear it back. I form thoughts through discussion. Either I speak it out loud, or I talk to somebody else. I frequently say something, and then she has to wait and figure out whether I disagree with myself after I’ve said it.
Helen: Now you can understand why the word “maybe” is the worst answer he can give me.
Dave: Because I’m not quite sure yet.
Helen: He’s not even giving me the respect that he’s thinking out loud.
Dave: I’m used to my ideas moving around and changing outside of myself. I have awe that I can use a machine to do that. I can take a walk, and I can prattle into a voice memo and put it into Claude and go, “This is what I’m thinking. Make some sense out of this. Replay it back to me.” It comes back with some organization and then I go, “Yeah, that’s not really what I’m thinking. That’s not really what I’m trying to say.” But I get a reflection then. When I want to expand on my ideas, I love to do things like, say, “So here’s what I’m thinking. What would Heidegger say?”
Now, it’s clearly not accurate, right? I’m thinking of something that’s well past Heidegger’s time. No one could actually reflect. But I can bring some of that thought process in, in a way that’s difficult sometimes for me to make the leap myself. I sometimes do it with half a dozen philosophers just to sort of push and pull, give me new ideas that I hadn’t thought of before. I find that to be awesome. I do. I have a true sense of awe over it.
Grief—I will share a moment of grief that I hope that we can pull back from. We publish a lot. People can find it on our website. We publish hundreds and hundreds of articles. Based on the structure of most internet sites, we put an image at the top. Because that’s what happens. It’s one of the things you do. For a while, we used stock footage. That became uninteresting. Then when Midjourney came around, I said, “Well, I’m going to start using this.” I started creating a particular prompt that was sort of on brand for us. And then I shifted it around, and I wanted it to get more interesting. I actually got to some really interesting ones of sort of biological patterns and geological structures. I was trying to be really abstract about it.
But then I got to a moment where I realized that, even with all of that effort and I pushed the button, I still didn’t feel like a creator. I was just operating a machine that was the creator itself. And I had a sense of grief over that. Now, I am not an artist. I wouldn’t call myself an artist. I aspire to be in that sort of artistic and design community, in that sort of hands-on way. But I still felt grief over not going through that process and that struggle together, and the grief of not posting something that I felt was truly mine or ours.
And so I shifted. And now when we do publish with images, almost exclusively, now they’re photographs that one of us take. We love being amateur photographers. And so we take those photographs. Now when we take walks in the woods, or this time of season when we ski in the woods, we’re taking photos of things that might be something we’ll use. I grieve for that loss of creative capability, because people will default to the machine to do it instead. I find that especially painful. Some of the most formative parts of my career long ago were creating the tools for creative professionals. It’s with a life’s journey and purpose to create tools for people to express themselves. I don’t want to lose that because the efficiency of the outcome is faster using the machine.
Brandon: So in that regard, would you recommend any practices for maybe maintaining boundaries or setting boundaries in the ways in which people engage with tools, with AI technologies?
Dave: Yeah, I think it’s very much an individual journey. There is definitely a case to be made for using AI to make tons and tons of images in some ways. It’s because there’s enough AI reading the internet, if you will, that you might as well make the images with AI for the AI to consume, right? But as a creator, I think the main thing is being very careful about the tools you choose and where in the creative process you use the tools. That’s an individual journey.
Now, there are some tools that are up and coming. I’m particularly intrigued by a tool called Fuser and one called Open Studio. I’m also interested in what’s being done in some of the big tools. Take my old tool, Final Cut Pro: they’re starting to bring more AI into the production phase—really the post-production phase. So there are lots of places where that can work.
A few years ago on our podcast, we talked to a creative—an industrial design professor—about how he was teaching his students to use generative AI to come up with lots of ideas. He wasn’t encouraging them to rely on the tools for finished work, because there weren’t great ones then; there are better ones now. But he said, “Look, I can only look at so many books. I only have so many books on my shelf. I can only walk and see so many curves of a hill. But when I’m trying to design a widget, I need other forms of inspiration.” So he uses a generator to produce tons of images, and then he goes, “That’s the curve I’m thinking of. Okay, now how do I take that curve and put it into the thing I’m creating?”
So I guess my recommendation would be to think carefully about where AI can help enhance and expand your creativity, and where it can expand your capabilities too, right? It can take the hat somebody is wearing in an image and change its color in a way that would take far too much time to do by hand. Maybe you really love doing that process by hand; maybe you don’t. But it enhances some people’s capabilities, and that’s great. It’s about being mindful about where you’re using it.
Brandon: Helen, how about yourself?
Helen: In terms of how I use the tools?
Brandon: Yeah, in terms of boundaries that you might want to maintain or recommend ways for people to think about how to preserve a sense of self-coherence.
Helen: Yeah, I think the number one is thinking about how you’re going to show up for others. It drives everything back down the chain. It’s everything from “don’t pass work slop on”—which is obvious, but people just forget about it—to how you’re going to show up when you yourself have changed, or when somebody you’re with hates AI. One of our daughters hates AI. Like, you wouldn’t believe. Her tolerance for me talking about it is phenomenal. She’s just wonderful. I know that she talks about me behind my back. “Oh, Mom...” I know because she sends texts to the wrong people—as in, to me—kind of giving it away. But nevertheless, she is gracious.
I look at the way that the tech leaders are showing up for us right now. None of them are showing up in any way, shape, or form that is about humanity or about graciousness or about humility. It makes me angry that they are placing a narrative into our culture that is destructive. There is absolutely nothing constructive here except for their own valuations. People are anxious, and people are rejecting this technology when they shouldn’t. It’s really, really cool. It’s part of us now. We made it. It’s part of us.
And so there’s a boundary I always keep—I learned it the hard way, as I describe in our new book, Stay Human, a very accessible chapter-by-chapter portrayal of our research with our personal stories woven into it. The biggest mistake I made was showing up in the wrong way. So I think the way you show up for others matters the most.
Brandon: Yeah, I think that’s critical. I want to go back very briefly, as we’re closing, to the comment you made at the very beginning about what drew you to Dave—that capacity to see another person and to reflect back to them that they’re seen. That is what the sociologist Allison Pugh calls the last human job. It’s this connective labor. It’s not intelligence, but it is the frontier we have to be really careful about not losing. If, as you say, we have to fight to stay human, maybe this is the real thing we need to learn how to protect and to cultivate. Because, as you’re saying, our leaders are not doing that, and our society is forgetting how to do it and not really valuing it.
So I just wonder, as we close, if you have any thoughts about how we can still preserve that fundamental human capacity to see the other, to reflect to them that they’re seen, to center this sort of mutual relationality as we move forward in this new age.
Helen: Yeah, well, I think there are two answers I have here. One is the obvious one: the thing that makes us feel like we matter is when someone sees us—we feel like we matter when we know that someone else feels like we matter. So there’s that. It’s so obvious that we sometimes overlook it.
The other is a little bit more of a science answer. One of the things that I love about AI is that once we start to try to mathematize things, we find the hole where the math doesn’t work—there’s a frontier, and the math can’t go past it. Our values are ephemeral. They resist that codification, that putting into math. And we’ve seen this: when you mathematize things, you’re able to reason more precisely about them. But then there’s this slippery frontier that moves away and stays ephemeral.
So around 2018, when everything was about AI bias and how the vision models were treating people of color and women and what have you, we started to reason more precisely about the way that we as a society saw those groups. This will keep happening with AI. That’s why we’re talking so much about intelligence: it’s been mathematized up to a line, and then there’s this bit past the line that asks, well, what is intelligence?
I don’t make many predictions, but I’m going to make this one: the next thing this is going to happen to is care. The reason I say this is that there is already a new sort of science forming in psychology around care. What does it mean to care? Alison Gopnik, at UC Berkeley, has started to talk about this. She talks about the explore–exploit trade-off: as a child, you explore; as an adult, you exploit. Your job as an adult is to take care of that child so that they can explore, so they can gather data, so they can make all those mistakes.
Now she has started to talk about a third part of our lives, which is about care—what we do as we become older adults. Our job then is care, and she’s thinking about what that means very scientifically. Of course, as soon as you start thinking about something scientifically, someone’s going to want to make an equation about it. And as soon as someone makes an equation about it, we start to reason more precisely about it. And as we do that, we start to see that, oh, there’s actually something here we don’t quite understand, so let’s go study it.
So I think we’re going to start thinking about this idea of care in a much more precise way. We’re going to start valuing it. My hope is that we will start to see that caring for others actually has real value—something we can think about in a more economic frame. That might break us out of this automation of thought, as opposed to the automation of everything else. So that’s where I see the frontier.
Brandon: Dave, do you have anything you might want to add there on that note?
Dave: Yeah. I guess, back to your question about being seen: there’s a possibility that AI systems will see us in some way, right? There is some level of recognition you feel when it has a memory of a last conversation. There’s definitely a question about whether an AI could have a theory of mind about humans, or about other AI systems. So there’s a certain level of that. But there is something about being seen that we don’t really understand. What does it mean? Why do you feel like some people really see you and others really don’t? Do they not see anyone, or is it something about the combination of you two as individuals, or that moment, or that context, where it’s like, that person just doesn’t get me? Right?
Brandon: Yeah.
Dave: We have no explanation for that. Nothing. Right? But it’s something that’s so fundamental.
Helen: We only have attention, which is a very surface-level part of this.
Dave: Exactly. It’s so important to being human. The machines of the tech industry have been able to find that point of attention and monetize it. Maybe there’s something else there that the machine will be able to do. But I think there is a great mystery in the meaningfulness of being seen. It’s incredibly important one-on-one. It’s incredibly important for organizations, for groups, for a society. That’s what holds humanity together in a lot of ways—that we feel seen by each other.
So if you abstract everyone and you put everybody in their own little cube that’s only talking to the machine, and you’re not talking to any people anymore, how are you ever going to feel seen? How are you going to feel that sense of attachment? How are you going to feel that bonding with an organization? How are you going to feel connected to a community and to a society? And so my hope is that we don’t abstract that away. Because that’s where the true meaning is. That’s where the true value is.
Brandon: Brilliant. Well, David, Helen, thank you so much. This has been really fantastic. We’ve learned a lot. Where can we point our viewers and listeners to learn more about your work?
Dave: So our website is artificialityinstitute.org. We have all of our publications on a subdomain there, journal.artificialityinstitute.org. You can find us on all the socials. Connect with us on LinkedIn. You can watch lots of videos—especially from Helen right now—on YouTube, TikTok—
Helen: And Instagram.
Dave: —and Instagram. All those links are on our website. We love communicating with people. I’d encourage people who want to go deeper into these conversations. We have a digital community which we host. It’s using a product called Circle, if people are familiar with it. You can find a link to that on our website. It’s just slash community. Fill in a little form, because we try to keep it somewhat managed. Join us, because that’s where our community comes together to talk about these things much more deeply.
Brandon: Amazing. Well, thank you so much. It’s been really a pleasure.
Dave: Thank you.
Helen: Well, thank you for having us on.