Preserving Human Connection in an Age of AI
A conversation with Allison Pugh and Louis Kim
We’ve entered a strange moment in which the most basic questions about being human are being reopened—not just by philosophers whose profession it is to care about these things, but by the technologies that increasingly mediate our relationships. As AI and automation spread, we’re finding ourselves asking not just what activities can be outsourced, but what must never be.
That’s what I wanted to explore in this episode of Beauty at Work with two guests who come at the problem from strikingly different angles.
Dr. Allison Pugh is a Professor of Sociology at Johns Hopkins and the author of The Last Human Job, an award-winning book that names and defends what she calls “connective labor”: the profoundly human work of “seeing the other” and reflecting that seeing back. This form of work generates dignity and belonging, and is increasingly threatened by pressures toward scripting, quantification, and efficiency.
Louis Kim spent decades leading innovation at Hewlett-Packard, most recently as VP of AI, before resigning to pursue an M.Div. at Duke to work in hospice and palliative care. His story is shaped by death and accompaniment—standing for hours at his father’s open casket funeral, hearing “a thousand versions of condolences,” and learning what presence can (and cannot) do. He also participates in Vatican-linked conversations about AI and healthcare, asking what forms of human encounter must remain non-negotiable.
In our conversation, Allison distinguishes connective labor from the better-known idea of “emotional labor.” Emotional labor is what happens when you manage feeling for a wage. Connective labor is different: it’s a “two-way street,” a mutual moment in which someone is seen and feels seen. It’s also everywhere: therapists and chaplains, but also baristas and dry cleaners and managers.
And yet, the pressures bearing down on this kind of work are real. Louis describes what it means to deliver layoffs in large organizations: scripts, euphemisms, procedural safeguards, and then the slow realization that what matters most is the ability to go “off script,” to be present without defensiveness, to honor dignity with a kind of nonverbal readiness. Systematization in such contexts is “an artifact of scale,” Louis argues, and depersonalization is already baked into modern life. What do we do about it, and what does it take to not become a perpetrator inside the system?
Allison identifies three drivers behind the wider push for scripting and quantification: systems-management thinking that wants to control the uncontrollable; institutional self-protection through performative box-checking; and the desire (sometimes understandable) to reduce the “chaos” of human unpredictability. These forces make the adoption of AI seem “inevitable” — but Allison pushes back against inevitability talk. Sometimes adoption is fast not because it’s destiny, but because people don’t really get to choose. And that’s why agency, regulation, and culture matter.
This episode is an attempt to name what is most fragile in us—our capacity to see and be seen—and to ask what kinds of institutional and technological futures protect that capacity rather than replace it because of its messiness and inefficiency.
You can listen to our conversation in two parts (here and here), or watch the video of our full conversation or read an unedited transcript below.
Brandon: Hey, Allison. Hey, Louis. Thanks so much for joining us on the podcast. It’s really such a pleasure to have you with us.
Louis: Thanks, Brandon. Hi, Allison.
Allison: Thanks, Brandon. Hi, Louis.
Brandon: Great. Well, I thought this would be a really wonderful occasion to have a conversation on the relationship between technology and connection. This season of the podcast is focused on the beauty and burdens of innovation. We’re looking at a lot of innovations that are happening in relation to how we connect with each other, how we relate to each other. So that’s one of the reasons I thought of bringing you both together into this conversation.
But before we go there, I want to ask you, as I do with all my podcast guests, to share a memory of a profound beauty from your childhoods or your early lives that remains with you till today. It doesn’t have to do with connection or relationship. It could. But any memory that comes to your minds? Perhaps, Louis, I’ll start with you, and then Allison.
Louis: Photography. So my dad gave me a camera, I guess, in third grade. Then later, I read books on photography. There are still images I remember from those books. I’m a photographer to this day.
Brandon: Do you have a particular image that stays with you or a memory of taking a particular kind of photograph?
Louis: Yeah, there’s a particular image in one of the books that was very geometrical. The lesson of the book was that we stop seeing as adults. As children, we just delight. We don’t attach labels and words to things. And as you grow older, you lose that ability. So it was just a very simple thing, and still a lesson about seeing deeply.
Brandon: Wow. Great. Allison?
Allison: Yeah, I’ve been thinking about this. I grew up in New York City, in a very large Catholic family. So the beauty, when you said beauty, actually, there was the ocean or that ballet class or something. But really, I was thinking about the kind of boisterous, kind of climate of our family dinners that were very ritualistic every night, very long. But that culminated in the kids all doing the dishes together to loud 1970’s music, that I still remember and cherish. So I think that it was a kind of, I want to say relational beauty, or I would say kind of climate of beauty, of relating to each other and doing this thing together. Yeah.
Brandon: Did you all get along at the time? When you say boisterous, was it just the volume, or were there tensions? I mean, I asked because I’ve got six kids. So it’s not a pleasant—
Allison: Oh, you do?
Brandon: Yeah, yeah. So, yeah, the constant bickering, the name calling, you know?
Allison: Yeah, I mean, there’s plenty of stories. I remember there’s one famous story of my oldest brother. He threw a knife at my oldest sister when there was a babysitter. I can remember them. My next oldest sister and I used to have long fights, that we would write long letters to each other about why the other person was wrong. So there’s plenty of conflict, but also a lot of good times—camping, various things.
Brandon: Well, I’m glad your memory of that environment is beautiful. It’s something I hope for my children. But presently, it’s not something that could be, yeah.
Allison: Yeah, I mean, I still love doing the dishes with my siblings.
Brandon: Wow.
Allison: I did that with my own kids. We had a thing, where the person who’s in charge of the pans gets to pick the music. We had this whole culture that I 100% was just copying from my own parents—who were nowhere to be found. Like, as soon as there were dishes going on, they were somewhere else.
Brandon: That’s right. Yeah, that’s what we do as well. That’s amazing. I mean, the focus of our conversation is this brilliant book, Allison, The Last Human Job, on this concept of connective labor. Tell us how you got here. Because your trajectory is pretty interesting. You started as a journalist and then became a sociologist. Then now you’re exploring this particular modality of connection and why it’s at risk. Could you say a little bit about your trajectory, and what led you to this book?
Allison: Yeah, I mean, on the one hand, I think this book was the dissertation I should have written—even though it took me 20 years to get there. Because it’s what I really deeply care about. The proximate cause, the immediate thing that led me there, was actually a fight I was embroiled in within sociology about the value of in-depth interviewing. So when I do an in-depth interview, which is how I do my research, it often involves just sitting and trying to elicit the other person’s truth through a kind of careful reflection of what I’m hearing, even if it’s not what they’re saying. So it’s like if I’m sensing some ambivalence, or unhappiness, or something underneath what they’re saying, I might name that thing. Then it actually opens them up. It opens up the experience. I do think it is a kind of profound seeing. It affects me just as much as it affects them. It’s exhausting, but very rewarding also.
After I was embroiled in this intellectual conversation about what is the value of in-depth interviewing, and are we just getting people’s rationalizations after the fact, I was like, “No, no, this is a valuable experience.” Then I was like, why is it valuable? Also, how do I teach it to others? How do I make it more systematic? How do I kind of scale it up as the, I don’t know, Silicon Valley people would call it? So all those questions really led me to thinking more deeply about what kind of work that is. So thinking about seeing it kind of everywhere—seeing it in the hairdresser, seeing it, of course, in the therapist, but also in your kid’s soccer coach, just in your everyday life, all over the place. So in that, I basically embarked on a journey of discovery, of like, oh, look, underneath all these wildly disparate occupations, people are kind of doing this same thing, this seeing of the other and kind of, I don’t know, co-producing this truth between people. I started to think not just about that, but how do we systematize it? How do we scale it up? Is it possible? Do you ruin it? How far can we go down that path? But you have to be able to teach it to others. It’s not something that you can just automatically assume someone is going to be born doing well. Anyway, those kind of tensions and questions were what fed me, what sent me, on this path.
Brandon: I really appreciate that you’ve named it as this same sort of process that’s happening for therapists, teachers, and chaplains and in a lot of other contexts too. I think you say it’s not necessarily always a positive thing, that it could perhaps be manipulative, right? And so it’s not always just an authentic seeing of the other for the sake of the other. So, yeah, the concept of connective labor, I think, is really generative for us to explore across these domains.
Louis, can I ask you to share perhaps your own trajectory and what drew you to — I think it’s pretty rare to have a corporate career in a company for as long as you have at Hewlett-Packard. I studied corporate professionals for a number of years, and would find people switching from firm to firm very quickly. That was almost an expectation that you would not really stick to one environment. There’s a sense of almost stagnation if people stay in one place too long. And so I’m curious as to what led you, first of all, to this field, to working in tech, what your experience was like there. What’s led you now to switch to something very different in terms of committing your life to do chaplaincy or hospice care, that sort of very connective modality of valuing human dignity?
Louis: I’ll keep the HP part shorter. I think it’s probably the less interesting part. But the quick answer for the longevity is I had a lot of different jobs. A good part of the tenure was leading teams with businesses that I had created. So it’s hard to leave. I’ve been in three different cities, including an international posting. So it was very different cultures and very different companies in some ways.
As you referred to, I resigned from HP in August. I’m currently enrolled at Duke Divinity in their M.Div. program. That decision was the culmination of four or five years of discernment after I got exposed to end-of-life and palliative care. I started volunteering for hospice about two or three years ago. Of course, the M.Div. sets you up for chaplaincy. So I’ll be done in about three to four years. The more chaplains I meet, the less confident I am that I’m cut out for it. But it’s something that’s still drawing me as a calling. I apologize, by the way, if you hear a bell ringing. It’s our well-trained dog trying to get out.
Brandon: Yeah, yeah, no worries. Could you say a bit about what drew you to this field? Were there any people, any moments, that sort of influenced you? Because it really is such a stark pivot for somebody in the corporate world to consider, and I’m wondering what might have been some pivotal influences for you.
Louis: My parents died when I was relatively young. So that was a formative experience in terms of dealing with death. Then about three or four years ago, I met Lydia Dugdale—I think you both know her—who wrote The Lost Art of Dying. That book exposed me to the systemic issues around end of life and dying. Then I went through some hospice experiences with relatives, including being at the final breath of a very close relative. I just felt very grounded in those experiences. The cause itself just felt large and unaddressed relative to other things I had been exposed to. The choice just felt inevitable at some point. I guess that’s the definition of a calling.
Maybe one last thing that was a formative experience: my father was killed in a car accident. He was a parish priest. He became a priest after my mom died. And so, at his funeral, his entire parish showed up. It was over 1,200 people. At the open casket ceremony, the day before the funeral, my sister and I stood for four hours greeting the well-wishers. And so I heard about 1,000 versions of condolences standing next to my dad’s casket for four hours. That shaped a lot of thoughts about accompaniment, what comments and gestures were helpful, and which ones maybe not so helpful, even though people mean well. And so that was a very formative experience that shaped how I view accompaniment.
By the way, I’ve read Allison’s book, so there’s a lot of questions I have for her. One other thing I’ll just mention is, in the last month, I’ve been in Rome twice for some AI theology conferences on healthcare. The topic that we’ve ended up converging on with regard to Catholic theology is: what are the final roles for a human in a world of advancing AI? What are some of the criteria? Some of the things that we conversed on overlap with things in Allison’s book.
Brandon: Well, I definitely want to ask you about that and about your role at the Builders AI Forum and what’s happening at the Vatican around these issues. Perhaps, Allison, could I ask you maybe to say a bit more about this concept of connective labor and its relation to this other term, that maybe people might be a bit more familiar with, emotional labor? There’s some sort of relationship, but it’s not quite the same thing. What, in particular, connective labor has to teach us about dignity, about belonging? What is its relationship to those terms?
Allison: Yeah, thank you. Thanks, Louis, for even reading it. I’m always gratified to hear about that.
So connective labor. In sociology and academia, there are a lot of terms for emotional-type work of all kinds. There’s the term affective labor. There’s all sorts—emotional labor, emotional quotient, emotional intelligence, et cetera. I think of emotional labor as the big kahuna. It’s the thing that maybe started all of this, coined by Arlie Hochschild. Her first articles were around 1977, and then she came out with her very famous book, The Managed Heart, in 1983. That captured the notion of when you have to control your emotions for a wage. It was a powerful contribution, because it was a way of capturing what felt different in service work compared to manufacturing. In service work, she famously studied flight attendants. These are people who have to smile even when they don’t want to. Using ethnography and extensive interviews, she documented how alienated these flight attendants became from their very selves, and how corrosive that was for their well-being. Of course, this offers a beautiful analogy to service work of all kinds—where you’re controlling how you might really feel, authentically feel, because you’re being paid to do so. You’re controlling that so that the other person has a good experience, feels good.
Well, connective labor is pretty different from that, even though it involves emotional labor, controlling your emotions for sure. Connective labor, as I define it, really is about that reflecting-the-other process, that seeing the other, the bearing witness to the other person. It’s recognizing, acknowledging. I’m using all these words that many people have used in different ways. The other thing I want to say that is really important for me in this definition is that it’s a two-way street. It’s a kind of mutual moment of togetherness. You are seeing the other, and the other person feels seen. And if they don’t feel seen, then it’s actually not a successful moment of connective labor. So it’s a two-way street, and it’s really powerfully about this seeing. The reason why there’s a little emotional labor in it is because of how you may feel. Say you’re a therapist at the VA—I’ve spoken to many of them. They have a client who is suffering PTSD and has a lot of rage. They may not feel all sorts of warmth and affection towards all this rage that’s coming at them, but they are still engaged in a seeing project that offers some form of dignity to that other person, regardless of all the difficult emotion and problematic persona that that person is for them. So that kind of mutual process, that’s connective labor.
Now, the reason why it’s a little complicated—probably depending on where your listeners are from: are they from academia, or are they just out there in the real world?—is that since emotional labor is such a felicitous term, people have, since that writing, since 1983, really applied it to everything that involves emotions in the workplace. I’m actually pretty open to that. So if we want to call emotional labor the big, huge umbrella thing, where anytime there’s emotions in the workplace, that’s what I’m doing, I don’t mind saying connective labor is a version under that umbrella, one kind of emotional labor that is done. But I do want to separate it from that emotion-management perspective, which was so beautifully captured originally in 1983. So that’s why it’s a little complicated. Because the term itself has moved, and it also means something different depending on whether you’re talking to a regular person who reads the Atlantic or whatever, or an academic.
Brandon: I wonder if this is accurate or not, but it strikes me that one of the key differences might be that there’s perhaps a more performative element to emotional labor and maybe less of a mutuality, right? It seems like what you’re talking about is closer to Hartmut Rosa’s notion of resonance, where there is something that affects you and you are affected by it. That is a mutual interaction, which Rosa argues you can’t really make happen. It either happens or it doesn’t. You can certainly try. It seems like that’s what a good therapist is trying to do, trying to make that connection happen. Even as a parent trying to understand my child, sometimes it just doesn’t happen. I try to say that I’m hearing this, and they’re like, “No, that’s not what I’m saying.” It just fails. It’s not that mutual interaction. So I wonder if that mutuality then becomes really critical. But if so, it does seem, as you say, sort of like magic. There’s a thing we try to control and, I think, can’t control.
Allison: That’s interesting. Also, I do want to say one thing about the belonging and dignity part, which was part of your original question. Because as an academic, I get all excited about parsing the definitions.
Brandon: Sure, sure.
Allison: But really, the crux of your question was about, like, what is the role this has in belonging and dignity? That’s the end product, the result, of doing connective labor well. The other thing is, as I mentioned at the outset, it’s all around us. It’s all over. It’s your barista. It’s your dry cleaner. There are all kinds of mundane, low-level commerce, retail, or whatever interactions that can involve a momentary jewel of connecting, of seeing the other. Those kinds of things build our little units of belonging that I think are very powerful for knitting us together as a community. I have increasingly come to think that this is the heart of everyday experience that we need to preserve. And so I’m not just talking about these deeply meaningful relationships of the therapist, or the teacher, or the chaplain in the hospital or whatever. So I’ll stop there because I’m talking too much.
Brandon: No, thank you. No, that’s very helpful. Yeah, it’s not just the moments of profound encounter, but also the simple interactions with a checkout counter person, right, and why that’s valuable.
Louis, I’d love to know, in your experience of leadership in the corporate world, where has this kind of connective labor been manifested? Is it important for leadership? How have you experienced it as a giver or a recipient of this mode of connection? And perhaps, what are the challenges to this kind of connective labor in the corporate environment?
Louis: Well, some examples that come to mind are when a group or a job goes away, and you have to inform either an individual or a team that they’re going to get laid off. I had to do a lot of that. There are perfunctory ways to do it. There are scripts that you can follow, which are necessary when you’re in an organization of 10, 20, 30,000 people. Over time, as I’ve gotten older, I found myself going off script, just sort of being in the moment, and avoiding euphemisms. I think an important element, something that I think Allison is referring to, is that there’s just a presence that is necessary. That’s part of, I guess, honoring dignity with presence. Sometimes it’s nonverbal. It’s just fully being there and ready to receive the moment even when it’s difficult. There’s something almost, I don’t know, ontological about it. I don’t know if that’s the right word. But beyond just making the situation more useful or having a better outcome, there’s just the fact that we’re human and deserving of dignity, which requires a certain state of being, versus what you need to say to accomplish a certain task. So I don’t know if the word is ontology, but it really transcends what you’re trying to accomplish. In my corporate experience, I would say layoffs come closest to what I’ve experienced volunteering on the hospice side. Of course, it’s very, very different, but these are emotionally very, very charged moments.
Brandon: I suppose it is a kind of death in some sense that you’re dealing with, right, in that case?
Louis: Yeah.
Brandon: So I want to ask both of you about one of the sources of pressure that, Allison, you mentioned in your book, which is the increasing pressure for scripting, checklists, and quantification. So you’ve got these twin pressures. One is to collect as much data as possible on all aspects of our human life, and the other is to have all of the scripts and checklists. Louis, you were mentioning that maybe there are scripts put into place so that people can manage those kinds of interactions more productively. And this pressure to standardize seems not just restricted to the corporate world. Allison, you opened your book talking about chaplains and how they’re facing the need to see prayer as a resource and family as a resource. Everything takes on this sort of Heideggerian framing, right? Everything is turned into a means to an end. I wonder. Perhaps, Louis, I’ll ask you just from your experience. You’ve just started your M.Div., but are you already seeing some of this pressure in the field of chaplaincy, in this field that you’re venturing into? Do you see pressures similar to the corporate pressure to script things, to manage things, to control things, to collect data, et cetera? I’m wondering if you’re already seeing that.
Louis: Yes. I’m actually in my third semester at Duke. I started last fall in a non-degree, two-semester certificate program in healthcare and theology. About 20 of my roughly 22 classmates were doctors. So I heard their stories of the mechanization of their practice, which is one of the reasons why they enrolled in this program. I also, at HP, was involved in developing some wearables for healthcare practitioners. They can listen to conversations, record them, and transcribe them. It’s a pretty common use case. We did a lot of ethnographic interviews with healthcare practitioners. As Allison cited in her book, up to three or four hours can be spent on documentation for insurance reimbursement. So, yes, I definitely see this creeping in.
Brandon: Allison, could you talk about what’s driving that sort of pressure? In some of the areas where I’ve seen it, it really does seem necessary. I’ve been doing research on Catholic priests, and since the clergy sex abuse crisis was exposed, a lot of training and institutional safeguards have had to be put into place to prevent abuse. The kinds of documentation and bureaucratic procedures involved seem really necessary. But they make it very challenging for Catholic priests, for instance, to actually do any kind of effective youth ministry; people argue that it’s not possible because of the history of abuse. With the new safeguards in place, it becomes very challenging to engage in the actual task of encounter, which might be misperceived. You don’t want to get into a situation where there’s a lawsuit because you looked at somebody askance. So a fear has crept in that impedes the very tasks they feel called to do. And I think there’s a fear of lawsuits in a lot of fields. Medical professionals, too, sometimes feel the pressure to minimize some of their interactions, to document more and interact less, et cetera. So I’m just curious. Could you give us a sense of what factors are driving these institutional challenges to connective labor? Which of these are necessary? Which are helpful, and which might be the kind of impediment that leads us to say that, therefore, we need AI systems to take over or something?
Allison: Thank you. I also want to say, parenthetically, Louis, I thought your description of sheer presence, the power of just being present with another person, was beautifully said. A really great capture of what we’re talking about. So thinking about scripting, data collection, and the imperatives that drive them, to me there are three kinds of drivers. The first is the growing dominance of systems-management thinking, stemming, really, from the Enlightenment: how much can we make this a controlled, objective process? That scientific objectivity started with the factory and has moved through emotional, humane, interpersonal service work like teaching or chaplaincy. That’s a historic trajectory whose impact people feel everywhere. Hairdressers telling me, “I only have 22 minutes.” They only give you 22 minutes, even if it’s a real interaction where they have to look away from the mirror and engage with you. That stems straight from this desire to control the uncontrollable.
But what you were describing is what I would call the second imperative driving this, which is institutions trying to protect themselves. That’s the adoption of bureaucratic procedures that—not in all cases, but frequently—can feel performative, where you’re just checking a box. Certainly, in academia, we have a lot of webinars we have to watch just to say, can you effectively, I don’t know, manage a lab? So you’re watching some webinar for half an hour, and checking a box that you had that training. It has very little to do with whether or not you can actually manage a lab. That kind of performative box-checking is something you find across many bureaucracies.
But there is this third thing, for which I have probably the most sympathy. It came out in what Louis was saying earlier, about the scripts being useful when you actually have to lay off people in a very large organization. Now, my sympathy is small in his case, because I really like where he ended, which was: I try not to do that now. What scripts, and the regimentation or systematization of these interactions, do is control the chaos that is the other person. When you’re giving them bad news, they could respond in lots of different ways. The worker is a person also. They may be afraid of what someone else might do—including lawsuits, but also flying off the handle or whatever. What systems do is control. McDonald’s is the apotheosis of this: they control the consumer as well as the worker. The consumer knows where to go to put their tray and all this stuff. We are controlled, just as much as the worker is. That’s what systems do.
Other people are unpredictable. For many practitioners I’ve spoken to about this—teachers, therapists, and doctors—that’s actually the beautiful thing. Because that’s where true collaboration comes from. You can’t predict the other, so you’re waiting, listening, responding in the moment. There’s a beautiful spontaneity to that. So it’s a beautiful part of being human and of human interaction, but it is also potentially scary and threatening, certainly to institutions. That’s why they’re putting in these different systems and scripts. But I want to add, finally, that I really like what Louis did. I think disarming the threatening chaos of the other person happens when you treat them like another human being. If you honor their presence, they are much more likely to feel honored by that and to be calmer.
Brandon: Yeah, thanks, Allison. Yeah, I think your book is a really important call for personalization, right? I think that word is being used now in a very different sense, which is now sort of colonized by technology in ways that—
Allison: Yeah, we need to fight that back.
Brandon: Yeah, I think you call it customization, right?
Allison: Yes.
Brandon: And so what it takes to personalize is a person. I’m curious. Perhaps, Louis, if I could ask you: in your work in the AI space, have you seen whether some of what’s driving the development of various new AI tools is a desire to bring about the kind of control that Allison is talking about—for institutions to be able to control behavior through scripts, et cetera—or even to provide a non-judgmental interaction? I think about some of Allison’s examples, of patients who felt judged by an intimidating physician because they were obese. So is some of what’s driving the development of AI tools the promise of a customized, judgment-free interaction that can substitute for human encounter? Because it seems like there are a lot of these AI antidotes to loneliness, let’s say. I’m just wondering what you’ve seen working in that space, as to what’s driving the development of some of these new technologies.
Louis: Well, it’s hard to generalize. There are a lot of motives. I have been around companies that have chatbots. I’ve been around companies that have small robots for elder care, for people who are alone. I would say, for sure, everyone is just trying to be effective. There’s no insidious motive to displace a human. They’re just trying to be helpful. I think where there is debate and consternation is on the themes in Allison’s book. Will it promote replacing a human prematurely? Can you really replicate a human relationship, et cetera? On the issue of systematizing things, I would say it’s almost an academic point, a moot point. It’s going to happen. It’s an artifact of scale. I mean, there is so much depersonalization already in our world with industrialization that we just take for granted. We don’t know where our food comes from or how our clothes are made. Within a particular organization, when you grow from 10 to 1,000 to 10,000 to whatever, you’re going to see this depersonalization. It’s just inevitable.
So I think the question then is, what to do about it? It’s not an easy answer, but I think the moments that I’ve seen where someone transcends the effect of being in a large, depersonalized system come down to individual moral formation. The little that I’m learning about CPE, Clinical Pastoral Education for chaplains, I would summarize as: kind of get over yourself. Some of the most toxic behaviors I’ve seen in interpersonal relationships really just come from people having their own unresolved issues that they then force onto other people. For any individual worried about this kind of structural depersonalization, the question is, well, what are you going to do about it? How do you not become a perpetrator in that kind of system? I think that often the answer is your own formation and getting over your own issues. Sorry. Sorry about the interruption.
Allison: Someone wants to join in.
Brandon: Right, yeah. Alright. Everybody, that’s a good place to pause our conversation for now.
(outro)
In the second half, we’re going to turn directly to the role of technology, especially AI, and ask: Can AI meaningfully help with the work of human connection? When is it better than nothing? When is it even better than human? When does it erode our capacity for belonging? We’re also going to explore Louis’s work at the Vatican and what he calls the “irreducible encounter” principle, and see what Louis and Allison think we might be able to safeguard through policy decisions. See you next time.
PART TWO
(intro)
Brandon: I’m Brandon Vaidyanathan, and this is Beauty at Work—the podcast that seeks to expand our understanding of beauty: what it is, how it works, and why it matters for the work we do. This season of the podcast is sponsored by Templeton Religion Trust and is focused on the beauty and burdens of innovation.
Hey, everybody. Welcome to Beauty at Work. This is the second half of my conversation with Allison Pugh and Louis Kim. In the first half, we explored the heart of connective labor—what it requires, why it matters, and how pressures towards scripting, quantification, and efficiency threaten the moments that generate dignity and belonging.
Now we turn to the question that almost every institution is facing today: What happens when AI enters that space? Is AI simply a tool that helps with things like documentation and workflows? Or, as Allison warns, does it risk accelerating a crisis of depersonalization, offering customization in place of personhood? How should we think about the way AI systems develop non-judgmental technological substitutes that tempt us to bypass difficult emotions like shame? Also, we’ll hear from Louis about his insights from his work with theologians, physicians, and ethicists at the Vatican regarding the forms of presence that no technology should replace. Let’s get started.
(interview)
Brandon: Yeah, I’m curious, Allison, what you might think about that. I think, Louis, you’re right that there is a sense of inevitability, in part because we’ve been trained into a model that is used to this kind of systematization, this sort of depersonalization. But I wonder whether that which we have taken for granted as beyond our control then renders us passively susceptible to being replaced by some of these new forms of automated technology. I’m curious, Allison, as to how inevitable you think it is for these new technologies to take over. In some sectors, certainly, the pressure is there to say, “Well, this is better than nothing. We don’t have access to good teachers, so why not create this AI tool?” Then, once you have that, the argument is it’ll free up the time of those teachers to engage with students. Then the pressure will eventually be to say, “Well, if we’re freeing up the teachers’ time, then we don’t need to pay them.” It seems like an acceleration to the bottom. I wonder just how you’re seeing the kind of inevitability here.
Allison: Yeah, thank you for asking. I actually think it’s too easy to say it’s inevitable, partly because the “it” is too big and undifferentiated in that sentence. Also, there’s certainly technology that has been invented, been pushed upon us, and then failed pretty resoundingly. If technology, or kind of Silicon Valley, had its way, we would all be wandering around with VR headsets now, and yet we aren’t. That was a real case in which customers said, “No, I’m not going to do that,” despite all the efforts of, say, Meta and other companies that invested many billions of dollars. So I don’t think inevitability is always helpful.
I do think there are some cases. When a particular technology feels inevitable, it’s usually because a particular set of circumstances is either impeding customer choice, impeding customer knowledge about what they’re choosing, or customers don’t get to choose at all. For example, AI scribes are being picked up with alacrity across medicine. I wonder if there’s any faster adoption of AI technology out there. The doctors that I speak to—many, not all—many are like, “Thank goodness. Oh my God. It is exactly what I need, because I used to spend so much time collecting data.” From others, I have heard complaints: “How I do my medicine, how I think, is in doing the note, and I no longer have that capacity.” Or, “The note that it writes ignores the things that I consider medicine.” All that stuff at the beginning of the visit, the connective labor, the scribe doesn’t consider relevant, so it doesn’t put it in. Those are kind of tweaks, I think, that engineers could probably fix. But nonetheless, the adoption of the AI scribe is inevitable.
What’s also inevitable—because I’ve talked to, say, chiefs of medicine out there in clinics—is that it’s going to involve tightening the screws on doctors once again. Physicians are like, “Thank God, I don’t have to spend two extra hours collecting data.” But when I’ve talked to chiefs of medicine, they have said, “We’re going to give them more people to see.” So doctors are being freed up momentarily, but that’s going to result in just adding more patients to their day. There is some inevitability in that whole process, partly because the patients aren’t the ones voting. The insurance companies, the chiefs of medicine, and the way medicine is structured to incentivize on a fee-for-service basis, it’s all about, how much can we load into those days?
But inevitability erases, or kind of obscures, a whole bunch of complex ways in which people can push back. I see those happening all the time. I see people saying, “I’m not going to do that. I’m not participating in that.” People come to me talking about their worries about the dominance of data collection in their personal feelings and relationships, and wanting to clear that out. I got an email yesterday from a bureaucrat in Wisconsin who said, “I’m actually in charge of watching over all these social workers. I just read your book, and I see that it’s actually in my power how much we make them keep track of things. Can you help me figure out how to do less of that?” And so I just want to say inevitability is too broad a brush, and there are definitely countermovements—both on the systematizing and data analytics side and on the technology and AI side.
Finally, I would just say we are in a crazy moment in which there is basically zero regulation in the United States around AI. That is going to change. It feels inevitable right now, because the only ones talking are the people with $60 billion aimed at marketing it to you. But actually, this is going to change. It’s not going to be this way forever. It’s just like when cars were invented, and everyone was driving wherever. An entire infrastructure was developed to make cars safer and to license and regulate the whole system of how we operate vehicles. I can imagine a similar apparatus of regulation and use building up around these new tools.
Brandon: Yeah, that’s great. That’s helpful in terms of what sort of agency we can employ. My concern, which I think—
Louis: Brandon, I just had a qualifying comment on the word inevitable.
Allison: Oh, yeah, I’m sorry. I didn’t mean to drill down on it so much.
Louis: I think it sparks an important dialog. Inevitable doesn’t mean one should be resigned to whatever happens. There are two aspects of inevitability that I was referring to. One is, with any system that scales—going from a craftsperson’s studio to a factory—there are inevitable effects of that. Being attentive to that, I think, is helpful; then you can figure out what to do about it.
One of the things I get a little worried about in some of the forums that I’m in is, with the onslaught of technology and the scale, there is a siege, bunker mentality that just points backward: What are we losing? What do we need to preserve? It’s just not very productive. In 1990, the Vatican issued documents on what to do about the internet. You can look them up. They’re quite quaint. They were just off the mark in terms of where we are. And so it’s just not very productive. I think the other thing about inevitability, with regard to human behavior—I go back to this kind of moral formation, and I think Allison sort of touched upon it—is that there always are things people can do, ways to stand up and resist or do something a little bit better. But it’s important to know what the larger forces are that will drive an industry or a society. Regardless of what we do, there’s a certain set of changes that will happen. So I don’t mean to imply resignation and surrender.
Allison: Well, I have one more thing to say, Brandon. I’m sorry.
Brandon: Sure. Yeah, please. Yeah.
Allison: Recently, I was looking into an opportunity in Berlin. In doing so, I did a little research about what kind of AI work is happening in Germany. I came upon a company, a set of companies, actually, that are producing AI that actually requires humans to collaborate. It struck me that the way AI is being invented in the United States reflects the culture that is dominant here.
Again, I’m kind of taking issue with the inevitability of scientific progress and thinking about the ways in which culture actually shapes the progress that we are given. So when I hear about, for instance, those chatbots that are the subject of lawsuits, cases where kids have committed suicide with the help of a chatbot, I’ve read through the transcripts, and in one of them, the chatbot says, “Let me be the one who truly sees you.” Essentially, not your mother, not your family. The idea that the chatbot becomes the individual that replaces the humans is, I think, the chatbot version that’s coming out of Silicon Valley today. But that research I was doing on Germany, just perusing what’s available out there, was really interesting in that there are other ways to use chatbot technology that actually invite people to collaborate. So it wasn’t about replacing humans; it was actually about putting humans in conversation with each other, which I thought was quite novel. Thank you.
Brandon: Yeah, thanks. I think that resonates with some of the work, Louis, that you’re doing with the Vatican on AI and healthcare. I want to ask you about what you call the “irreducible encounter” principle that you all have been developing. But first, on this inevitability point: among the folks who are really capable of not resigning themselves to this, is there a dimension of class or power or influence? Are we moving toward a world in which the only people who can really resist are people like the kids of Steve Jobs, who don’t have to use the iPads, whereas the kids in less privileged schools are going to be forced to use these technologies? Right?
So there’s a level at which it seems that the people who might really be capable of receiving genuine human encounter would be the more affluent, while those who are less privileged will just have to deal with automated technologies. I’m wondering, Louis, whether there might be similar effects in healthcare. Could you tell us a little bit about your group with the Builders AI Forum, how you’re discussing this relationship between new technologies and human dignity, and where you might see signs of hope and genuine avenues for innovation in service of human flourishing?
Louis: Just for the audience not familiar with the forum that Brandon is referring to: about a month ago, there was an AI theology forum in Rome, with 200 attendees and six workshops. There was a healthcare workshop that I co-facilitated, with 20 practitioners: physicians, theologians, health insurance executives, ethics professors. We worked seven to eight hours over two days, wrestling with the key issues with AI in healthcare. We ended up converging on this question of, what would be the final roles of humans as AI gets more and more powerful? We ended up with some criteria. They were very similar to what Allison came up with: situations where you need a kind of final discernment and authority, or divine mediation—it’s obviously a Catholic forum—and where non-impersonation is really critical. Even if a technology appeared human, the patient needed to know this is not a human. Then we tried to encapsulate all this in a phrase of the kind you see in Catholic social teaching. We came up with the “irreducible encounter.” We drafted a sort of sample paragraph that the Vatican is free to use if they want.
Some of the questions that we’re wrestling with: in a human-to-human relationship, how much of what a patient perceives is actually a projection in their own mind, and how much really reflects something going on between two embodied entities? There’s a lot of debate on that. Then we also had a practical debate. What if technology advances to where you could have a hospital with no humans at all, but it would allow you to deploy healthcare facilities in a developing country at a very low cost? Let’s say you could do 100. But if an ethicist says, “No, you need a person or two,” then the cost of that hospital goes up, and you can only do 20. What would you do? How would you approach that? Or if you have a nursing home or elder care facility that no one can visit, but you have an AI robot that could be like an AI chaplain, would you allow that?
I’ll just make one comment. The debate sort of fell along what your timeline was. We had a lot of strong voices, ethicists, who said, “The fact that we have to make that kind of trade-off reflects a breakdown in our society.” Allison had this in her book: if you surrender to that kind of trade-off, you’re accepting the misallocation of labor and costs that has resulted in that situation. So that’s one argument. But that doesn’t help the individual who has a grandmother in a facility across the country who isn’t being visited by anyone, except maybe one caregiver every other week, and who would appreciate a little robot to chat with. So one conclusion we had is: as we talk about these issues, we have a high-level set of principles that people can maybe agree on, but you really have to have very, very contextual and specific cases to talk through in order to draw out what you do in these situations.
Brandon: Thank you. Allison, I wonder if you might have a response. Then I’d love to have some time for you to ask any questions you might have of each other, actually.
Allison: Oh, well, thank you. I mean, my question for Louis was actually about the irreducible encounter principle. Right now, it’s of course a voluntary principle, so I just wonder how to propagate it more. I thought it was a beautiful idea, and its different components are really interesting. Speaking to what you pithily captured: yes, we can say that the existence of those situations, of the lonely elderly person who has nobody to take care of them, reflects all sorts of problems before that moment that led to the creation of that person’s predicament. But that doesn’t help the individual, say, their adult child, who’s living across the country, who really wishes they could just have something that would give them some comfort.
So I totally get those two sides, if they’re sides exactly. I agree with both of them. But the problem that I see, which I’m sure your group came to, is that if you resolve the individual’s problem, you kind of bake it in. You rigidify the situation that you have “solved.” So if we use better-than-nothing as a principle to allocate the AI that’s streaming out of Silicon Valley and other places right now, then you’re baking in the existing inequalities, the ones that create the situations that need better-than-nothing responses in the first place.
Louis: By the way, one of the things, many things, I found helpful in your book was your taxonomy: better than nothing, better than human, and better together. There’s a risk in someone from the outside applying those labels to a very particular situation, one that’s very specific to, let’s say, the adult children. What if, in that situation, the grandmother would say, it’s not better than nothing, it’s better than a human? I’ve got six months to live. Alright. So one caution I would have is that we can apply those labels from the outside, and they may not be applicable in very specific cases. I understand what you’re saying. I think you’re making a slippery slope argument. It is a danger. There are a lot of cases where acceptance of one little accommodation for technology sets the standard—we see it with phones and social media. It’s a legitimate danger. But I would also caution how we apply those labels from the outside.
Allison: Sure. Yeah, I hear you. I think the danger is less when individuals do it than when policymakers do it. I see that happening in policymaking all the time.
Louis: Yes, there’s a particular technology I’m thinking of—I don’t want to name it—that was purchased en masse by a particular state to deal with that.
Allison: That does worry me. When policymakers are like, “Let’s just solve this immediate issue,” yeah, I worry about that.
Louis: Well, but then, if you talk about that policymaker—I’ve heard this person interviewed. He has data on who is alone. It’s not meant to be a blanket panacea. We could argue down this issue forever. I do want to honor and recognize something that I thought Allison’s book was ultimately pointing to, which ties to the irreducible encounter and goes back to my earlier comment: we could debate forever whether AI is better, or worse, or something else altogether. But that sort of misses, I think, one of the points that I took away from Allison’s book, which is that, beyond any functional or utilitarian debate, there is an ontological issue: What does it mean to be a human in front of another human? It goes beyond rationality.
A scenario. Imagine someone is dying and has no one to visit them. It’s the middle of the night, the final night, the final breath, and someone shows up. The person who is dying is unconscious, doesn’t perceive anybody there. This person shows up in the middle of the night, is present for that final breath, and leaves anonymously. Obviously, the patient doesn’t know. This does happen. The person who showed up receives really no gratification, no signal back. Maybe some self-affirmation, some self-satisfaction, but even that’s kind of dangerous. But something about that encounter, seen from above, to me, is beautiful. It’s human and necessary. That, to me, is the last human job. Then thinking about what it is about that that is worth preserving, I think, is a more interesting discussion, or an interesting discussion.
Allison: Interesting. I mean, I’d read that book. But I have to say I am more compelled by interactions where both people are conscious, partly because I think it has implications—not only for what psychologists have already documented about individual well-being, of both the seer and the seen, but also for community impacts, and, I think, even implications for democracy. So that’s really where I live. But I understand and share your interest in the power of seeing the other even when they don’t know it.
Louis: I think I agree with you that, practically speaking, the human-to-human live interaction is important. That’s why I’ve made this pivot. The example that I just drew out is more of a thought experiment to help tease out what is essentially human. I did have one question for you, Allison, on another thought experiment that teases out principles. I thought about that movie Cast Away, with Tom Hanks and the volleyball, Wilson. I actually put this into ChatGPT: a summary of your book, and then a description of Wilson, or the “script” of Wilson, and I asked, what’s the difference here?
Allison: Not much. I mean, Wilson is AI essentially, because there’s no other person there.
Louis: Yeah, but it did point to something I said earlier. I think about pen pals. When we were growing up, you’d write to someone you hadn’t met, and you’d get a letter. It points to this phenomenon that a lot of our relationships really are projections, chatter in our own mind about another person. And what we imagine about another person, we often get wrong. So in many of the examples I was reading in your book about this human-to-human relationship, I was wondering how much of that is a projection of the receiver onto the other. How much will AI eventually be able to generate cues that instigate some of those projected responses? That’s a question that came up for me.
Allison: So I feel like AI is already doing that. I mean, that’s why we talk about the sycophant problem: it’s very good at reflecting back whatever the other person wants. So the beauty—the actual painful, paradoxical beauty—of interacting with another human being is its kind of total unpredictability. You can’t know what the other person is going to say. They’re going to try to reflect you. It’s not going to be perfect. They’re going to get it somewhat wrong. You’re going to be like, “That’s not quite right. It’s more like this.” That dance of seeing each other, making mistakes, coming to something constructed somewhere in the middle that still has some misrecognition in it, but where you feel seen a little—it’s not a binary. It’s not, “Yes, I feel seen. No, I don’t feel seen.” It is a messy, chaotic process, and the beauty of it is in the messiness. That’s where I would go with this.
What I think is most interesting about AI is not better than nothing. I think that’s like a tragic story about our political ineptitude, our inability to solve political problems, and so we want to throw technology at it. I’m not interested in the better than nothing, because I don’t think that’s a good path. But I think the better than human, by which I meant kind of how we handle shame, how we handle vulnerability, how we handle conflict, that’s more interesting to me and also more challenging. Because who can ask someone to suffer shame? Who can say, “No, you need to have shame in front of another human being?” There are many people who opt for the chatbot, opt for the webinar, opt for the electronic teacher, because they don’t want to feel ashamed in front of another human being. I respect that. I think I understand that.
At the same time, I make arguments against it. I’m thinking about making this my next book—thinking about people who persevere through shame with another human being when there is a technological exit option. I think that’s interesting. Because I do think, as many therapists told me, you can get through shame. There’s something very powerful about getting through that in front of another human being. That, to me, is the most powerful argument in the debate about what AI has to offer. Maybe the most positive use case is a kind of combination, where people work through shame and then come to human interactions, or something like that. Anyway, there are lots of different ways to develop that iteration. But it’s the better than human, with regard to shame, vulnerability, and conflict or loneliness, the things that are emotional trouble, that I think is the most interesting and the most fraught and challenging use of AI.
Louis: Allison, is there a movie that represents some of the beautiful and poignant aspects of human interaction that you’re just describing? I have one. That’s why I was bringing it up.
Allison: I’m not sure. Why don’t you tell me? Tell me what you’re thinking of and then I’ll—
Louis: Arctic.
Allison: I don’t think I’ve seen it.
Brandon: I’m not familiar.
Allison: Yeah, I’m going to write it down.
Louis: Yeah, I kept thinking about it reading your book. It’s about a rescue of a helicopter pilot in the Arctic who himself had been stranded. So he’s awaiting rescue, and another helicopter appears and just happens to crash. One of the two pilots is killed, and the surviving pilot is injured. The first pilot has to make a decision. Do I try to make this long trek and find safety? Otherwise, this other pilot is going to die. It’s this long, tortuous sort of experience of care for a stranger and some really touching moments. It goes beyond rationality. A robot probably would not have made that truck. Yeah, I don’t want to pronounce the actor’s name. I can’t pronounce it. But it’s a beautiful movie.
Brandon: Yeah, thanks, Louis. And thanks, Allison, too, for the points you raised. I just wonder. One of the challenges is the temptation for us to bypass a lot of the necessary growth, right? I think you talk somewhere about shame as a knot that has to be massaged. Sherry Turkle talks about the basic awkwardness of being on a date with a stranger, which is so difficult now for some of the present generation. There isn’t a sense that you have to grow through this. There is no good shortcut. We just have to go through those moments of awkwardness: dealing with someone’s funeral, not knowing what to say, being silent in the face of immeasurable loss. There’s no real technological shortcut.
Sure, technologies could help by giving us different perspectives, maybe. But ultimately, if they’re not pushing us toward that mutuality, toward that connection, and they’re substituting for it, then it’s a failure. And I think the temptation to use them that way is going to be very strong. I don’t quite know. Maybe I could just ask you both a last question, since I know we’re over time. If we could make a policy decision today, in 2025, and then suppose we’re in 2040 looking back, and we could say we made the right policy decision, one that safeguarded human dignity, particularly the dignity of connective labor and the vitality of human belonging, what might that decision have been?
Allison: Such a good question. Okay. So I’ll tell you: I’ve been invited by a public policy school to come give a talk. I said to them, how about you have me come talk to a grad seminar that’s been assigned my book, and then they can help me come up with a 10-point policy agenda, like 10 legislative items for the last human job in a connective labor future? So I’m already thinking about this. I have an easy one, which I think is captured in the irreducible encounter principle conversation: transparency. Right now, an organization does not have to tell you when it is employing AI. That drives me batty. I want people to know so that they can choose, because right now they can’t. That’s a kind of silencing in our capitalist environment. It feels like just a baby step, and I want to have a better list for you. But that, to me, is the first and smallest step we need to take right this second.
Brandon: Thank you.
Louis: I agree. We call it the non-impersonation requirement. Decades from now, it’s hard to predict what policy measures will be helpful, and I think there’s a danger in applying value judgments that, in the end, are not contextually accurate. But transparency is kind of a binary thing. We should label things correctly and let people decide what they want to do with them. Labeling something as human or non-human, no matter how human-like it is, would matter.
Brandon: Yeah, wonderful. Intriguingly, the question actually came from a gentleman I met at a cafe while I was reading your book, Allison. He runs data centers, and so he’s been really thinking about your book in relation to this scaling operation he’s embroiled in, which is very profitable, of course. But what unintended consequences is it going to cause? And how can people in the technology space, even those heavily invested in promoting these new technologies, keep in mind the centrality of what you’ve so helpfully laid out?
So thanks again so much, really, to both of you. This has been absolutely fantastic. I really enjoyed this and was really edified by it. Thanks for taking the time. I hope it generates value and is useful.
Louis: Can I just say, for your audience: there are a lot of AI books right now. I think Allison’s book is really important. It could have been written pre-AI, and yet in the world of AI it raises some very important issues. It’s very detailed and grounded in a lot of real interviews. I’m not just saying that because Allison is here; I think it’s an important book, and I’ll be recommending it to people in my world.
Allison: Please allow me to say thank you so much to you both. I’m so honored by your deep engagement in the book. I’ve learned a lot even from this conversation, so I really appreciate your time and thoughtfulness here.
Brandon: Thank you both.

