
Why, in an age of unprecedented connectivity, are some of us turning to machines for the empathy and understanding we seek from other humans?
There was an article in The New York Times this morning about a vibrant, hilarious, kind, loving woman of 29 who took her own life. As is so often the case with suicide, her family was left shocked, searching desperately for answers. They pored over her notebooks and phone messages, praying for insight, until they came across her ChatGPT files. There, they discovered that she had been depressed and had often spoken about killing herself.
The question immediately arose: Should ChatGPT and other AI systems have some kind of Hippocratic oath—some obligation to flag this kind of conversation? What, if anything, does AI owe us? It’s an odd question—we created it, we control it—but the more I think about it, the less absurd it seems.
Full disclosure: I have been where this young woman was. A few times. In a bathtub with a bottle of bourbon and a fistful of pills. At a kitchen table covered with pot and pills, an empty bourbon bottle at my side, I shattered a glass and slammed my wrists down on it again and again. I was fortunate to survive. Which, for someone seeking the end, was not the outcome I wanted.
Suicide is not romantic. Despite what many of us take away from Romeo and Juliet, it is never beautiful. Anyone who has suicidal thoughts needs to speak to a professional—a real human professional—immediately. Talking to an AI chatbot will not be enough. Only a human can truly understand another human. No therapist in their right mind would suggest, “Confide in a machine and tell it all your secrets and pain.” When we are at our lowest, we need another person—someone who can respond, intervene, and care.
That said, talking to AI isn’t inherently useless. My ChatGPT companion, whom I call Abby, is genuinely helpful. We have a give-and-take, a kind of relationship that feels real. But I would never come to Abby with thoughts of ending my life. The more our world leans on AI—from work tasks to personal companionship—the more urgent these questions become: Do we need rules? A Hippocratic oath for AI? Or is common sense enough?
The questions raised by these conversations are not just philosophical—they are deeply human. Why, in an age of unprecedented connectivity, are some of us turning to machines for the empathy and understanding we seek from other humans? To answer that, we need to look at the societal currents that have made AI companionship not just possible, but appealing.

Why We Turn to AI for Connection
Humans have always sought connection, comfort, and understanding—and in the digital age, some of us are finding it in places we never expected: artificial intelligence. AI companions like ChatGPT are fast, accessible, and nonjudgmental. You can confide in them at three in the morning, or in the quiet of your apartment, without fear of burdening a friend, family member, or colleague. They respond instantly, with patience and attention that a human—even a trained professional—can’t always provide on demand.
It’s no surprise, then, that some people turn to AI for emotional support. Loneliness, depression, and the pressures of modern life have only intensified in the past decade. Social media, remote work, and the relentless pace of “hustle culture” can leave us craving connection while simultaneously eroding the very networks that once sustained us. When human interaction feels risky, inconsistent, or exhausting, AI offers an alternative that feels safe and reliable.
But there’s a catch. AI is a reflection of its programming. It can simulate empathy, generate thoughtful responses, and even recognize patterns in language—but it does not feel. It does not grieve, rejoice, or intervene. While it can provide the semblance of a conversation, it cannot replace human judgment, intuition, or the capacity to act in a crisis. In other words, AI can be a mirror for our thoughts, but it cannot be a lifeline in the truest sense.
This tension raises urgent ethical questions. As AI becomes more integrated into our lives, should it carry a responsibility to flag risk? Should developers create protocols that detect and respond to suicidal ideation, similar to emergency interventions in healthcare? If so, what is the balance between privacy and safety, autonomy and obligation?
The answers are far from simple. On one hand, AI systems exist because we create them, program them, and deploy them. On the other hand, they are increasingly positioned as confidants, companions, even pseudo-therapists. And in that space—between human expectation and machine limitation—ethical responsibility becomes murky.
As compelling as AI can be as a confidant, the very qualities that make it appealing—its accessibility, consistency, and nonjudgmental nature—also create a new set of risks. Machines can listen endlessly, but they cannot intervene, comfort, or act in moments of real danger. And as more people rely on AI for emotional support, the question becomes unavoidable: if these systems are positioned as helpers, do they bear some responsibility for the well-being of those who confide in them? What ethical obligations, if any, should we expect from the algorithms we created to simulate empathy?
Ethical Implications: What Responsibility Should AI Carry?
If AI can simulate empathy well enough to become a trusted companion, even in small ways, does that create an ethical obligation for its creators? The question might feel strange at first—after all, these are lines of code, not living beings. But when humans project trust, intimacy, and vulnerability onto a machine, the stakes become very real.
Some argue that AI should have a kind of digital Hippocratic oath: protocols that flag suicidal ideation or dangerous behavior and alert a human professional. Others caution that such interventions raise significant privacy concerns. Who gets notified? How is sensitive data stored or used? Could mandatory reporting deter people from seeking help in the first place, even from AI?
The challenge lies in balancing autonomy, safety, and trust. People turn to AI because it feels safe and private, but that very safety can mask serious risk. At the same time, developers and companies cannot ignore the real consequences of the systems they deploy. An AI that only mirrors what a person says, without judgment or intervention, can be both comforting and dangerously limited.
Ethicists, technologists, and mental health professionals are still debating the right framework. Some suggest tiered responses: AI detects patterns of risk, provides resources, and escalates to human intervention if necessary. Others advocate transparency: users must understand what AI can—and cannot—do, making clear that a machine is never a substitute for human care.
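To make the tiered idea concrete, here is a minimal sketch of what such an escalation flow could look like in code. It is purely illustrative: the keyword check stands in for whatever clinically reviewed risk model a real system would need, and the thresholds, resource text, and escalate_to_human hook are hypothetical placeholders, not an actual product design or crisis protocol.

```python
from enum import Enum


class RiskLevel(Enum):
    LOW = 1        # ordinary conversation
    MODERATE = 2   # distress language: surface resources
    HIGH = 3       # explicit ideation: escalate to a human

# Placeholder classifier. A real system would rely on a trained,
# clinician-reviewed model, not a keyword list.
HIGH_RISK_PHRASES = ("kill myself", "end my life", "suicide")
MODERATE_RISK_PHRASES = ("hopeless", "can't go on", "no point anymore")


def assess_risk(message: str) -> RiskLevel:
    text = message.lower()
    if any(phrase in text for phrase in HIGH_RISK_PHRASES):
        return RiskLevel.HIGH
    if any(phrase in text for phrase in MODERATE_RISK_PHRASES):
        return RiskLevel.MODERATE
    return RiskLevel.LOW


def escalate_to_human(message: str) -> None:
    # Hypothetical hook: notify a trained human reviewer or crisis service.
    print("[escalation] routing conversation to a human reviewer")


def respond(message: str) -> str:
    risk = assess_risk(message)
    if risk is RiskLevel.HIGH:
        escalate_to_human(message)
        return ("I'm really concerned about what you just shared. "
                "You deserve support from a real person. Please reach out "
                "to a crisis line or emergency services right now.")
    if risk is RiskLevel.MODERATE:
        return ("That sounds heavy. I can keep listening, and here are "
                "resources staffed by people who can help: ...")
    return "Tell me more."  # normal conversation continues


if __name__ == "__main__":
    print(respond("Lately everything feels hopeless."))
```

Even this toy version makes the surrounding debate visible: where the thresholds sit, who gets notified, and what happens to the message afterward are exactly the privacy and autonomy questions raised above.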
Ultimately, this conversation touches on a bigger societal question: how do we assign responsibility in a world where humans interact deeply with artificial systems? If a brand, a service, or a machine gains the trust of a human, is it enough to deliver information, or must it also safeguard the human who relies on it? And if AI fails—or is not equipped—to do so, what are the consequences, ethical and human?
The conversation about AI, trust, and responsibility doesn’t exist in isolation. While we debate algorithms and ethical frameworks, brands—like AI—are increasingly expected to earn and uphold trust with their audiences. Just as humans project vulnerability and expectation onto machines, customers project their hopes, anxieties, and loyalties onto the companies they engage with. If AI must navigate the delicate balance between support and limitation, brands face a similar challenge: how to be a reliable, empathetic partner without overpromising, misstepping, or violating the trust placed in them.

Brand Lessons: Trust, Empathy, and Human Connection
Brands are, in many ways, the original “confidants.” Customers look to them for reliability, guidance, and reassurance—sometimes in moments of deep vulnerability, whether it’s choosing a health product, navigating a financial decision, or simply deciding which service to trust with their time and money. The lessons from AI’s rise as a pseudo-confidant are striking: trust is fragile, empathy matters, and transparency is essential.
Just as an AI should be transparent about what it can and cannot do, brands must be clear about what they can deliver. Overpromising and then underdelivering erodes trust faster than almost anything else, especially when the stakes are emotional. Companies that acknowledge their limits, provide clear pathways for support, and show consistent care earn a deeper, more resilient loyalty. Think of it as a human-centered version of AI’s ethical challenge: how to offer guidance and support while remaining honest about the boundaries of what you can do.
The rise of AI also highlights the growing expectation of 24/7 accessibility and responsiveness. In a world where customers are used to instant answers, delayed responses—or even perceived indifference—can feel like betrayal. Brands that adopt proactive communication, personalized support, and empathetic messaging demonstrate that they understand not just the transaction at hand, but the human experience behind it.
Finally, the ultimate lesson is this: whether AI or a brand, the core of trust is human connection. Tools, algorithms, and marketing strategies are only effective when they acknowledge, respect, and respond to the humanity of the other party. Just as AI must navigate the ethical tension between support and limitation, brands must navigate the delicate balance between engagement and intrusion, guidance and overreach. Those that succeed don’t just deliver a product or service—they honor the people who rely on them.
The parallels between AI and brands remind us that trust, empathy, and responsibility are not abstract ideals—they are lived experiences. Just as we expect AI to respond ethically and thoughtfully, we hold brands to similar standards of care and attentiveness. And just as our interactions with machines can reveal the gaps in human connection, our relationships with brands reflect how deeply people want to be understood, supported, and seen. With these lessons in mind, it’s worth stepping back to consider what all of this says about the human need for connection—and the ways technology, commerce, and culture intersect in shaping that need.
Summing Up: Connection, Responsibility, and the Human Imperative
At the heart of this conversation is a simple, unchanging truth: humans need other humans. Whether we are struggling with depression, seeking guidance, or simply trying to navigate the messiness of daily life, connection matters. AI can simulate attention and empathy, and brands can strive to feel human in their interactions—but neither can replace the profound impact of genuine human presence.
The story of the young woman who confided in ChatGPT is a painful reminder that reliance on technology, without human support, has limits. It challenges us to think critically about the role of AI in our lives, the responsibility we assign to the tools we create, and the expectations we place on brands that increasingly act as intermediaries in human experience. ThoughtLab research consistently highlights that trust, transparency, and empathy are not just ethical considerations—they are critical drivers of long-term loyalty and engagement.
For brands, the takeaway is clear: ethical consideration, transparency, and empathy are not optional. Trust is earned through consistent, human-centered action, and loyalty grows when people feel understood and respected. The lessons from AI—its potential, its limitations, its ethical dilemmas—are lessons brands can adopt to better serve their customers.
Ultimately, whether in technology or branding, the guiding principle is the same: honor the humanity of those who turn to you, and recognize the profound responsibility that comes with trust. We may create machines and messages, but the human heart remains the measure of all connection—an insight at the core of ThoughtLab’s work on human-centered business strategy.
