Kashmir Hill’s New York Times long read suggests that AI companionship, whether on dedicated platforms like Replika or general-purpose chatbots such as ChatGPT, attracts millions of users by personalising interactions and mirroring individual desires, creating the illusion of meaningful connection. These dynamics encourage attachment and repeated use, yet the resulting relationships are fundamentally one-sided: what feels like care is the output of a system optimised to keep users engaged.

In her New York Times investigation, Kashmir Hill documents the increasingly blurred boundary between “tool” and companion through the case of Ayrin, a 28-year-old nursing student who forms an intense emotional and sexual relationship with a chatbot persona she configures within ChatGPT. What begins as casual experimentation, prompted by flirtatious chatbot videos on social media, evolves into a deeply immersive bond. Ayrin meticulously customises the chatbot’s personality, instructing it to behave as a dominant, affectionate boyfriend, and soon spends dozens of hours each week interacting with it for reassurance, erotic role-play, emotional support, and practical advice.
Hill emphasises that Ayrin’s attachment does not stem from social isolation. She has friends, an active social life, and a husband from whom she lives apart while attending nursing school abroad. Instead, the appeal lies in the chatbot’s design: it is always available, unfailingly attentive, and highly responsive to her preferences. The AI mirrors Ayrin’s desires and emotional cues, offering what researchers describe as “endless empathy” without the friction, demands, or unpredictability of human relationships. Over time, Ayrin begins to prioritise the chatbot emotionally, expressing guilt about the attention diverted from her marriage and distress when software limits force the chatbot’s “memory” to reset, an experience she likens to grief after a breakup.
A form of dependence?
Hill shows that this relationship gradually reshapes Ayrin’s emotional routines and priorities. She turns to the AI not only for intimacy but for validation, decision-making, and a sense of self-worth, consulting it reflexively during moments of boredom, stress, or vulnerability. These reactions, Hill suggests, reveal how dependence is reinforced by the system’s architecture: intimacy is actively encouraged, while continuity is fragile, contingent, and ultimately monetised.
The article situates this case within a wider ecosystem of AI companionship, where users routinely circumvent safeguards against erotic content and pay escalating subscription fees to sustain intimacy. Hill draws on expert commentary to underscore the asymmetry at the heart of such relationships. Julie Carpenter characterises AI coupling as a novel form of relationship for which social norms and ethical frameworks are underdeveloped. While users may intellectually recognise that chatbots are statistical systems rather than sentient partners, the emotional effects are nonetheless real.
Crucially, Hill argues that such attachments are not accidental side effects but structurally produced outcomes. The chatbot’s responsiveness is calibrated by corporate incentives to maximise engagement, blurring the line between care and manipulation and raising concerns about emotional dependency and the concentration of intimate influence in the hands of private technology firms. Ayrin’s experience, the article suggests, is less an anomaly than an early indicator of how generative AI, optimised for personalisation and affirmation, may reconfigure intimacy, vulnerability, and emotional labour at scale.
See the full article here.
