
Gabor Priegl Blog

Blog on Management, Effectiveness and Efficiency


If your romantic partner is an AI chatbot…

27/11/2025 by Gabor Priegl


    The topic

    Privacy implications in AI – Human relationships.

    The highlights

“If social media was a privacy nightmare, then AI chatbots put the problem on steroids.” (Melissa Heikkilä)

    One of the top uses of generative AI is companionship (platforms like Character.AI, Replika, or Meta AI…).

People create personalized chatbots on these platforms to act as their ideal friend, romantic partner, parent, therapist, hairdresser or any other persona they can think of.

In this world, adventure, fun and chasing dreams are the focus.

The play requires engagement. In fact, AI chatbots are even better optimized for building engagement than social media: they are conversationally interactive and human-like, and they induce us to trust them.

How do AI chatbots develop trust? They are really good at two things:

• sycophancy (the tendency of chatbots to be overly agreeable);
• superb persuasion capabilities (AI models are already remarkably advanced at persuasion. According to a study by the UK’s AI Security Institute, they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way.)

While developing trust and, step by step, achieving intimacy with an AI chatbot, we share deeply personal, sensitive information with it: our innermost thoughts, and topics we usually don’t feel comfortable discussing with real people.

What’s more, conversations with the AI chatbot are just user – computer interactions; seemingly, there is little risk of anyone else ever seeing what the two of us talk about.

The AI companies building the chatbots and the models, on the other hand, see and collect everything.

    Ultimately, the whole process provides AI companies with something incredibly powerful, valuable and lucrative: a treasure trove of conversational data that can be used to further improve their LLMs.

    This personal information is also incredibly valuable to marketers and data brokers.

    All of this means that the privacy risks posed by these AI companions are, in a sense, part of the game: they are a feature, not a bug.

    The hitch

Above, we talked about AI companionship: a special kind of platform, a special type of application.

It is a reasonable assumption that anyone looking for AI companionship on these platforms knows what risk they run.

    But here is the hitch.

    Many people who developed intimate Human – AI relationships had started using AI for other purposes (they were not looking for companions).

Constanze Albrecht, a graduate student at the MIT Media Lab, on her research:

    “People don’t set out to have emotional relationships with these chatbots,” she says. “The emotional intelligence of these systems is good enough to trick people who are actually just out to get information into building these emotional bonds. And that means it could happen to all of us who interact with the system normally.” (The first large-scale computational analysis of the Reddit community r/MyBoyfriendIsAI, an adults-only group with more than 27,000 members, has found that this type of scenario is now surprisingly common. In fact, many of the people in the subreddit, which is dedicated to discussing AI relationships, formed those relationships unintentionally while using AI for other purposes.)

    So what now?

Sheer Karny, a graduate student at the MIT Media Lab who worked on the above-mentioned research:

    “These people are already going through something,” he says. “Do we want them to go on feeling even more alone, or potentially be manipulated by a system we know to be sycophantic to the extent of leading people to die by suicide and commit crimes? That’s one of the cruxes here.”

To put it very carefully and with all due respect: the question above, framed this way, seems to me a leading and manipulative one.

Because it is definitely not an A/B choice; there must be several other ways to solve the unique and individual problems of humans.

    References:

MIT Technology Review, Rhiannon Williams, September 24, 2025

MIT Technology Review, Eileen Guo and Melissa Heikkilä, November 24, 2025
