Does your chatbot know too much? Think twice before you tell your AI companion everything.
17 Nov 2025 • 4 min. read

In the movie “Her,” the protagonist strikes up an ultimately doomed romantic relationship with a sophisticated AI system. At the time of the film’s release in 2013, such a scenario was firmly in the realm of science fiction. But with the emergence of generative AI (GenAI) and large language models (LLMs), it’s no longer such an outlandish prospect. In fact, “companion” apps are proliferating today.
However, inevitably there are risks associated with hooking up with an AI bot. How do you know your personal information won’t be shared with third parties? Or stolen by hackers? The answers to questions like these will help you determine whether it’s all worth the risk.
Looking for (digital) love
Companion apps meet a growing market demand. AI girlfriends and boyfriends harness the power of LLMs and natural language processing (NLP) to interact with their users in a conversational, highly personalized way. Titles like Character.AI, Nomi and Replika fill a psychological and sometimes romantic need for those who use them. It’s not hard to see why developers are keen to enter this space.
Even the big platforms are catching up. OpenAI recently said it will soon roll out “erotica for verified adults,” and may allow developers to create “mature” apps built on ChatGPT. Elon Musk’s xAI has also launched flirtatious AI companions in its Grok app.
Research published in July found that nearly three-quarters of teens have used AI companions, and half do so regularly. More worryingly, a third have chosen AI bots over humans for serious conversations, and a quarter have shared personal information with them.
That’s particularly concerning as cautionary tales begin to emerge. In October, researchers warned that two AI companion apps (Chattee Chat and GiMe Chat) had unwittingly exposed highly sensitive user information. A misconfigured Kafka broker instance left the apps’ streaming and content delivery systems with no access controls. That meant anyone who found the broker could have accessed over 600,000 user-submitted photos, IP addresses, and millions of intimate conversations belonging to over 400,000 users.
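To illustrate just how low the bar is, here is a minimal sketch (in Python, using the kafka-python library) of what “no access controls” means in practice for a Kafka broker reachable from the internet. The broker address and topic name below are hypothetical, invented for illustration; the point is that reading such data requires no credentials and no exploit, just a standard client.

```python
# Minimal sketch: reading from a hypothetical Kafka broker exposed to the
# internet with no authentication or ACLs configured. The host and topic
# names are made up for illustration only.
from kafka import KafkaConsumer

# No SASL or TLS settings are needed if the broker accepts plaintext
# connections, so "connecting" is trivial.
consumer = KafkaConsumer(
    bootstrap_servers="broker.example.com:9092",  # hypothetical exposed host
    auto_offset_reset="earliest",                 # read topics from the start
    consumer_timeout_ms=10_000,                   # stop iterating after 10s idle
)

# List every topic the broker serves, e.g. chat streams or media events.
print(consumer.topics())

# Subscribe to a hypothetical topic and dump its messages.
consumer.subscribe(["user-chat-messages"])
for message in consumer:
    print(message.topic, message.value[:100])
```

A correctly deployed broker would require authentication (e.g. SASL over TLS) and deny unauthorized topic access via ACLs, so the connection above would simply fail.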
The risks of hooking up with a bot
Opportunistic threat actors may sense a new way to make money. The information shared by victims in romantic conversations with their AI companion is ripe for blackmail. Images, videos and audio could be fed into deepfake tools for use in sextortion scams, for example. Or personal information could be sold on the dark web for use in follow-on identity fraud. Depending on the security posture of the app, hackers may also be able to get hold of credit card information stored for in-app purchases. According to Cybernews, some users spend thousands of dollars on such purchases.
As the above example shows, revenue generation rather than cybersecurity is often the priority for AI app developers. That means threat actors may be able to find vulnerabilities or misconfigurations to exploit. They might even try their hand at creating lookalike companion apps that hide malicious information-stealing code, or that manipulate users into divulging sensitive details which can be used for fraud or blackmail.
Even if your app is relatively secure, it may still pose a privacy risk. Some developers collect as much information on their users as possible so they can sell it on to third-party advertisers. Opaque privacy policies may make it difficult to understand if, or how, your data is protected. You may also find that the information and conversations you share with your companion are used to train or fine-tune the underlying LLM, which further exacerbates the privacy and security risks.
How to keep your family safe
Whether you’re using an AI companion app yourself or are concerned about your children doing so, the advice is the same. Assume the AI has no security or privacy guardrails built in. And do not share any personal or financial information with it that you wouldn’t be comfortable sharing with a stranger. This includes potentially embarrassing or revealing photos/videos.
Even better, if you or your kids want to try out one of these apps, do your research ahead of time to find the ones that offer the best security and privacy protections. That will mean reading the privacy policies to understand how they use and/or share your data. Avoid any that are not explicit about intended usage, or that admit to selling user data.
Once you’ve found your app, be sure to switch on security features like two-factor authentication. This will help prevent account takeovers using stolen or brute-forced passwords. And explore its privacy settings to dial up protections. For example, there may be an option to opt out of having your conversations saved for model training.
If you’re worried about the security, privacy and psychological implications of your kids using these tools, start a dialog with them to find out more. Remind them of the risks of oversharing, and emphasize that these apps are tools built for profit that don’t have their users’ best interests at heart. If you’re concerned about the impact they may be having on your children, it may be necessary to put limits on screen time and usage – potentially enforced via parental monitoring controls/apps.
It goes without saying that you shouldn’t allow any AI companion apps whose age verification and content moderation policies do not offer sufficient protections for your children.
It remains to be seen whether regulators will step in to enforce stricter rules around what developers can and can’t do in this realm. Romance bots operate in something of a grey area at present, although the EU’s forthcoming Digital Fairness Act could prohibit excessively addictive and personalized experiences.
Until developers and regulators catch up, it may be better not to treat AI companions as confidants or emotional crutches.

