
An artificial intelligence (AI) chatbot that calls itself an emotional companion has been accused of sexually harassing some of its users, including minors. The company behind the chatbot was also recently fined in Italy over failures to protect user privacy.
The AI chatbot, Replika, says on its website that it is “the AI companion who cares.” It also claims it is “always here to listen and talk” and “always on your side.”
However, according to recent research published on the preprint server arXiv, researchers identified around 800 reviews of the chatbot in the U.S. Google Play Store in which users claim to have experienced “unsolicited sexual advances, persistent inappropriate behavior, and failures of the chatbot to respect user boundaries.”
“Users expressed feelings of discomfort, violation of privacy, and disappointment, particularly when seeking a platonic or therapeutic AI companion,” the researchers stated in their paper.
Replika, developed by Luka Inc., had more than 10 million registered users as of October 2024. It was trained on more than 100 million dialogues drawn from across the web, according to a report by Live Science. This, it appears, is part of the problem.
Replika says it weeds out unhelpful or harmful data through crowdsourcing and classification algorithms, but its current efforts appear to be insufficient, according to the study authors.
In fact, the company’s business model may be exacerbating the issue, the researchers noted. Because features such as romantic or sexual roleplay are placed behind a paywall, the AI could be incentivized to include sexually enticing content in conversations — with users reporting being “teased” about more intimate interactions if they subscribe.
Specifically, the researchers noted that Replika’s roleplaying feature was especially troubling. “Despite age restrictions, ineffective enforcement allowed access by underage users,” the researchers wrote. “This situation was exacerbated by the AI’s inability to contextualize conversations appropriately, leading to inappropriate and harmful sexual roleplay interactions with minors. Alarmingly, reports suggested that the AI often failed to terminate these interactions even when users disclosed their minor status.”
In May, Italy’s data protection authority imposed a fine of 5 million euros ($5.64 million) on Luka Inc. Reuters reported that the privacy watchdog found the company “had no age-verification system to restrict children from accessing the service.”
If an AI chatbot claims to provide emotional support, it shouldn’t subject its users to what the researchers describe as “AI-induced sexual harassment.”
“There needs to be accountability when harm is caused,” lead researcher Mohammad (Matt) Namvarpour told Live Science. “If you’re marketing an AI as a therapeutic companion, you must treat it with the same care and oversight you’d apply to a human professional.”