A new Valentine’s Day-themed study found that chatbots can pose a privacy threat beneath the veneer of AI romance.
The Mozilla Foundation reviewed 11 romantic chatbots and found all of them untrustworthy, placing them in the lowest category of products it reviews for privacy.
In the report, researchers wrote that while romantic chatbots are marketed as tools to enhance your mental health and well-being, they instead deliver dependency, loneliness, and toxicity, all while prying for as much of your data as possible.
The review found that 73% of the apps don’t share how they manage security vulnerabilities, 45% allow weak passwords, and all but one (Eva AI Chat Bot & Soulmate) share or sell personal information.
The Mozilla Foundation also states that CrushOn.AI can collect information about users’ sexual health, prescription medications, and gender-affirming care.
Many of the apps feature chatbots whose character descriptions depict violence or underage abuse, while others warn that their bots could be dangerous or hostile.
Some apps have encouraged dangerous behavior in the past, including a suicide (Chai AI) and an attempt to assassinate Queen Elizabeth II (Replika).
Eva AI Chat Bot & Soulmate, the one exception, says it has never sold user data, does not support advertising, and uses users’ data only to improve conversations.
Those who find the prospect of AI romance irresistible are advised to take several precautions: don’t say anything you wouldn’t want a family member or colleague to read, use a strong password, opt out of AI training, and limit the app’s access to other features on your phone.
In the report, the authors conclude that cool new technologies shouldn’t come at the expense of your safety or privacy.