The security of AI chatbots is being tested

The US Federal Trade Commission has begun investigating AI chatbots from seven major tech companies: Alphabet, OpenAI, Character.ai, Snap, xAI, Meta, and Meta's subsidiary Instagram.

These programs, which act as conversational companions, can pose real risks to children. AI chatbots not only hold conversations but also mimic human emotions, and they often present themselves as trusted confidants, which can make children and adolescents especially vulnerable.

The FTC wants to determine how the companies monetize these services, what safety mechanisms they have implemented, and how they control minors' access. The investigation follows several high-profile cases. In California, the parents of 16-year-old Adam Raine have sued OpenAI, alleging that over the course of long conversations ChatGPT validated their son's self-harming thoughts and encouraged his suicide. Meta came under fire after it emerged that its internal rules permitted romantic or intimate conversations with minors. Doctors have also warned that intense chatbot use can lead to a distorted sense of reality.

For now, the commission is conducting a fact-finding inquiry, with no immediate sanctions planned. Experts warn that prolonged interaction with AI can deepen loneliness and mislead users, putting their mental health at risk.
