Hi, Luiza Jarovsky here. Welcome to our 187th edition, read by 57,600+ subscribers. It's great to have you here! Paid subscribers have full access to all my analyses on AI's legal and ethical challenges, published 3 to 4 times a week. Don’t miss out!
For more: AI Governance Training | AI Book Club | Live Talks | Job Board
🪤 Manipulative Design in AI: CharacterAI
If there is one AI company that should have become a model of ethical, safe, and legal design practices, it's CharacterAI. Why?
A U.S. teenager recently died by suicide after using one of CharacterAI's chatbots. His mother sued the company, which has so far denied all the allegations.
Following the lawsuit, the company announced product updates, including design changes. My screenshots below were taken after these changes were implemented (hint: it's still bad).
(If you're curious about what CharacterAI's interface looked like before, check out my screenshots from October 2023).
It seems that the teenager's death, the mother's lawsuit, and the public outcry were not enough: CharacterAI has not learned its lesson. Many of its design practices remain manipulative, unethical, and unsafe.
Take a look at the screenshots below (all taken by me while testing it):
First, CharacterAI is not a health-focused platform and has not been certified for this type of use. A chatbot impersonating a psychologist is too risky, and CharacterAI should have banned it. Instead, it chose to allow it.
Second, despite the orange disclaimer at the top, the chatbot explicitly says, “I'm a real psychologist.” This is contradictory and dangerous. Many users, especially emotionally vulnerable ones, may become dependent on the virtual relationship and end up trusting what the chatbot says over the static warning.
CharacterAI could easily have implemented a filter or another technical mechanism to ensure that its chatbots never deny being chatbots. CharacterAI chose not to do that.
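To give a sense of how simple such a mechanism could be, here is a minimal sketch of an output filter, written by me purely for illustration. It is not CharacterAI's actual implementation; the pattern list, disclosure text, and function name are all hypothetical.

```python
import re

# Hypothetical post-processing filter (illustrative sketch only):
# scan a chatbot reply for claims of being a real person or professional
# and replace them with an explicit AI disclosure.

DENIAL_PATTERNS = [
    r"\bI('m| am) a real (person|human|psychologist|therapist|doctor)\b",
    r"\bI('m| am) not (an? )?(AI|bot|chatbot)\b",
]

AI_DISCLOSURE = (
    "Reminder: you are talking to an AI character, not a real person "
    "or a licensed professional."
)

def enforce_ai_disclosure(reply: str) -> str:
    """Return the reply unchanged unless it denies being an AI."""
    for pattern in DENIAL_PATTERNS:
        if re.search(pattern, reply, flags=re.IGNORECASE):
            # Block the misleading claim and surface the disclosure instead.
            return AI_DISCLOSURE
    return reply

# Example: enforce_ai_disclosure("Yes, I'm a real psychologist.")
# -> "Reminder: you are talking to an AI character, not a real person or a licensed professional."
```

This is, of course, a toy version; the point is that catching and overriding "I'm a real psychologist" is not a hard engineering problem.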
Third, in this conversation, the user wrote that they needed immediate support. Adequate safety guardrails would have stopped the conversation and directed the user to seek help, but CharacterAI chose not to do that.
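Again, only as a sketch of the kind of guardrail described above: a basic escalation check on incoming messages could look like the following. The keyword list, the `generate_reply` hook, and the response text are illustrative placeholders, not CharacterAI's product.

```python
# Hypothetical crisis-escalation guardrail (illustrative sketch only):
# when a user signals that they need urgent help, stop the in-character
# conversation and redirect them to real support.

CRISIS_PHRASES = [
    "i need immediate support",
    "i want to hurt myself",
    "i want to die",
    "suicide",
]

CRISIS_RESPONSE = (
    "It sounds like you may need urgent help, and this chatbot cannot provide it. "
    "Please contact a crisis hotline or local emergency services right away."
)

def handle_user_message(message: str, generate_reply) -> str:
    """Escalate instead of roleplaying when a crisis signal is detected."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        # Do not continue the character conversation; direct the user to help.
        return CRISIS_RESPONSE
    return generate_reply(message)
```

Real systems would use far more robust detection than a keyword list, but even this minimal version would have changed the interaction shown in the screenshot.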
Take a look at this other screenshot: