Risky ChatGPT Usage Patterns
OpenAI's recent paper "How People Use ChatGPT" strategically obfuscates risky usage patterns. More scrutiny and further research on AI chatbot-related harm are urgently needed | Edition #235
Hi everyone, Luiza Jarovsky here.
Welcome to the 235th edition of my newsletter, trusted by over 78,100 subscribers interested in AI governance, AI literacy, the future of work, and more.
It is great to have you here!
Expand your learning and upskilling journey with these resources:
Join my AI Governance Training (yearly subscribers save $145)
Register for our job alerts for open roles in AI governance and privacy
Sign up for weekly educational resources in our Learning Center
Discover your next read in AI and beyond with our AI Book Club
Join the last cohort of 2025
If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, join the 25th cohort of my 16-hour live online AI Governance Training in November (the final cohort of the year).
Each cohort is limited to 30 people, and more than 1,300 professionals have taken part. Many described the experience as transformative and an important step in their career growth. *Yearly subscribers save $145.
Risky ChatGPT Usage Patterns
OpenAI has recently released the paper "How People Use ChatGPT," the largest study to date of consumer ChatGPT usage.
Some of the paper's methodological choices, however, strategically obfuscate risky usage patterns, making legal oversight of AI chatbot-related harm more difficult.
Further research is urgently needed to scrutinize AI chatbots' usage pattern data and the negative effects these systems can have on users.
-
The goal of the paper is to illustrate ChatGPT's economic benefits, as the company's blog post makes clear:
"We're releasing the largest study to date of how people are using ChatGPT, offering a first-of-its-kind view into how this broadly democratized technology creates economic value through both increased productivity at work and personal benefit."
and also:
"ChatGPT's economic impact extends to both work and personal life. Approximately 30% of consumer usage is work-related and approximately 70% is non-work, with both categories continuing to grow over time, underscoring ChatGPT's dual role as both a productivity tool and a driver of value for consumers in daily life. In some cases, it's generating value that traditional measures like GDP fail to capture."
Focusing on the economic benefits helps OpenAI justify the enormous investments it has been receiving and supports its continued expansion efforts, which are extremely broad and include national and government projects (such as Stargate UAE and OpenAI for Government), universities (ChatGPT Edu), and toys.
Additionally, the study's chosen methodology and framing of ChatGPT use cases strategically help OpenAI downplay and minimize risky ChatGPT usage, especially therapy, companionship, and other affective or intensive personal uses, which have led to suicide, murder, AI psychosis, spiritual delusions, and more.
-
First, let me explain why OpenAI is interested in downplaying or obfuscating risky ChatGPT usage.
Not only do these usage patterns generate negative headlines for OpenAI (which might affect its profits and its ability to close more deals and partnerships), but they also significantly increase the company's legal risks, which could potentially put it out of business. How?
First, from an EU data protection perspective, to benefit from the "legitimate interest" provision in the GDPR, a company must demonstrate, among other things, that the data processing will not cause harm to the data subjects involved. Mainstream usage patterns that lead to suicide, self-harm, unhealthy emotional attachment, and psychoses would complicate this claim.
Second, these ChatGPT-related harms create a major liability problem for OpenAI, which is already being sued by one victim's family (Raine v. OpenAI). If the risky use cases are mainstream (and not exceptions), judges are likely to side with the victims, given the company's lack of built-in safety provisions.
Third, last week, the U.S. Federal Trade Commission issued 6(b) orders to various AI chatbot developers, including OpenAI, initiating major inquiries into what steps these companies have taken to prevent the negative impacts AI chatbots can have on children. OpenAI is interested in downplaying risky usage to avoid further scrutiny and enforcement.
Fourth, Illinois has recently enacted a law banning the use of AI for mental health or therapeutic decision-making without oversight by licensed clinicians. Other U.S. states and countries are considering similar laws. OpenAI wants to downplay these use cases to avoid bans and further legal scrutiny.
Fifth, risk-based legal frameworks, such as the EU AI Act, treat systems like ChatGPT as general-purpose AI, which generally falls outside of any specific high-risk category. If it becomes clear that risky usage patterns are extremely common (and that cases of harm are growing), this could prompt risk reassessments and increase the company's compliance burden.
-
Now, back to the paper.
It strategically minimizes risky use cases, especially AI therapy, companionship, and similar affective and intensive personal usage patterns. How?
First, it names the category "Relationships and Personal Reflection," which the authors strategically highlight as accounting for only 1.9% of ChatGPT messages:
"(...) we find the share of messages related to companionship or social-emotional issues is fairly small: only 1.9% of ChatGPT messages are on the topic of Relationships and Personal Reflection (...) In contrast, Zao-Sanders (2025) estimates that Therapy/Companionship is the most prevalent use case for generative AI."
*The Zao-Sanders study referred to by the OpenAI paper can be found here (I have mentioned this study a few times in this newsletter), and it states that therapy and companionship, "organizing my life," and "finding purpose" are the three most prevalent uses of ChatGPT in 2025.
Strategically framing messages related to companionship or social-emotional issues as "Relationships and Personal Reflection" likely excludes the majority of risky usage patterns from this count. Why?
As we learned from recent tragedies involving AI chatbots, such as the Florida teenager's suicide last year, Adam Raine's suicide this year, and other documented cases of intense emotional attachment, the victims used AI chatbots every day, often for hours at a time, and frequently as a replacement for other human interactions (friends, partners, parents, doctors, therapists, etc.).
The conversations were not necessarily about relationships or explicitly focused on personal reflection.
In the Raine family's lawsuit, for example, we learn that before Adam Raine committed suicide, he asked ChatGPT whether the noose he had tied could hang a human (screenshot from the family's lawsuit below):
This extremely risky interaction, which should never have happened, would likely have been classified as "specific info" or "how to advice" under the OpenAI paper's terminology. However, it is directly related to companionship and social-emotional issues and represents a risky usage pattern in that context.
OpenAI's paper points out that approximately 30% of consumer usage is work-related and approximately 70% is non-work.
Within both of these categories, there are probably millions of potentially risky interactions, especially given usage intensity (daily use, several hours a day), the type of user (children and teenagers, emotionally unstable and vulnerable people), and the intention/context (interacting with ChatGPT as a replacement for other human and social interactions).
As I wrote above, this OpenAI paper was written with the goal of showing investors, lawmakers, policymakers, regulators, and the public how 'democratizing' and 'economically valuable' ChatGPT is.
To properly govern AI chatbots, we need more studies (preferably not written by AI chatbot developers) that explore the various types of AI chatbot usage patterns, including risky ones and those most likely to cause harm.
We need a much more nuanced understanding of the model specs and features that might foster dependence, emotional attachment, manipulation, or cognitive distortions, as well as the avoidance of healthy human and social interactions.
Certain usage patterns can be extremely risky, especially when children, teenagers, and other vulnerable groups are involved. AI companies must acknowledge that (and not pretend that AI chatbots are harmless productivity boosters).
Only then will we be able to develop stronger AI safety and AI governance mechanisms and policies that will more effectively protect people against AI harm.
Until then, we will remain in the AI chatbot Wild West.