9 Comments
Jiri Fiala

You make some good points about how OpenAI frames usage for investor and regulatory optics, but I think the tendency to overstate the dangers is prevalent. Tragic cases exist, yes, but they’re still outliers versus the millions using ChatGPT like any other productivity tool. OpenAI clearly massages categories to minimise liability, but that doesn’t mean there’s a silent epidemic. What’s really needed is independent research that looks at both sides, actual harm rates and actual benefits, without either corporate spin or fear-driven framing.

Simon Au-Yong

Thanks Jiri, but waiting for a silent epidemic to occur is akin to putting out the flames when the home is 70% ablaze.

One tragedy is one too many.

😔

Jiri Fiala

I don’t dismiss the human cost when something goes wrong. That said, we also need to acknowledge that all tools have settings, and those settings can be adjusted by users, yes, even AI. That is the part that usually gets skipped when we rush to say “the tech is dangerous”. The hard part is separating cases where the tech failed from cases where the settings weren’t used correctly, because those are two very different kinds of risk. I’m not saying it isn’t tragic. But damaged people tend to make damaged decisions. Could the AI have done something differently? Probably not, since it isn’t really meant to be a therapist. So, while tragic, is it really the fault of the technology?

Simon Au-Yong

Thanks Jiri.

In this case the tech and the service provider are tightly coupled.

Corporate responsibility cannot be shirked.

Jiri Fiala

I’d put the culpability at roughly 60/40 on the user. Here’s why. Tools — even AI — come with settings. If those settings exist but aren’t used, the risk is more on the individual than the provider. That doesn’t erase corporate duty of care, but it does mean we need to separate cases of actual tech failure from cases of misuse.

Courts already treat this distinction seriously. In Herrick v. Grindr (2019), the app wasn’t held liable for harassment enabled by its misuse. The precedent there is clear: if the tool wasn’t designed or marketed for a specific purpose (e.g. therapy), the company can’t be fully blamed for people forcing it into that role. Same thing in Doe v. MySpace (2008) — platforms weren’t responsible for every harmful interaction unless they actively facilitated it.

On the European side, the Digital Services Act and the upcoming AI Act tighten obligations, but they stop short of automatic liability. Providers must implement “appropriate safeguards” against foreseeable harms. If those are in place — suicide-prevention triggers, disclaimers, clear limits on scope — then it leans back toward user responsibility. The Replika chatbot investigations in France and Belgium are testing exactly this: regulators are asking whether the company did enough to prevent foreseeable harm, not whether they should shoulder all the blame.

So yes, tragic cases exist. But from a legal standpoint, unless the AI was marketed as therapy or failed to include basic safeguards, courts generally treat it as shared responsibility, with the heavier weight, right now, on the user’s side.

(FYI, I used ChatGPT and Claude for the research)

George Shay

AI is a tool like any other. It can be used for good or evil, like anything else. The problems lie in the mind, heart, and soul of the user. Unless and until we can perfect those, problems will persist.

That having been said, AI companies should do everything they can within reason to flag and counteract behavior that risks harm to users and others.

I'm confident they will, given the massive regulatory and liability risk involved.

John Cook

I'm sorry George, but recent history says otherwise when it comes to massive companies taking responsibility for their products or, as you say, doing "everything they can within reason to flag and counteract behavior that risks harm to users and others."

The truth is, there is no real threat of massive regulatory and/or liability risk for them, and no incentive for them to change course on the subject. Companies are profit-driven; they weigh risk against reward, especially with a focus on short-term gain to drive up market value. They will happily pay out a million-dollar settlement, make the recipient sign an NDA, and then continue as before, rather than invest a hundred million or more into adding safeguards to their product. Look at pharma, oil, or any other major industry and you will see what I mean.

OpenAI is not going to change anything unless, and until, they are forced to. The current administration has no desire to apply that force; in fact, they want to do the opposite.

Yellow Tail Tech

Really eye-opening. Do you think most people even realize how subtle these high-risk patterns can be?

Jean-Paul Paoli

Thanks for this analysis. Two comments:

1/ The HBR paper is based on an analysis of Reddit comments, so it definitely carries another strong bias towards very specific use cases. So even if OpenAI's data is presented in the most favourable way possible, as you explain perfectly, I would still trust that data more.

2/ Regarding the specifics of companionship conversations: even at that share, considering the sheer volume of messages, it still amounts to millions of daily messages on that topic…
