Discussion about this post

Jiri Fiala:

You make some good points about how OpenAI frames usage for investor and regulatory optics, but I think there's a prevalent tendency to overstate the dangers. Tragic cases exist, yes, but they're still outliers versus the millions using ChatGPT like any other productivity tool. OpenAI clearly massages categories to minimise liability, but that doesn't mean there's a silent epidemic. What's really needed is independent research that measures both sides, actual harm rates and actual benefits, without either corporate spin or fear-driven framing.

George Shay:

AI is a tool like any other. It can be used for good or evil, like anything else. The problems lie in the mind, heart, and soul of the user. Unless and until we can perfect those, problems will persist.

That having been said, AI companies should do everything they can within reason to flag and counteract behavior that risks harm to users and others.

I'm confident they will, given the massive regulatory and liability risk involved.

2 more comments...
