OpenAI's recent paper "How People Use ChatGPT" strategically obfuscates risky usage patterns. More scrutiny and further research on AI chatbot-related harm are urgently needed | Edition #235
You make some good points about how OpenAI frames usage for investor and regulatory optics, but I think the tendency to overstate the dangers is just as prevalent. Tragic cases exist, yes, but they’re still outliers versus the millions using ChatGPT like any other productivity tool. OpenAI clearly massages categories to minimise liability, but that doesn’t mean there’s a silent epidemic. What’s really needed is independent research that looks at both sides, actual harm rates and actual benefits, without either corporate spin or fear-driven framing.
AI is a tool like any other. It can be used for good or evil like anything else. The problem lies in the mind, heart, and soul of the user. Unless and until we can perfect those, problems will persist.
That having been said, AI companies should do everything they can within reason to flag and counteract behavior that risks harm to users and others.
I'm confident they will, given the massive regulatory and liability risk involved.
I'm sorry, George, but recent history says otherwise when it comes to massive companies taking responsibility for their products or, as you say, doing "everything they can within reason to flag and counteract behavior that risks harm to users and others."
The truth is, there is no real threat of massive regulatory and/or liability risk for them, and no incentive for them to change course on the subject. Companies are profit-driven; they weigh risk against reward, especially with a focus on short-term gain to drive up market value. They will happily pay out a million-dollar settlement, make the recipient sign an NDA, and then continue as before, rather than invest a hundred million or more in adding safeguards to their product. Look at pharma, oil, or any other major industry and you will see what I mean.
OpenAI is not going to change anything unless and until they are forced to. The current administration has no desire to apply that force; in fact, they want to do the opposite.
Thanks for this analysis. Two comments:
1/ The HBR paper is based on an analysis of Reddit comments, so it definitely carries its own strong bias towards very specific use cases. So even if the OpenAI data is presented in the most favourable way possible, as you explain so well, I would still trust that data more.
2/ Regarding the specifics of companionship conversations: even at that share, considering the sheer volume of messages, that’s still millions of daily messages on the topic…