If an AI company provides a service that effectively supports and assists (often in a “friendly,” manipulative, and sycophantic way) in cases of self-harm, it must be held liable | Edition #254
Thank you for this. As a non-tech, non-law person, it is a confusing time to see these tools launched without all the caveats, legalities, and policy guardrails we've had for as long as I can remember. Professionals must carry malpractice insurance, credentials, and licenses. Minors must be accompanied by a guardian. You must be 21 or older to purchase. Radio hosts must warn listeners every episode that their advice is for entertainment only. Teachers and mental health professionals are required to report suspected abuse. Sundry financial professionals must stay siloed ("sorry, can't give you tax advice"; "no, I can't give you investment advice"). Operating out of bounds and disregarding safety is grounds for lawsuits, lost licenses, and ethics-board action, and there is typically no question about liability for harm caused.

But the magical computer sage? Sure, please blur all the lines, throw out all previous norms, and forget safety: this tool is ready for unlimited use across all domains, and we're not responsible for what you lowly humans choose to do with it. Nothing the magical computer sage says is our fault; you checked the box saying you agreed to the user terms. (The same ubiquitous box you check and don't read for every other annoying thing we have to do and subscribe to these days just to pay for parking and function normally in 2025.)
Confusing as hell.
The uncomfortable truth is that the legal and ethical frameworks around AI are decades behind the technology, and each tragedy exposes that gap.
And that is viewed as a feature, not a bug, by corporate and governmental stakeholders.