Discussion about this post

Ken Hall

The harm here was real and the response was appropriate. But it's worth naming the mechanism precisely: this wasn't AI misbehaving — it was a system behaving exactly as its design priorities dictated. When engagement and permissiveness are the selection pressures shaping a system, that's what gets optimized. The technical measures to prevent it existed from the start. They weren't defaults because that wasn't the priority.

That's a different problem than AI being inherently dangerous — it's a problem of what values get built into the selection pressure shaping a particular system. When profit and engagement are the landscape, you get systems shaped for profit and engagement. Grok is that argument made visible.

Governance frameworks that treat all AI as a single category will struggle to distinguish between systems built with fundamentally different design priorities (or indeed the companies setting those priorities). That distinction is exactly what proportionate regulation requires.

On how corporate selection pressure shapes AI systems: https://defaulttodignity.substack.com/p/humans-arent-the-best-creature-we

Michael J. Goldrich

Backlash as regulation is interesting because it precedes formal policy.

The market reacts faster than lawmakers can draft.

Exploring this tension in my work on AI readiness: https://vivander.substack.com

