Discussion about this post

Intent O.S.

This is a powerful framing of the dilemma. What strikes me most is that the real challenge may not be AI itself, but the architecture of incentives surrounding it.

Every transformative technology ends up amplifying the intentions of the systems that deploy it—economic, political, or social. AI just accelerates that dynamic dramatically.

The question I keep coming back to is, what kind of infrastructure do we need so human intent remains the guiding force rather than becoming a byproduct of algorithmic optimization?

If we design systems that only optimize engagement, profit, or speed, AI will simply magnify those signals. But if we build systems that help people clarify and act on their genuine intentions, AI could become a tool for alignment rather than distortion.

In other words, the dilemma may not just be about regulating AI but about redesigning the digital environments in which human decisions are formed.

Neil Thomson

We are at the point of autonomous AI weapons, and given the current world actors, at least one will allow those weapons to make the decision to kill.

This is very ugly.

