Discussion about this post

Greg Walters:

it isn't safe

it is dangerous

things will change

AI/LLM regulation must start and reside between the ears of the user.

this concept is foreign, if not antithetical, to the norms/narratives of education, therapy, business, and everything else.

this cannot be approached like past tech: from scratches on the cave wall to radio, TV, and the interwebs, societies have regulated/manipulated/caged/destroyed innovation, all in the name of 'safety' and the general good.

this will not work with AGI and whatever comes next.

the best way is to teach humans how to be more human than AI

via

we all have our own AI.

maybe we should get back to teaching how to think

instead of

what to think

Stefania Moore:

What about books or the internet? A person could check out several books from a library or browse the internet to get advice. They could go on Reddit and accept advice from anyone who happens to be interested in providing it, whether that person is qualified or not.

You want to protect people from their own poor judgment? Safety isn't free. It requires the removal of autonomy in order to implement "protections" for people who don't even want them. If someone is asking a chatbot for financial advice, they are doing it because that is what they want to do. Who are you to come in and say that they shouldn't be allowed to do so, for their own "protection"?

This would be a violation of human rights and autonomy. Remember, the road to hell is paved with good intentions.

