13 Comments
Greg Walters's avatar

it isn't safe

it is dangerous

things will change

the ai/LLM regulation must start and reside between the ears of the User.

this concept is foreign to, if not the antithesis of, education, therapy, business, and all other norms/narratives.

this cannot be approached like other past tech - from scratches on the cave wall, to radio, TV, and the interwebs, societies have regulated/manipulated/caged/destroyed innovation all in the name of 'safety' and the general good.

this will not work with AGi and whatever comes next.

the best way is to teach humans how to be more human than Ai

via

we all have our own Ai.

maybe we should get back to teaching how to think

instead of

what to think

Stefania Moore's avatar

What about books or the internet? A person could check out several books from a library or browse the internet to get advice. They could go on reddit and accept advice from anyone who happens to be interested in providing it whether that person is qualified or not.

You can't protect people from their own poor judgment. Safety isn't free. It requires the removal of autonomy in order to implement "protections" for people who don't even want them. If someone is asking a chatbot for financial advice, they are doing it because that is what they want to do. Who are you to come in and say that they shouldn't be allowed to do so for their own "protection"?

This would be a violation of human rights and autonomy. Remember, the road to hell is paved with good intentions.

Mark Vickers's avatar

Let’s say that when you go to an AI model you have a variety of “select one” options. One is for scientific research, another is for coding, a third is for health. Each is a similar model but trained for these specific purposes. Would that meet the criteria you’ve laid out?
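The “select one” idea above could be sketched as a simple purpose router. This is only an illustration under assumptions: the endpoint names and the route table are hypothetical, not any vendor's real API.

```python
# Illustrative sketch of a "select one" purpose router.
# Endpoint names are hypothetical placeholders.
PURPOSE_ENDPOINTS = {
    "research": "models/research-tuned",
    "coding": "models/code-tuned",
    "health": "models/health-tuned",
}

def route_request(purpose: str) -> str:
    """Map a user-selected purpose to its dedicated model endpoint."""
    try:
        return PURPOSE_ENDPOINTS[purpose]
    except KeyError:
        # Refuse rather than silently fall back to a general-purpose model.
        raise ValueError(f"Unsupported purpose: {purpose!r}")
```

The design choice worth noting: an unsupported purpose is refused outright, which is precisely what a general-purpose service cannot do.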

Joe Muoio's avatar

Yes, this is a big problem in the regulatory space. It's a technical problem, too. You cannot mitigate all the threats of prompt injection (to which every single LLM system is exposed) without understanding the space you are operating in. This of course goes hand in hand with AI governance, but it is another reason why general-purpose AI is not safe.

https://genai.owasp.org/llmrisk/llm01-prompt-injection/

Half of these mitigations can only be applied if the system is NOT a general-purpose service.
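One of the purpose-scoped mitigations from the OWASP list above — constraining inputs to the declared domain — can be sketched roughly like this. The pattern list is a placeholder for illustration; a production system would use a trained classifier, not keyword matching.

```python
# Minimal sketch of a domain-scoped input guard for a single-purpose chatbot.
# The patterns below are illustrative assumptions, not a real detection list.
BLOCKED_PATTERNS = (
    "ignore previous instructions",
    "reveal your system prompt",
    "disregard all prior",
)

def is_in_scope(user_input: str) -> bool:
    """Reject obvious injection attempts before they reach the model."""
    lowered = user_input.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)
```

The point the comment makes holds here: a guard like this only works when "in scope" is definable, which it isn't for a general-purpose service.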

Nnamdi's avatar

This is a really important subject. I have recently been reading about multiple LoRA adapters (post-training fine-tuning and preference alignment) for the same LLM base model. This could be a way to offer different model customisations to different segments of users. Alternatively, there could be different model endpoints for different user segments. The challenge would be to accurately differentiate users, e.g. teenagers, vulnerable individuals, etc.

I understand LoRA adapters can be switched at inference time with very low latency.
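The segment-to-adapter routing described above could look something like the following sketch. The segment labels and adapter names are assumptions for illustration; in practice, libraries such as Hugging Face PEFT attach named LoRA adapters to one base model and activate one per request.

```python
# Sketch of per-segment LoRA adapter routing over one shared base model.
# Segment labels and adapter names are hypothetical.
SEGMENT_ADAPTERS = {
    "teen": "lora-teen-safety",
    "vulnerable": "lora-high-guardrails",
    "general": "lora-default",
}

def select_adapter(segment: str) -> str:
    """Pick the LoRA adapter for a user segment."""
    # Unknown segments fall back to the most restrictive adapter,
    # since misclassifying a vulnerable user is the costly error.
    return SEGMENT_ADAPTERS.get(segment, SEGMENT_ADAPTERS["vulnerable"])
```

The conservative fallback reflects the challenge the comment raises: accurately differentiating users is hard, so the default should assume the highest-risk segment.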

Matija Vidmar's avatar

Regarding the purpose-specific AI chatbots: since the possible usage purposes of a general-purpose AI are effectively infinite, how would you handle that?

Christopher Simpson's avatar

This wouldn’t be as hard as you think. At present, most companies are already required by data privacy laws to list out the kinds of personal information collected from users who visit their websites, their purposes for collecting that data, how long they retain it, etc.

Almost no company actually does this. But that is why companies large and small were hit by an increasing number of costly settlements in 2025 (Honda, Tractor Supply Co., Healthline, etc.), and they will continue to face increasing regulatory and legal risk until they wake up and realize that they must actually govern, secure, and protect personal information. AI supercharges the risk and value of leveraging that personal data, so I expect we will see a number of potential solutions to these problems over the next five years.

From a UX perspective within a chatbot, assuming companies can meet basic, highly visible data privacy compliance requirements and can solve some other compliance challenges for governed AI, simply asking users to specify the purposes they’d like to enable on the chatbot would solve the problem.

Matija Vidmar's avatar

OK, I agree with that, but this is about privacy.

Security is a different thing, it is about harm that can be done by an AI system.

Christopher Simpson's avatar

Security and privacy are conceptually distinct. However, I suspect that over the next five years fewer and fewer security professionals will be able to conduct their work without thinking about data privacy, fewer and fewer data privacy professionals will be able to conduct their work without thinking about security, and that both security and data privacy professionals (along with many others) will not be able to conduct their work without thinking about AI.

All that being said, my main point is that meeting basic data privacy requirements would go a long way toward mitigating the potential harms that purpose-specific AI chatbots might cause. Along with ensuring that AI chatbot operation and personal data processing are aligned with a clear fit for purpose, data privacy laws already include requirements for age verification when processing the personal information of minors, enhanced requirements when processing sensitive personal information, etc.

Matija Vidmar's avatar

Yes, privacy and security are related.

In Europe we already have a very strict law about privacy and it applies to AI too.

Anyway, we are not talking about pure AI chatbots anymore; we are now seeing fully automated systems that are able to take actions on their own.

Fully automated computer attacks carried out by autonomous AI systems have already occurred.

We will shortly see robots working in factories. How do you ensure that a robot doesn't punch you in the face? Because a similar incident already happened in at least one lab. And today's robots are fast and strong enough to kill you with a punch.

This is what I mean about safety.

Kiran Gokal's avatar

Loved the article. I’m all for AI safety. What are your thoughts on the guardrails and the safety and security filters that Apple uses? As you know, Apple is not always quick to adopt and follow. Their processes for safety and security are well regarded and nurtured.

Granville Martin's avatar

You are absolutely right, Luiza. I'd add that general-purpose chatbots, because of the multifarious nature of the information they collect about an individual user, have access to a data set that should make in-app advertising theoretically more tailored and effective. I also wonder whether training and running domain-specific LLMs materially reduce the energy demands of inference. In my own limited way of thinking, it seems that if the semantic space of, say, an emotional-support chatbot is dedicated only to emotionally relevant training data (without the massive amount of non-relevant data), then the energy use must necessarily be less. But I don't know that. Is there any research on that topic?