Authorities should require AI companies to 1) limit the purposes of their AI systems; 2) respect sector-specific rules; 3) be held accountable for the harm their AI systems cause | Edition #267
Yes, this is a big problem in regulatory space. It's also a technical problem, too. You cannot mitigate all threats of prompt injection (which every single LLM system is exposed to) without understanding the space you are operating in. This of course goes hand in hand with AI governance, but it is another reason why general purpose AI is not safe.
This is a really important subject. I have recently been reading about multiple LoRA adapters (post training fine tuning and preference alignment ) for the same LLM base model. This could be a way to offer different model customisations to different segments of users. Alternatively there could be different model endpoints for different user segments. The challenge would be to accurately differentiate users eg teenagers, vulnerable individuals etc.
I understand LoRA adapters can be switched at inference time with very low latency.
This wouldn’t be as hard as you think. At present, most companies are already required by data privacy laws to to list out the kind of personal information collected from users that visit their websites, their purposes for collecting that data, how long they keep the data they collect that data, etc.
Almost no company actually does this. But that is why companies large and small were hit by an increasing number of costly settlements in 2025 - Honda, Tractor Supply Co, Healthline, etc. and will continue to face increasing regulatory and legal risk until they wake up and realize that they must actually govern, secure, and protect personal information. AI supercharges the risk and value of leveraging that personal data, so I expect we will see a number of potential solutions to these problems over the next five years.
From a UX perspective within a chatbot, assuming companies can meet basic, highly visible data privacy compliance requirements and can solve some other compliance challenges for governed AI, simply asking users to specify the purposes they’d like to enable on the chatbot would solve the problem.
Security and privacy are conceptually distinct. However, I suspect that over the next five years fewer and fewer security professionals will be able to conduct their work without thinking about data privacy, fewer and fewer data privacy professionals will be able to conduct their work without thinking about security, and that both security and data privacy professionals (along with many others) will not be able to conduct their work without thinking about AI.
All that being said, my main point is that meeting basic data privacy requirements would go a long way toward mitigating potential harms generated by purpose AI chatbots might cause. Along with ensuring that AI chatbot operation and personal data processing are aligned with a clear fit for purpose, data privacy laws already include requirements for age verification when processing the personal information of minors, enhanced retirements when processing sensitive personal information, etc.
In Europe we already have a very strict law about privacy and it applies to AI too.
Anyway we are not talking about pure AI chatbots anymore, we are now seeing fully automated systems that are able to take actions on their own.
Fully automated computer hackings done by autonomous AI systems have already been done.
We will shortly see robots working in factories. How do you ensure that a robot doesn't punch you in the face? Because a similar incident already happened in at least one lab. And today's robots are fast and strong enough to kill you with a punch.
Loved the article. I’m all for AI safety. Wha are your thoughts on the guardrails and safety and security filters that Apple uses? As you know Apple is not always quick to adopt and follow. Their processes for safety and security are well regarded and nurtured.
Yes, this is a big problem in regulatory space. It's also a technical problem, too. You cannot mitigate all threats of prompt injection (which every single LLM system is exposed to) without understanding the space you are operating in. This of course goes hand in hand with AI governance, but it is another reason why general purpose AI is not safe.
https://genai.owasp.org/llmrisk/llm01-prompt-injection/
Half of these mitigations can only be applied if it is NOT a general service.
This is a really important subject. I have recently been reading about multiple LoRA adapters (post training fine tuning and preference alignment ) for the same LLM base model. This could be a way to offer different model customisations to different segments of users. Alternatively there could be different model endpoints for different user segments. The challenge would be to accurately differentiate users eg teenagers, vulnerable individuals etc.
I understand LoRA adapters can be switched at inference time with very low latency.
Regarding the purpose-specific AI chatbots, since the possible usage purposes of a General purpose AI are possibly infinite how would you handle that?
This wouldn’t be as hard as you think. At present, most companies are already required by data privacy laws to to list out the kind of personal information collected from users that visit their websites, their purposes for collecting that data, how long they keep the data they collect that data, etc.
Almost no company actually does this. But that is why companies large and small were hit by an increasing number of costly settlements in 2025 - Honda, Tractor Supply Co, Healthline, etc. and will continue to face increasing regulatory and legal risk until they wake up and realize that they must actually govern, secure, and protect personal information. AI supercharges the risk and value of leveraging that personal data, so I expect we will see a number of potential solutions to these problems over the next five years.
From a UX perspective within a chatbot, assuming companies can meet basic, highly visible data privacy compliance requirements and can solve some other compliance challenges for governed AI, simply asking users to specify the purposes they’d like to enable on the chatbot would solve the problem.
OK, I agree with that, but this is about privacy.
Security is a different thing, it is about harm that can be done by an AI system.
Security and privacy are conceptually distinct. However, I suspect that over the next five years fewer and fewer security professionals will be able to conduct their work without thinking about data privacy, fewer and fewer data privacy professionals will be able to conduct their work without thinking about security, and that both security and data privacy professionals (along with many others) will not be able to conduct their work without thinking about AI.
All that being said, my main point is that meeting basic data privacy requirements would go a long way toward mitigating potential harms generated by purpose AI chatbots might cause. Along with ensuring that AI chatbot operation and personal data processing are aligned with a clear fit for purpose, data privacy laws already include requirements for age verification when processing the personal information of minors, enhanced retirements when processing sensitive personal information, etc.
Yes, privacy and security are related.
In Europe we already have a very strict law about privacy and it applies to AI too.
Anyway we are not talking about pure AI chatbots anymore, we are now seeing fully automated systems that are able to take actions on their own.
Fully automated computer hackings done by autonomous AI systems have already been done.
We will shortly see robots working in factories. How do you ensure that a robot doesn't punch you in the face? Because a similar incident already happened in at least one lab. And today's robots are fast and strong enough to kill you with a punch.
This is what I mean about safety.
Loved the article. I’m all for AI safety. Wha are your thoughts on the guardrails and safety and security filters that Apple uses? As you know Apple is not always quick to adopt and follow. Their processes for safety and security are well regarded and nurtured.