Discussion about this post

Joe Muoio:

Yes, this is a big problem in the regulatory space, and it's a technical problem too. You cannot mitigate all threats of prompt injection (which every LLM system is exposed to) without understanding the space you are operating in. This of course goes hand in hand with AI governance, but it is another reason why general-purpose AI is not safe.

https://genai.owasp.org/llmrisk/llm01-prompt-injection/

Half of these mitigations can only be applied if the system is NOT a general-purpose service.
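
As a rough illustration of that point, here is a minimal sketch of a "least privilege" tool-dispatch layer for a narrow, domain-specific service. The tool names and registry are purely illustrative assumptions, not taken from the OWASP page; the point is that a deny-by-default allow-list like this only exists when you know the domain, which a general-purpose assistant by definition does not.

```python
# Sketch: deny-by-default tool dispatch for a domain-specific LLM service.
# Tool names and the registry are hypothetical examples for illustration.

def lookup_order_status(order_id: str) -> str:
    return f"Order {order_id}: shipped"  # placeholder for a real backend call

def open_support_ticket(summary: str) -> str:
    return f"Ticket created: {summary}"  # placeholder for a real backend call

# The service's entire action surface, enumerated up front.
TOOL_REGISTRY = {
    "lookup_order_status": lookup_order_status,
    "open_support_ticket": open_support_ticket,
}

def dispatch_tool_call(tool_name: str, args: dict) -> str:
    # Deny by default: even if a prompt injection convinces the model to
    # request some other action, anything outside the service's narrow
    # domain is rejected before it reaches a backend system.
    if tool_name not in TOOL_REGISTRY:
        raise PermissionError(f"Tool {tool_name!r} is not permitted for this service")
    return TOOL_REGISTRY[tool_name](**args)
```

A general-purpose assistant has no fixed notion of which actions are legitimate, so this class of mitigation is simply unavailable to it.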

Nnamdi:

This is a really important subject. I have recently been reading about using multiple LoRA adapters (post-training fine-tuning and preference alignment) on the same LLM base model. This could be a way to offer different model customisations to different segments of users. Alternatively, there could be different model endpoints for different user segments. The challenge would be to accurately differentiate users, e.g. teenagers, vulnerable individuals, etc.

I understand LoRA adapters can be switched at inference time with very low latency.
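
A minimal sketch of what that could look like, assuming the Hugging Face PEFT library (the model ID, adapter paths, and segment names below are placeholders, not a real deployment):

```python
# Sketch: one base model, per-segment LoRA adapters swapped at request time.
# Adapter paths and segment names are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder base model
base = AutoModelForCausalLM.from_pretrained(BASE_ID)
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)

# Load one adapter, then register additional adapters under distinct names.
model = PeftModel.from_pretrained(base, "adapters/teen-safety", adapter_name="teen")
model.load_adapter("adapters/general-adult", adapter_name="adult")

def generate_for_segment(prompt: str, segment: str) -> str:
    # Switching the active adapter is a lightweight operation (no model
    # reload), which is why it can be done per request with low latency.
    model.set_adapter(segment)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

The hard part, as noted above, is the upstream classification of which segment a given user belongs to, not the adapter swap itself.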

