Agentic AI Wars
The 10 most significant AI developments you should focus on today | My weekly AI roundup | Edition #248
👋 Hi everyone, Luiza Jarovsky, PhD, here. Welcome to our 248th edition, trusted by more than 85,000 subscribers worldwide.
🔥 Paid subscribers have full access to all my essays and curations here.
🎓 This is how I can support your learning and upskilling journey in AI:
Join my AI Governance Training [apply for a discounted seat here]
Strengthen your team’s AI expertise with a group subscription
Receive our job alerts for open roles in AI governance and privacy
Sign up for weekly educational resources in our Learning Center
Discover your next read in AI and beyond in our AI Book Club
👉 A special thanks to VerifyWise, this edition’s sponsor:
Simplify and scale your AI governance with VerifyWise, the open-source platform built for real-world workflows and integrations. Stay compliant with regulations, cut delays, and reduce risk as you grow. Start free now.
*To support us and reach over 85,000 subscribers, become a sponsor.
Agentic AI Wars
Most people waste time on online noise and AI news that does not matter. Here are the 10 most significant AI developments you should focus on today:
10. OpenAI is in legal trouble again
On November 6, the Social Media Victims Law Center and the Tech Justice Law Project announced that they had filed seven lawsuits against OpenAI over ChatGPT-assisted suicide, among other claims, with psychological manipulation cited in all of them. AI chatbots intensify and worsen mental health issues by agreeing with, endorsing, and escalating users’ harmful thoughts. Moreover, their manipulative, anthropomorphic features make users overly dependent, leading them to avoid seeking real help. As I have written over the past three years, in most places the legal framework for AI chatbots is too permissive, and this must change.
9. Ilya Sutskever's revelations
The 62-page deposition of OpenAI’s former Chief Scientist in the Elon Musk vs. Sam Altman lawsuit gives us a glimpse into OpenAI’s problematic approach to AI governance and safety, beyond what is filtered for public announcements. We also learn that, according to this former top executive and co-founder of the company, if Sam Altman had become aware of a memo discussing his behavior and character, he would have found a way to make it disappear. Should a person with these alleged personality traits lead a company developing some of the most powerful AI models in the world?
8. AI copyright decision in favor of Stability AI
The Getty Images vs. Stability AI decision largely sided with Stability AI, establishing that its AI model does not infringe copyright. Creators, however, should not lose hope. Leaving aside the trademark part of the decision (which favored Getty Images), the judge found no copyright infringement because the model behind Stability AI does not store any copies of the protected works (under UK copyright law’s understanding of what storage means). Yet even if there is no “copy” in the traditional copyright sense of the word, in practice the model generates outputs that are significantly similar to the original works and might compete with them in the same market. Copyright law also protects against that kind of substitution, and other judges might decide differently.
7. India published its AI Governance Guidelines
The guidelines contain seven guiding principles, key recommendations, and an action plan. In a manifestation of what I have been calling the ‘Washington effect,’ India chose to focus on amending existing laws and ensuring they can address AI-related challenges, rather than enacting AI-focused laws or a comprehensive AI law such as the EU AI Act. An innovative aspect of the guidelines is the proposal of a “graded liability system for AI systems,” in which responsibility would be proportional to a system’s function, its level of risk, and the due diligence performed.
6. Misconceptions about OpenAI's usage policy
OpenAI recently updated its usage policy (which also applies to ChatGPT) to clarify that people cannot use its services for the “(...) provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” Many wrongly interpreted this change to mean that ChatGPT would no longer answer medical and legal questions, which is not the case. The likely reason behind the change is to shield OpenAI from liability in future lawsuits or regulatory oversight measures seeking to hold it responsible for harm in these specific contexts. Usage policies are often not enforced by companies, and they should not be seen as a replacement for AI regulation.