💼 Lawyers Are Still Using AI Wrong
AI Policy, Compliance & Regulation Must-Reads | Edition #172
👋 Hi, Luiza Jarovsky here. Welcome to our 172nd edition, read by 52,900+ subscribers in 165+ countries. Not a subscriber yet? Join us.
🌎 We are a leading AI governance publication helping to shape the future of AI policy, compliance & regulation. It's great to have you here!
👉 A special thanks to Modulos for sponsoring this week's free edition of the newsletter. Check them out:
Organizations can accelerate their AI compliance journey by 10x with the Modulos AI Governance Platform. It unites compliance officers, data scientists, and risk managers in a single workspace, while its AI Agents automate routine tasks across key frameworks like the EU AI Act and ISO 42001. Try the Modulos AI Governance Platform for free today.
*Promote your AI governance or privacy product to 52,900+ readers: Sponsor this newsletter (next available date: April 16)
💼 Lawyers Are Still Using AI Wrong
It's 2025, yet many lawyers are still making the same mistakes when using AI tools.
Last week, lawyers representing a family in a lawsuit against Walmart and Jetson Electric Bikes admitted to using AI after the judge pointed out that nearly all the cases cited did not exist. The judge wrote:
“Plaintiffs cited nine total cases: (...) The problem with these cases is that none exist, except (...). The cases are not identifiable by their Westlaw cite, and the Court cannot locate the District of Wyoming cases by their case name in its local Electronic Court Filing System. Defendants aver through counsel that 'at least some of these mis-cited cases can be found on ChatGPT.' [ECF No. 150] (providing a picture of ChatGPT locating “Meyer v. City of Cheyenne” through the fake Westlaw identifier). Additionally, some of Plaintiffs’ language used for explaining the “Legal Standard” is peculiar. (...)”
The lawyers responded:
“The cases cited in this Court’s order to show cause were not legitimate. Our internal AI platform 'hallucinated' the cases in question while assisting our attorney in drafting the motion in limine. This matter comes with great embarrassment and has prompted discussion and action regarding the training, implementation, and future use of artificial intelligence within our firm. This serves as a cautionary tale for our firm and all firms, as we enter this new age of AI.”
This was not the first case where lawyers cited AI-generated fake cases in a lawsuit.
There are also likely countless other instances in which lawyers have cited fake cases that simply went unnoticed, so they were never caught.
The fact is that lawyers—and many other professionals—are using AI in their work without understanding how these systems work and the professional risks involved.
Here's what every lawyer should know before using AI at work:
1. Lawyers will always be fully responsible for the legal work they perform. "Our AI system hallucinated" will never be accepted as a legal excuse. Lawyers should keep this in mind when choosing to use AI to perform any legal work for the sake of productivity, whether for reviewing, researching, drafting, or other tasks. The time a lawyer saves by, for example, using AI to draft a legal brief will likely be spent reviewing it to ensure it does not contain AI-generated errors.
2. When AI hallucinations happen—and statistically, they will—it can be very damaging to a lawyer's or law firm's reputation to admit that they failed to review the legal work they were paid to do and relied on AI instead. Law firms with an open and lenient AI policy are taking significant risks.
3. When a lawyer uses generative AI to perform legal work, they likely put their client's data at risk, including from cybersecurity, data protection, and intellectual property perspectives. Legal work often involves highly sensitive data, making this risk critical.
4. The legal profession is highly regulated and usually follows professional ethics guidelines. In some jurisdictions, using AI for legal work may be illegal or unethical, while in other jurisdictions, specific guidelines may allow only certain types of legal work to be performed with AI or require strict transparency/disclosure. Lawyers (and other professionals) should check which local rules apply. For example, last year, the American Bar Association issued a formal opinion on generative AI tools.
5. Currently, all generative AI applications hallucinate at some rate, meaning their developers cannot guarantee that outputs will be 100% accurate or based on factual sources. Lawyers, on the other hand, are paid, among other things, to provide accurate legal advice grounded in evidence and factual knowledge. Lawyers who choose to use AI to perform legal work should prioritize applications with specific guardrails or fine-tuning that account for the peculiarities of legal work.
👉 If you want to learn more about AI governance, don't miss my AI Governance Training and my paid subscriber deep dives on Sundays. [*They're a great way to invest in AI literacy and upskill in AI governance, and companies often cover the costs].
🇰🇷 DeepSeek Suspended in South Korea
DeepSeek admitted to neglecting data protection laws in South Korea—all new downloads in the country are now suspended. Will they also admit to violating the GDPR? Will “confession” become a new legal loophole in AI? Here’s what everyone in AI should know:
Below are some excerpts from the latest release by South Korea's data protection authority on the topic (automatically translated from Korean):
"As a result of our own analysis, we have identified some shortcomings in communication functions and personal information processing policies with third-party service providers that have been pointed out in domestic and international media outlets.
DeepSeek announced last week (February 10) that it had appointed a domestic agent, acknowledged that it had neglected to consider domestic data protection laws when launching its global service, and stated that it would actively cooperate with the Personal Information Protection Commission going forward (February 14).
The Personal Information Protection Commission judged that it would inevitably take a considerable amount of time to correct the DeepSeek service in accordance with the Protection Act, and recommended that DeepSeek temporarily suspend the service and then make improvements and supplements to prevent further concerns from spreading. DeepSeek accepted this and temporarily suspended the DeepSeek service in domestic app markets from 18:00 on Saturday, February 15."
👉 To learn more about DeepSeek's legal saga and how it's impacting the global AI governance landscape, don't miss my recent articles on the topic:
⚠️ EU AI Act: AI Literacy Requirement in Effect
Since February 2, all organizations within the scope of the EU AI Act—regardless of size, including those in the U.S.—must comply with its first requirements, which include ensuring the AI literacy of their staff (learn more here).
Yet, many organizations are still unprepared to meet this and other obligations established by the AI Act. By mastering these requirements, you can position yourself as a leader in your company's AI compliance efforts.
Our renowned AI Governance Training program provides an in-depth exploration of the EU AI Act, covering its foundations, key concepts, and main provisions, along with AI's main legal and ethical challenges, recent examples, and case studies.
The 18th cohort begins in early March, and only 5 seats remain! Secure your spot today and join over 1,100 professionals who have advanced their careers with us:
*If cost is a concern, we offer discounts for students, NGO members, and individuals in career transition. To apply, fill out this form.
🚧 International AI Safety Report
A group of 96 AI experts contributed to the 298-page International AI Safety Report, chaired by Yoshua Bengio. It's the first report of its kind and a must-read for everyone in AI. Some of the report's key findings:
"Further capability advancements in the coming months and years could be anything from slow to extremely rapid. Progress will depend on whether companies will be able to rapidly deploy even more data and computational power to train new models, and whether ‘scaling’ models in this way will overcome their current limitations. Recent research suggests that rapidly scaling up models may remain physically feasible for at least several years. But major capability advances may also require other factors: for example, new research breakthroughs, which are hard to predict, or the success of a novel scaling approach that companies have recently adopted."
"The pace and unpredictability of advancements in general-purpose AI pose an ‘evidence dilemma’ for policymakers. Given sometimes rapid and unexpected advancements, policymakers will often have to weigh potential benefits and risks of imminent AI advancements without having a large body of scientific evidence available. In doing so, they face a dilemma. On the one hand, pre-emptive risk mitigation measures based on limited evidence might turn out to be ineffective or unnecessary. On the other hand, waiting for stronger evidence of impending risk could leave society unprepared or even make mitigation impossible – for instance if sudden leaps in AI capabilities, and their associated risks, occur. Companies and governments are developing early warning systems and risk management frameworks that may reduce this dilemma. Some of these trigger specific mitigation measures when there is new evidence of risks, while others require developers to provide evidence of safety before releasing a new model."
"AI does not happen to us: choices made by people determine its future. The future of general-purpose AI technology is uncertain, with a wide range of trajectories appearing to be possible even in the near future, including both very positive and very negative outcomes. This uncertainty can evoke fatalism and make AI appear as something that happens to us. But it will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take. This report aims to facilitate constructive and evidence-based discussion about these decisions."
🚀 Daily AI Governance Updates
Thousands of people receive our daily emails with educational and professional resources on AI governance, along with updates on our free live sessions and training programs:
📋 Policy Paper: Intellectual Property and AI
The OECD has published a must-read policy paper on intellectual property (IP) issues in AI trained on scraped data. I see many people struggling with basic legal concepts on IP and AI—this is a great resource to catch up:
1. If you're a lawyer, I recommend reading Chapter 3, "The legal landscape for data scraping and growing litigation," carefully. It discusses, for example, some of the primary IP rights that may be impacted by data scraping:
copyright
database rights
trademarks
trade secrets
publicity and likeness rights
moral rights
2. If you're not a lawyer but are interested in the AI copyright lawsuits filed by content creators, news media, and music companies, you should read Annex A, which covers copyright exceptions in different jurisdictions (starting on page 37).
Many non-lawyers have been commenting on recent lawsuits and decisions, so some misinformation was to be expected. For example, many have described the recent Thomson Reuters decision as a full rejection of the "fair use" exception for AI training in the U.S., but that’s not true.
You can read more about potential exceptions in the U.S. on page 40 of the report. It states:
"The United States Copyright Act includes a fair use exception that allows for limited use of a work without the copyright holder’s consent “for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” (17 U.S. Code §107). To determine whether the fair use exception applies in a copyright infringement lawsuit, the statute enumerates four factors for courts to consider:
the purpose and character of the use, including whether such use is of a commercial nature or is for non-profit educational purposes;
the nature of the copyrighted work;
the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
the effect of the use upon the potential market for or value of the copyrighted work."
🎬 The Global AI Race: Regulation and Power
Sunday was the official premiere of my much-anticipated talk with Prof. Anu Bradford (900+ people registered for the live event).
This was an extremely interesting conversation and a must-watch for everyone in AI governance. We spoke about:
1. Her new book, “Digital Empires”
2. The Brussels Effect in the context of AI regulation
3. The Brussels Effect in AI through data protection enforcement
4. China, U.S., and the EU's AI regulation strategy
5. The regulation vs. innovation debate
6. Can the three digital empires coexist in AI?
7. What her next book will be about
We analyzed recent events in the context of the "AI race" through legal, regulatory, and geopolitical lenses (in a way you won't be able to find anywhere else).
Paid subscribers can enjoy the full 57-minute recording, and free subscribers can watch a preview at the same link. Don't miss it!
⛔ Anthropic's AI Safety Warning
Dario Amodei, CEO of Anthropic, called the AI Action Summit in Paris a missed opportunity for AI governance and safety, emphasizing that "time is short." He raised three essential issues that everyone in AI should be aware of. Read his statement:
"We were pleased to attend the AI Action Summit in Paris, and we appreciate the French government’s efforts to bring together AI companies, researchers, and policymakers from across the world. We share the goal of responsibly advancing AI for the benefit of humanity. However, greater focus and urgency is needed on several topics given the pace at which the technology is progressing. The need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching—these should all be central features of the next summit.
Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring. There are potentially greater economic, scientific, and humanitarian opportunities than for any previous technology in human history—but also serious risks to be managed.
First, we must ensure democratic societies lead in AI, and that authoritarian countries do not use it to establish global military dominance. Governing the supply chain of AI (including chips, semiconductor manufacturing equipment, and cybersecurity) is an issue that deserves much more attention—as is the judicious use of AI technology to defend free societies. (…)”
His statement continues here.
📚 AI Book Club: What Are You Reading?
Our 17th recommended book was "Chip War: The Fight for the World's Most Critical Technology," by Chris Miller.
See our previous reads and join 2,250 readers who never miss our book recommendations:
👾 On Fully Autonomous AI Agents
The paper "Fully Autonomous AI Agents Should Not be Developed" by Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, and Giada Pistilli is a must-read for everyone in AI. Here's what the authors conclude:
"The history of nuclear close calls provides a sobering lesson about the risks of ceding human control to autonomous systems. For example, in 1980, computer systems falsely indicated over 2,000 Soviet missiles were heading toward North America. The error triggered emergency procedures: bomber crews rushed to their stations and command posts prepared for war. Only human cross-verification between different warning systems revealed the false alarm.
Similar incidents can be found throughout history. Such historical precedents are clearly linked to our findings of foreseeable benefits and risks.
We find no clear benefit of fully autonomous AI agents, but many foreseeable harms from ceding full human control. Looking forward, this suggests several critical directions:
1. Adoption of agent levels: Widespread adoption of clear distinctions between levels of agent autonomy. This would help developers and users better understand system capabilities and associated risks.
2. Human control mechanisms: Developing robust frameworks, both technical and policy level (Cihon, 2024) that maintain meaningful human oversight while preserving beneficial semi-autonomous functionality. This includes creating reliable override systems and establishing clear boundaries for agent operation.
3. Safety verification: Creating new methods to verify that AI agents remain within intended operating parameters and cannot override human-specified constraints.
The development of AI agents is a critical inflection point in artificial intelligence. As history demonstrates, even well-engineered autonomous systems can make catastrophic errors from trivial causes.
While increased autonomy can offer genuine benefits in specific contexts, human judgment and contextual understanding remain essential, particularly for high-stakes decisions.
The ability to access the environments an AI agent is operating in is essential, providing humans with the ability to say 'no' when a system’s autonomy drives it well away from human values and goals."
🔥 Looking for a Job in AI Governance?
Every week, we send job seekers an email alert with new job openings in privacy and AI governance. Increase your chances: explore our global job board and subscribe to our free weekly alerts:
💡 Before you go…
Thank you for reading and supporting my work! If you enjoyed this edition, here's what you can do next:
→ Keep the conversation going
Start a discussion on social media about this edition's topic;
Share this edition with friends, adding your critical perspective;
Looking for an authentic gift? Surprise them with a paid subscription.
→ For organizations
Teams promoting AI literacy can purchase 3+ subscriptions at a discount here or secure 3+ seats in our live online AI Governance Training at a reduced rate here;
Companies offering AI governance or privacy products can reach thousands of readers by sponsoring this newsletter. Get started here.
👋 Have a great day, and see you soon!
Luiza