👋 Hi, Luiza Jarovsky here. Welcome to the 119th edition of this newsletter on AI policy & regulation, read by 31,000+ subscribers in 145+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI governance pro edition of the newsletter - 🎸 AI music vs. copyright: latest updates - I discuss AI training vs. fair use in the context of AI music and why recent lawsuits might end up being favorable to AI companies. Paid subscribers received it yesterday and can read it here. If you are an AI governance pro, upgrade to paid and get immediate access to my analyses unpacking AI compliance & regulation.
👉 A special thanks to MineOS for sponsoring this week's free edition of the newsletter. Read their article:
The data privacy industry has seen numerous significant developments in 2024, making it challenging to keep track of them all. These include the intricacies of new state privacy laws in the US, the ongoing struggle to pass federal privacy legislation, updates on data protection enforcement in the EU, and discussions on where AI governance fits into the broader picture. Mid-summer is a great time to take stock of the year so far. See the biggest data privacy lessons of the year in this article from MineOS.
🏛️ Lawyers & generative AI: ethical issues
The American Bar Association issued Formal Opinion 512 on the ethical use of generative AI by lawyers, and it's an essential read for lawyers and AI governance professionals. Quotes & comments:
"To competently use a GAI tool in a client representation, lawyers need not become GAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use. This means that lawyers should either acquire a reasonable understanding of the benefits and risks of the GAI tools that they employ in their practices or draw on the expertise of others who can provide guidance about the relevant GAI tool’s capabilities and limitations. This is not a static undertaking. Given the fast-paced evolution of GAI tools, technological competence presupposes that lawyers remain vigilant about the tools’ benefits and risks. Although there is no single right way to keep up with GAI developments, lawyers should consider reading about GAI tools targeted at the legal profession, attending relevant continuing legal education programs, and, as noted above, consulting others who are proficient in GAI technology" (pages 2-3)
"Before lawyers input information relating to the representation of a client into a GAI tool, they must evaluate the risks that the information will be disclosed to or accessed by others outside the firm. Lawyers must also evaluate the risk that the information will be disclosed to or accessed by others inside the firm who will not adequately protect the information from improper disclosure or use because, for example, they are unaware of the source of the information and that it originated with a client of the firm. Because GAI tools now available differ in their ability to ensure that information relating to the representation is protected from impermissible disclosure and access, this risk analysis will be fact-driven and depend on the client, the matter, the task, and the GAI tool used to perform it." (page 6)
"Of course, lawyers must disclose their GAI practices if asked by a client how they conducted their work, or whether GAI technologies were employed in doing so, or if the client expressly requires disclosure under the terms of the engagement agreement or the client’s outside counsel guidelines. There are also situations where Model Rule 1.4 requires lawyers to discuss their use of GAI tools unprompted by the client. For example, as discussed in the previous section, clients would need to be informed in advance, and to give informed consent, if the lawyer proposes to input information relating to the representation into the GAI tool.41 Lawyers must also consult clients when the use of a GAI tool is relevant to the basis or reasonableness of a lawyer’s fee" (page 8)
"Lawyers using GAI tools have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature of GAI. In using GAI tools, lawyers also have other relevant ethical duties, such as those relating to confidentiality, communication with a client, meritorious claims and contentions, candor toward the tribunal, supervisory responsibilities regarding others in the law office using the technology and those outside the law office providing GAI services, and charging reasonable fees. With the ever-evolving use of technology by lawyers and courts, lawyers must be vigilant in complying with the Rules of Professional Conduct to ensure that lawyers are adhering to their ethical responsibilities and that clients are protected." (pages 14-15)
➡️ As the American Bar Association's opinion makes clear, AI literacy is essential, especially for lawyers. In the EU AI Act, for example, AI literacy is a legal obligation - see Article 4. (Contact us if you'd like to develop a training program for your team.)
➡️ In my opinion, most lawyers are unprepared at this point: they lack basic knowledge of how generative AI works in practice, how AI models are trained, and what these systems' limitations and risks are. Using these tools in a professional capacity without that knowledge could be detrimental to clients.
➡️ It's not a coincidence that some of the first generative AI scandals involved lawyers using ChatGPT to write legal briefs: the briefs cited nonexistent cases that ChatGPT had fabricated, and the lawyers were fined.
➡️ Lawyers should not be encouraged to use general-purpose AI systems, or any AI system that lacks guardrails built for legal work. Even AI tools built specifically for lawyers should be used with extreme care and under supervision.
➡️ Read the American Bar Association's formal opinion here.
➡️ If you are looking for AI training programs, make sure to check out our 4-week AI Bootcamps. They focus on AI risk, harms, compliance, and regulation and have been attended by 850+ participants. If you are a lawyer or compliance professional, you can't miss them. Save your spot here.
🇳🇬 Nigeria's “National AI Strategy”
Nigeria released its "National AI Strategy," and it's a must-read for everyone in AI governance. Quotes:
"Nigeria and the broader African continent possess some of the most distinctive and compelling challenges and opportunities that AI could address. From optimising agriculture in diverse climates to improving public health infrastructure, locally developed AI solutions, adapted to local realities, are far better equipped to solve these challenges than externally imposed models created for an entirely different context and people. Therefore, developing a homegrown AI strategy that provides Nigeria with a clear roadmap for AI application will catalyse relevant innovation and aid in rebalancing power structures. This presents a massive opportunity for Nigeria to play as the leader of AI in Africa (...)" (page 11)
"Formulating effective strategies for fostering a thriving AI ecosystem in Nigeria necessitates a critical and introspective approach. Understanding the interplay between internal strengths, like a youthful population and budding technology sector, and external factors, such as global trends and market demands, is paramount. By identifying these elements, policymakers, investors, and entrepreneurs can leverage Nigeria's unique potential in the international AI landscape. Recognising internal weaknesses, such as infrastructure deficiencies and lacking skilled personnel, allows for targeted interventions to bridge these gaps. Capitalising on emerging opportunities presented by the AI revolution can unlock significant economic growth and societal benefits for Nigeria.(...)" (page 14)
"Access to quality data is fundamental to developing robust and reliable AI systems. Unfortunately, Nigeria faces significant challenges with data across collection, quality, availability, and accessibility. On data collection, a 2020 report by the World Bank titled "Nigeria Digital Economy Diagnostic" (22), revealed that Nigeria has a low data collection rate. This implies a need for more data in various sectors, hindering the development of AI models that could address critical issues in significant sectors. Additionally, the quality of available data is another crucial issue. Many datasets in Nigeria suffer from inaccuracies, incompleteness, and a lack of standardisation. This data quality needs to be improved to ensure the reliability and effectiveness of AI algorithms, which require clean and accurate data to function optimally. Even when data is available, they must be more cohesive and consistent." (page 36)
➡️ Read the full document here.
⚖️ US judge: Google has an illegal monopoly
A US federal judge, Amit Mehta, ruled that Google has an illegal monopoly over internet search. This is huge! Important quotes from the 286-page decision:
"Google’s dominance has gone unchallenged for well over a decade. In 2009, 80% of all search queries in the United States already went through Google. That number has only grown. By 2020, it was nearly 90%, and even higher on mobile devices at almost 95%. The second-place search engine, Microsoft’s Bing, sees roughly 6% of all search queries—84% fewer than Google."
"But Google also has a major, largely unseen advantage over its rivals: default distribution. Most users access a general search engine through a browser (like Apple’s Safari) or a search widget that comes preloaded on a mobile device. Those search access points are preset with a “default” search engine. The default is extremely valuable real estate. Because many users simply stick to searching with the default, Google receives billions of queries every day through those access points. Google derives extraordinary volumes of user data from such searches. It then uses that information to improve search quality. Google so values such data that, absent a user-initiated change, it stores 18 months-worth of a user’s search history and activity."
"Google pays huge sums to secure these preloaded defaults. Usually, the amount is calculated as a percentage of the advertising revenue that Google generates from queries run through the default search access points. This is known as “revenue share.” In 2021, those payments totaled more than $26 billion. That is nearly four times more than all of Google’s other search-specific costs combined. In exchange for revenue share, Google not only receives default placement at the key search access points, but its partners also agree not to preload any other general search engine on the device. Thus, most devices in the United States come preloaded exclusively with Google. These distribution deals have forced Google’s rivals to find other ways to reach users."
"For the foregoing reasons, the court concludes that Google has violated Section 2 of the Sherman Act by maintaining its monopoly in two product markets in the United States—general search services and general text advertising—through its exclusive distribution agreements."
➡️ Read the full 286-page decision here.
📋 AI Act: what are codes of practice?
The EU AI Office recently opened a call for expressions of interest to participate in drafting the first general-purpose AI Code of Practice. But what are Codes of Practice, and are they the same as Codes of Conduct? Read this:
➡️ According to Article 56(2) of the EU AI Act:
"The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 [*Obligations for Providers of General-Purpose AI Models] and 55 [*Obligations for Providers of General-Purpose AI Models with Systemic Risk], including the following issues:
➵ (a) the means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in light of market and technological developments;
➵ (b) the adequate level of detail for the summary about the content used for training;
➵ (c) the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate;
➵ (d) the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in light of the possible ways in which such risks may emerge and materialise along the AI value chain."
➡️ According to the AI Office's publication:
"The Code will be prepared in an iterative drafting process by April 2025, 9 months from the AI Act’s entry into force on 1 August 2024. The Code will facilitate the proper application of the rules of the AI Act for general-purpose AI models."
➡️ A reminder that Codes of Practice, according to the AI Act, are not the same as Codes of Conduct. The latter are regulated in Article 95 and are:
"(...) intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements set out in Chapter III, Section 2 [*Requirements for High-Risk AI Systems], taking into account the available technical solutions and industry best practices allowing for the application of such requirements."
➡️ This is a great opportunity to participate in the AI regulation process - in practice. If you are interested in helping draft this first code of practice for General-Purpose AI models, you can express your interest by August 25, 9am PT.
➡️ To upskill and advance your career in AI governance & regulation, join the 4th cohort of our EU AI Act Bootcamp and the 10th cohort of our Bootcamp on Emerging Challenges in AI, Tech & Privacy in September. Save your spot here.
🇦🇷 Argentina vs. Meta
Facundo Malaureille Peltzer & Daniel Monastersky filed a complaint with the Argentinian Data Protection Authority (AAIP) over Meta's AI training practices. Important information:
➡️ The complaint questions Meta's data protection practices, specifically its use of personal information from its users to train its AI systems. According to one of the complainants:
“This complaint seeks to establish a legal precedent that will guide future regulations and practices in the field of AI and data protection in our country.”
➡️ Argentina now joins the EU, Brazil, and Nigeria in actively questioning Meta's data practices. Some of these complaints focus on Meta's use of personal data to train AI.
➡️ Read more about recent complaints against Meta in my newsletter article on the topic.
🎤 Are you looking for a speaker in AI, tech & privacy?
I would welcome the opportunity to:
➵ Give a talk at your company;
➵ Speak at your event;
➵ Coordinate a private AI Bootcamp for your team (15+ people).
📄 AI paper alert
The paper "Brave New World? Human Welfare and Paternalistic AI" by Cass Sunstein is a great read for everyone interested in AI, public policy & behavioral economics. Quotes:
"Interventions designed to influence people’s choices may or may not increase social welfare. A recurring problem is that of heterogeneity. People have different needs, preferences, and values, and an intervention that affects a large population might help some and hurt others. The average treatment effect is not the same as the welfare effect. The welfare effect is what matters. How can it be improved? AI, focused on improving social welfare, can provide an answer, at least if it is focused on (1) harms that people do to their future selves and (2) harms that people do to others." (page 4)
"With the risks in mind, the same kinds of consumer protection measures that have long been in place in various nations should be updated and adapted to the context of AI. For law, these measures have a degree of urgency. In addition, the same kind of guardrails that have been suggested for retirement plans might be applied to Choice Engines of multiple kinds, including those involving motor vehicles and appliances. Restrictions on the equivalent of “dominated options,” for example, might be imposed by law, so long as it is clear what is dominated. Restrictions on shrouded attributes, including hidden fees, might be similarly justified. Choice Engines, powered by AI, have considerable potential to improve consumer welfare and also to reduce externalities, but without regulation, we have reason to question whether they will always or generally do that. Those who design Choice Engines may or may not count as fiduciaries, but at a minimum, it makes sense to scrutinize all forms of choice architecture for deception and manipulation, broadly understood." (page 29)
"The principal theme of behavioral economics, and behavioral law and economics, is not that people are stupid. It is that life is hard. Behaviorally informed law and policy does not start from the premise that people are “irrational.” Calling people irrational is not very nice, and it is also false. It is nicer, and more accurate, to say that we sometimes lack important information, and also that we suffer from identifiable biases. Those who seek to help us may also lack important information, and they might also suffer from identifiable biases. Even worse, they might not be trying to help us. In markets, AI provides unprecedented opportunities for targeting informational deficits and behavioral biases." (pages 29-30)
➡️ Read the full paper here.
🔥 AI Governance is HIRING
Below are 10 AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. The Future Society (US): Director, U.S. AI Governance - apply
2. GULP – experts united (Germany): AI Governance Lead - apply
3. Warner Bros. Discovery (US): Director, AI Governance - apply
4. AstraZeneca (Spain): Enterprise AI Governance Associate - apply
5. Swiss Re (Slovakia): Data & AI Governance Architect - apply
6. Upstate Medical University (US): AI Governance Lead - apply
7. Sutter Health (US): Director, Data and AI Governance - apply
8. Trace3 (US): Sr. Consultant, AI Governance Risk - apply
9. M&T Bank (US): AI Governance Senior Consultant - apply
10. CGI (US): AI Governance Specialist - apply
➡️ For more AI governance and privacy job opportunities, subscribe to our weekly job alert. Good luck!
🚀 Upskill & advance your AI governance career
Now is a great time to learn more about AI and invest in your professional growth in this field. Save your spot in the September cohorts of our AI Bootcamps:
1. Emerging Challenges in AI, Tech & Privacy (4 weeks)
🗓️ Tuesdays, September 3 to 24 - learn more
2. The EU AI Act Bootcamp (4 weeks)
🗓️ Wednesdays, September 4 to 25 - learn more
More than 850 people have participated in our training programs - don't miss them!
🙏 Thank you for reading!
If you have comments on this week's edition, write to me, and I'll get back to you soon.
All the best, Luiza