👋 Hi, Luiza Jarovsky here. Welcome to the 96th edition of this newsletter, read by 20,550+ email subscribers in 120+ countries. I hope you enjoy reading it as much as I enjoy writing it.
A special thanks to hoggo, this week's sponsor. Check them out:
Assessing a company’s privacy and security practices can be time-consuming and frustrating. Luckily, hoggo gives you all the information you need with a single click, so you can quickly assess vendors against industry standards and see which vendors pose higher privacy risks, and why. hoggo also automatically monitors your vendors every day and alerts you to any changes in policies, data breaches, or sub-processors. See it for yourself; it’s free.
🌎 UN adopts first global resolution on AI
The UN General Assembly adopted the first global resolution on AI. This is what you need to know:
➡️ The first important aspect of this resolution is that the UN recognizes that AI can help accelerate the achievement of the 17 Sustainable Development Goals, and it stresses the urgency of "achieving global consensus on safe, secure, and trustworthy artificial intelligence systems."
➡️ Below are the 17 Sustainable Development Goals:
1. No poverty
2. Zero hunger
3. Good health and well-being
4. Quality education
5. Gender equality
6. Clean water and sanitation
7. Affordable and clean energy
8. Decent work and economic growth
9. Industry, innovation and infrastructure
10. Reduced inequalities
11. Sustainable cities and communities
12. Responsible consumption and production
13. Climate action
14. Life below water
15. Life on land
16. Peace, justice, and strong institutions
17. Partnerships for the goals
➡️ On page 3 of the resolution, the UN recognizes possible downsides of the use of AI "without adequate safeguards or in a manner inconsistent with international law," for example:
➵ "hinder progress towards the achievement of the 2030 Agenda for Sustainable Development and its Sustainable Development Goals and undermine sustainable development in its three dimensions – economic, social, and environmental;
➵ widen digital divides between and within countries;
➵ reinforce structural inequalities and biases;
➵ lead to discrimination;
➵ undermine information integrity and access to information;
➵ undercut the protection, promotion, and enjoyment of human rights and fundamental freedoms, including the right not to be subject to unlawful or arbitrary interference with one’s privacy;
➵ increase the potential risk for accidents and compound threats from malicious actors"
➡️ It also encourages Member States to enact AI regulation and AI policies on various topics, including privacy. The resolution mentions:
"Safeguarding privacy and the protection of personal data when testing and evaluating systems, and for transparency and reporting requirements in compliance with applicable international, national and subnational legal frameworks, including on the use of personal data throughout the life cycle of artificial intelligence systems"
➡️ Overall, it's an interesting summary of a global perspective on the meaning of “safe, secure, and trustworthy” AI.
🇪🇺 The EU Commission opened non-compliance investigations against Alphabet, Apple & Meta
The EU Commission opened non-compliance investigations against Alphabet, Apple, and Meta under the Digital Markets Act (DMA), and the fines could be very high. Here's what you need to know:
➡️ Under the DMA, Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft were classified as "gatekeepers" and had to fully comply with all DMA obligations by 7 March 2024.
➡️ The EU Commission assessed the compliance reports setting out the gatekeepers' measures and gathered stakeholder feedback. It decided to investigate Alphabet, Apple, and Meta for the following reasons:
➵ Alphabet's and Apple's steering rules (Article 5(4) of the DMA requires gatekeepers to allow app developers to “steer” consumers to offers outside the gatekeepers' app stores, free of charge)
➵ Alphabet's measures to prevent self-preferencing (Article 6(5) of the DMA says that the gatekeeper "shall not treat more favorably, in ranking and related indexing and crawling, services and products offered by the gatekeeper itself than similar services or products of a third party")
➵ Apple's compliance with user choice obligations (Article 6(3) of the DMA says that "the gatekeeper shall allow and technically enable end users to easily uninstall any software applications on the operating system of the gatekeeper (...)")
➵ Meta's “pay or consent” model (The EU Commission wants to understand "whether the recently introduced pay or consent model for users in the EU complies with Article 5(2) of the DMA, which requires gatekeepers to obtain consent from users when they intend to combine or cross-use their personal data across different core platform services.")
➡️ The EU Commission intends to conclude the investigations within 12 months.
➡️ In case of an infringement, the EU Commission can impose fines of up to 10% of the company's total worldwide turnover.
➡️ Fines can go up to 20% in case of repeated infringement.
➡️ In case of systematic infringements, the EU Commission may also order a gatekeeper to sell a business or parts of it, or ban it from acquiring additional services related to the systematic non-compliance.
The EU is acting fast - again.
⚖️ Generative AI and legal implications
Amid the extreme hype around generative AI, most people have not realized the extent to which it can get them into legal trouble. Some examples:
➵ If you say you are using generative AI to 'embellish' your product when you are actually not using it, you might be violating consumer protection laws, sectoral laws, professional conduct rules, and more.
➵ If you deny using generative AI in order to hide your 'productivity hack' when you are actually using it, that's unethical, and you might be violating sectoral laws, professional conduct rules, and more.
➵ If you don't disclose that you used generative AI in order to mislead one or more people, you might be violating criminal law, civil law, contract law, consumer law, sectoral laws, and more.
➵ If you use generative AI in a context where you shouldn't be using it, you might be violating corporate law, professional conduct rules, internal policies, and more.
➵ If you don't double-check generative AI's output before using it, you might be violating privacy law, copyright law, consumer law, sectoral laws, professional conduct rules, and more.
➡️ Best choices:
➵ Check if using generative AI is advisable/permitted, especially in a professional context;
➵ When using generative AI, be transparent about it;
➵ When using generative AI, always double-check the output before using it;
➵ Never use generative AI as a tool to trick or mislead people.
💰 French Competition Authority imposes a €250 million fine on Google
The French Competition Authority imposed a €250 million fine on Google for not complying with previous commitments and for using news articles to train its AI system (Bard/Gemini). Here are some quotes and comments:
"The investigation revealed that Google used content from the domains of press publishers and news agencies at the stage of training the founding model of its artificial intelligence service (...) and the display of the answers to the user without either the publishers and press agencies or the Authority having been informed of these uses."
"The Authority considers, at the very least, that by failing to inform publishers of the use of their content for their Bard software, Google has breached commitment no. 1."
"Furthermore, Google has not offered (...) a technical solution allowing publishers and press agencies to oppose the use by Bard of their content without affecting the display of this content on other Google services. Indeed, until this date, publishers and press agencies wishing to oppose this use had to insert an instruction opposing any indexing of their content by Google, including on the Search, Discover, and Google News services, which were precisely the subject of a negotiation for the remuneration of neighboring rights"
➡️ This enforcement action highlights important topics in the context of AI training and generative AI:
➵ the use of news articles and press publishers' data to train AI systems (and whether any rights or obligations are attached to that use);
➵ the need to offer an opt-out solution to publishers that do not want to have their data used to train AI systems;
➵ the fact that this opt-out solution should not affect the display of content protected under related rights on other services of the same tech company (and should not hinder publishers' and press agencies' ability to negotiate compensation).
➡️ AI training and the related rules and obligations that tech companies must follow are being questioned and litigated in many fields of law, including competition, privacy, copyright, and consumer protection.
💵 The SEC charges investment advisers with making false & misleading statements about their use of AI
Here's what you need to know:
➵ According to the US Securities and Exchange Commission (SEC), "investment advisers should not mislead the public by saying they are using an AI model when they are not. Such AI washing hurts investors.”
➵ In this specific case, one of the investment advisers, Delphia, claimed that it “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else,” when, in fact, it did not have the AI capabilities it claimed.
➵ The other investment adviser, Global Predictions, claimed in 2023 to be the “first regulated AI financial advisor” and said its platform provided “expert AI-driven forecasts”; both claims were false.
➵ Delphia agreed to pay a civil penalty of $225,000, and Global Predictions agreed to pay a civil penalty of $175,000.
➵ These important enforcement actions show that:
- AI harms span a broad spectrum and can arise in various legal fields, under the jurisdiction of various regulatory authorities; and
- the SEC is another authority paying close attention.
🎬 YouTube requires disclosure of ‘realistic’ AI content
YouTube now requires creators to disclose when 'realistic' content is made using generative AI. What you need to know:
➡️ It only applies to realistic content, meaning content a viewer could mistake for a real person, place, or event.
➡️ If generative AI was used for productivity, such as generating scripts, content ideas, or automatic captions, disclosure is not required.
➡️ Additional types of modifications that don't require disclosure: clearly unrealistic content, special effects, beauty filters, and visual enhancements.
➡️ A label will appear in the video's expanded description and, for more sensitive topics, on the video itself.
➡️ In some cases, YouTube will add a label even when the creator hasn't disclosed it (e.g., if the AI content has the potential to confuse or mislead people).
➡️ YouTube says they are working on a privacy process for people to request the removal of AI-generated content that simulates an identifiable individual (e.g., voice or face).
Given that thousands of new videos are uploaded to YouTube every minute, these new requirements will likely impact millions of users around the world.
Still, given the potential harm associated with AI deepfakes, self-regulation/labeling will not be enough. Hopefully, lawmakers around the world already have an action plan for it.
🎓 Dive deeper, learn with peers, get a certificate
If you enjoy this newsletter and want to dive deeper into some of the topics I cover here and beyond, join the April cohort of our 4-week Bootcamp on Emerging Challenges in Privacy, Tech & AI. Check out the program, read testimonials from some of our 700+ participants, and save your spot here. If this is not the right time, join our waitlist. If you are looking for corporate training, check out our programs.
💻 AI Governance: Key Concepts & Best Practices
If you are interested in AI governance, you can't miss this panel. I invited four experts—Alexandra Vesalga, Kris Johnston, Katharina Koerner, and Ravit Dotan—to discuss emerging issues in the context of AI governance with me. This was a fascinating session full of practical and actionable insights. Watch it on my YouTube channel or listen to it as a podcast.
🎤 Register for my upcoming live panel on the AI Act
The European Parliament has recently approved the AI Act, and as it will soon become law, this is a great time to understand how it will affect tech companies and all of us in practice, as well as some of the challenges and unsolved issues. In this context, I invited three experts on the topic—Luca Bertuzzi, Gianclaudio Malgieri, and Risto Uuk—to join me on April 4 for a fascinating live session covering challenges, opportunities, and practical insights. Register here.
📚 AI Book Club: “The Worlds I See” by Fei-Fei Li
Our AI Book Club has 850+ members, and I've just announced the 4th book we're reading: “The Worlds I See” by Fei-Fei Li. We'll meet in May to discuss it. Interested? Check out our book list and join the AI Book Club here.
🤖 Job opportunities
If you are looking for jobs in privacy or AI, check out our privacy job board and our AI job board, which together contain hundreds of open positions.
If you have comments on this week's newsletter edition, I'll be happy to hear them! Reply to this email, and I'll get back to you soon.
If you have friends interested in AI policy & regulation, consider recommending this newsletter to them.
Have a great week!
Luiza