👋 Hi, Luiza Jarovsky here. Welcome to the 95th edition of this newsletter, read by 20,000+ email subscribers in 115+ countries. I hope you enjoy reading it as much as I enjoy writing it.
A special thanks to hoggo, this week's sponsor. Check them out:
Vendor assessments are crucial to protect your company from third-party risks. When done manually and without structure, these assessments can take weeks (or months), drain resources, and produce inconsistent results. When it comes to privacy, you don't want to rely on luck - and with hoggo, there's no reason to. hoggo lets you assess the privacy and security maturity of your potential vendors so you can easily compare them and make sure that you’re engaging with trustworthy ones. Join now, it’s free.
🚨 Europe is acting fast
Last week, the EU was all over the news with the European Parliament's approval of the AI Act. The world is watching the AI Act closely, and my post summarizing some of its main points amassed almost 2 million views on X/Twitter.
Despite the buzz, it will still take some time for the AI Act's rules to become enforceable. The AI Act is expected to officially become law by May or June 2024, and its provisions will start taking effect in stages:
➵ 6 months later: EU countries will be required to ban prohibited AI systems
➵ 1 year later: rules for general-purpose AI systems will start applying
➵ 2 years later: the whole AI Act will be enforceable
-
The AI Act is not the only European legal framework impacting technology companies. The Digital Services Act (DSA) - which entered into force a few months ago - is also being enforced:
Under the DSA, the European Commission designated 17 very large online platforms (VLOPs) and 2 very large online search engines (VLOSEs), and the EU is speeding up enforcement against them. This is what happened in the last few days:
🇪🇺 The European Commission sent formal requests for information to Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X, asking for more information on how they are mitigating risks linked to generative AI on their services, especially regarding:
➵ AI "hallucinations"
➵ The viral dissemination of deepfakes
➵ The automated manipulation of services that can mislead voters
-
🇪🇺 The European Commission opened formal proceedings against AliExpress to verify, among other practices, the company's:
➵ "Lack of enforcement of their terms of service prohibiting certain products posing risks for consumers' health (such as fake medicines and food as well as dietary supplements) and for minors specifically (access to pornographic material), which consumers can still find on the platform;
➵ Lack of effective measures to prevent dissemination of illegal content;
➵ Lack of effective measures to prevent intentional manipulation on the online platform through so-called 'hidden links';
➵ Lack of effective measures to prevent risks deriving from features, such as influencers promoting illegal or harmful products through the 'Affiliate Programme'"
➵ and more.
If proven, these failures would constitute infringements of Articles 16, 20, 26, 27, 30, 34, 35, 38, 39 and 40 of the DSA. Fines can reach up to 6% of the company's global annual turnover.
-
🇪🇺 The European Commission has formally requested information from LinkedIn to understand how the company is complying with the prohibition on showing ads based on profiling using special categories of personal data, such as sexual orientation, political opinions, or race.
Beyond the DSA, there are other enforcement actions against tech companies taking place in Europe:
🇮🇹 The Italian Competition Authority has imposed a €10 million fine on TikTok after concluding that the company:
"(...) failed to implement appropriate mechanisms to monitor content published on the platform, particularly those that may threaten the safety of minors and vulnerable individuals. Moreover, this content is systematically re-proposed to users as a result of their algorithmic profiling, stimulating an ever-increasing use of the social network."
-
Despite all the attention to the "European way" of regulating tech, especially AI, other countries have chosen different paths.
🇮🇳 India, for example, chose to let generative AI developers “self-regulate” and label their own products.
On Friday, a new advisory from India's Ministry of Electronics and Information Technology established that:
“Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labeling the possible inherent fallibility or unreliability of the output generated”
The new advisory dropped the earlier requirement that developers obtain government permission before making such products available to users in India, which the previous official rules mandated.
It will be interesting to compare how AI development and governance unfold in India vs. the EU.
If you have friends interested in tech policy and AI regulation, consider sharing this newsletter article with them:
🎓 Join the 5th edition of the Bootcamp
Registration for the April edition of our 4-week Bootcamp on Emerging Challenges in Privacy, Tech & AI is open! The Bootcamp is a great opportunity to explore some of the most fascinating topics in privacy and AI, acquire new skills, and advance your career. The live sessions will happen on Thursdays at 1pm ET (6pm UK time) starting on April 11.
Check out the full program and save your spot here. If this is not the right time, you can join our waitlist to be notified of upcoming training programs.
💻 AI Governance: Key Concepts & Best Practices
If you are interested in AI governance, you can't miss this panel. I invited four experts—Alexandra Vesalga, Kris Johnston, Katharina Koerner, and Ravit Dotan—to discuss emerging issues in the context of AI governance with me. This was a fascinating session full of practical and actionable insights. Watch it on my YouTube channel or listen to it as a podcast.
❝ Weekly quote
“Knowing is not enough; we must apply. Willing is not enough; we must do.” - Johann Wolfgang von Goethe
📚 AI Book Club: “The Worlds I See” by Fei-Fei Li
Our AI Book Club has 830+ members, and I've just announced the 4th book we're reading: “The Worlds I See,” by Fei-Fei Li. We'll meet in May to discuss it. Interested? Check out our book list and join the AI Book Club here.
🎤 Register for my upcoming live panel on the AI Act
The European Parliament has recently approved the AI Act, and as it will soon become law, this is a great time to understand how it will affect tech companies and all of us in practice, as well as some of the challenges and unsolved issues. In this context, I invited three experts on the topic—Luca Bertuzzi, Gianclaudio Malgieri, and Risto Uuk—to join me on April 4 for a fascinating live session covering challenges, opportunities, and practical insights. Register here.
🔍 Kal·AI·doscope #6
Why is the recent interview with Mira Murati, OpenAI's CTO, problematic? What do cakes and AI systems have in common? I explain it in 1 minute. Watch:
Every week, I share a short video with my commentary on an AI-related topic. Watch the full Kal·AI·doscope playlist here.
🎓 Privacy & AI training
I would be happy to speak at your event or train your team. You can choose one of our standard programs or contact me to discuss a different format.
🤖 Job opportunities
If you are looking for a job or know someone who is, check out our privacy job board and AI job board, which contain hundreds of open positions. We also send a weekly email alert with selected job openings; visit the links above and subscribe.
If you have comments on this week's newsletter edition, I'll be happy to hear them! Write to me, and I'll get back to you soon.
Have a great week!
Luiza