👋 Hi, Luiza Jarovsky here. Welcome to the 92nd edition of this newsletter on privacy, tech & AI, read by 18,559 email subscribers in 115+ countries.
📲 For daily privacy, tech & AI content, join 63,861 followers on LinkedIn, X, YouTube, Instagram, Threads & TikTok (and come say hi).
A special thanks to hoggo, this edition's sponsor. Check them out:
The average company engages with over 130 vendors and shares personal data with them, including sensitive customer data. What steps can you take to ensure that your customers' data is in good hands, as the GDPR requires you to do? Simply look up your vendors on hoggo and assess their privacy and security practices within minutes. This way, you can easily spot high-risk service providers and find trustworthy alternatives. It's that simple. Join hoggo - it's free.
⛔ Meta's AI training practices are questionable
As we navigate the generative AI wave - which has probably passed its peak - the topic of AI training remains surrounded by legal and ethical uncertainty.
Let's use Meta as an example. During the company's Q4 2023 earnings call, held on February 1, Mark Zuckerberg nearly bragged about the amount of data Meta could use to train its AI systems:
“On Facebook and Instagram there are hundreds of billions of publicly shared images and tens of billions of public videos, which we estimate is greater than the Common Crawl dataset and people share large numbers of public text posts in comments across our services as well. But even more important than the upfront training corpus is the ability to establish the right feedback loops with hundreds of millions of people interacting with AI services across our products.”
AI training - like any other practice involving the collection and processing of personal data - must comply with the law. In this context, the FTC stated in a blog post from February 13 that:
“It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.”
Against this backdrop, the legality of Meta's AI training practices is unclear to me. On this page, Meta says that it relies on user data to train its models. However, in its privacy policy - especially in the section "How do we use your information" - there is no mention of AI training. In fact, to my surprise, the word "AI" does not appear a single time in the privacy policy.
Additionally, as I mentioned in my recent video, AI training is a separate processing purpose, unrelated to Meta's main services, and people should have the right to opt out of having their personal information used for it.
Whatever is happening here deserves a closer look from data protection authorities.
🎤 Join our upcoming live session on AI governance
If you are interested in AI, you can't miss our upcoming live session. I invited four experts - Alexandra Vesalga, Kris Johnston, Katharina Koerner, and Ravit Dotan - to discuss emerging issues in AI governance with me. This will be a fascinating session, full of practical and actionable insights. Read more about the topics we'll cover, register here, and join us live.
➡️ Privacy & AI leadership training
I would welcome the opportunity to upskill your privacy team. My areas of expertise include (1) legal issues in AI, (2) privacy & AI, (3) AI deepfakes, (4) AI manipulation, (5) dark patterns in privacy, (6) privacy UX, and (7) privacy-enhancing design. Feedback has been extremely positive - get in touch, and let's discuss your needs.
🎬 Tackling dark patterns & online manipulation
You can't miss this month's podcast episode with Prof. Cristiana Santos and Prof. Woodrow Hartzog on dark patterns and online manipulation. You can watch our conversation on my YouTube channel or listen to it on my podcast.
🤖 Jobs in privacy & AI
If you are looking for a job or know someone who is, check out our privacy job board and our AI job board, which contain hundreds of open positions. In addition, we send a weekly email alert with selected job openings: visit the links above and subscribe.
🎓 Become a leader in privacy & AI
If you enjoy this newsletter, you will love our 4-week Privacy, Tech & AI Bootcamp. The first cohort has recently ended, and the LinkedIn posts celebrating the certificates have filled us with gratitude for being part of so many thriving professionals' journeys:
We launched two new cohorts starting in March: read the full program and register here (seats are limited). We offer special discounts for NGO members, teams (4+ people), and those who cannot afford the fee: get in touch and register today.
📚 AI Book Club: 760+ members
Interested in AI? Love reading and would like to read more? Our AI Book Club is for you! There are 760+ people registered, and we are currently reading “Unmasking AI,” by Dr. Joy Buolamwini. We'll meet on March 14 to discuss it. During the 1-hour meeting, five book commentators will share their perspectives, and everybody can join the discussion. Join the AI Book Club here.
🔍 Kal.AI.doscope #3: Google Gemini
Gemini, Google's AI tool, brought an unexpected outcome, and it's not what most people are talking about. I explain in 4 minutes - watch:
Every week, I share a short video with my commentary on an AI-related topic. Watch the full Kal.AI.doscope playlist here.
🚀 AI Briefing
Paid subscribers support my independent work, get 20% off the 4-week Bootcamp, and receive a separate email with the AI Briefing, my commentary on the week's most important AI topics - especially from a data protection and AI governance perspective. I strongly recommend it for those working in privacy & AI. Choose a paid subscription here.
I'm happy to hear your thoughts about this week's edition: you're welcome to get in touch, and I'll get back to you soon.
I wish you a great week, and see you next Tuesday - Luiza