👋 Hi, Luiza Jarovsky here. Welcome to the 79th edition of this newsletter! Thank you to 83,000+ followers on various platforms and paid subscribers who support my work. To get in touch, invite me to speak, or become a sponsor: visit my personal page.
🛍 BLACK FRIDAY OFFER: we are offering a 20% discount on our Masterclasses until this Saturday at 11:59 pm PT. Use the code BF2023 when registering. Save your spot today!
✍️ This newsletter is fully written by a human (me), and illustrations are AI-generated.
A special thanks to Didomi, this edition's sponsor:
In a world where data privacy regulations have proliferated, managing privacy has become increasingly complex. Didomi’s Global Privacy UX Solutions are built to simplify this challenge. Check out how Didomi can help you manage regulatory requirements, balance user expectations and streamline privacy across the business - read the article.
🎬 YouTube's crackdown against AI deepfakes
YouTube published a blog post announcing various measures to label AI-generated content and deal with AI deepfakes. Among them:
YouTube will introduce features that inform viewers when the content they’re seeing is synthetic, i.e., created or modified by AI. They will focus on "realistic" content;
YouTube explicitly said that creators who consistently choose not to disclose whether their content was synthetically modified may be subject to content removal, suspension from the YouTube Partner Program, or other penalties;
YouTube will inform viewers that the content is synthetic in two ways:
- a new label will be added to the description;
- for certain types of content involving sensitive topics, they will apply a more prominent label; in cases where a label alone may not be enough to mitigate the risk of harm, the synthetic media may be removed;
creators and artists will be able to request the removal of AI-generated content that simulates an identifiable individual (they will have a "privacy request process");
musicians will be able to request the removal of AI-generated content that mimics an artist’s unique singing or rapping voice;
Below is a screenshot from their blog post where you can see a mockup showing a label added to the description:
It's an interesting development to help curb harmful AI-based deepfakes, and I'm curious to see how it will work in practice. For now, I can foresee three practical challenges:
how will these labels be integrated into YouTube's user experience (UX)? Many users never open a video's description, so disclosing only there does not seem enough to me;
how will “realistic” be defined, and what will its boundaries be? This is extremely important, as labeling will only be applied to synthetic content that aims to appear “realistic”;
which topics will be considered sensitive, so that AI deepfakes about them are automatically removed, regardless of the existence of a label?
I would love to hear your thoughts!
*For those who want to dive deeper into the topic, sign up for our new 90-minute live Masterclass AI Deepfakes, Anthropomorphism, and Privacy Implications, which will be held on December 6th.
🚨 The current state of cookie banners
I've recently come across this cookie banner:
In summary:
"Your privacy, your choice"
"When accepting optional cookies, your personal data will be transferred to unrelated third parties, with unknown data protection standards"
"You must accept all cookies"
My comment: sometimes choice means nothing at all, and this is not how “notice and choice” should work in practice.
I've written extensively about dark patterns in privacy - you can read previous newsletter articles and my academic research. If you want to dive deeper into the topic, join my next 90-minute live Masterclass on Dark Patterns & Privacy User Experience. Places are limited; save your spot.
📌 Job opportunities
Are you looking for a job in privacy? Transitioning to AI governance? Check out hundreds of opportunities on our global privacy job board and AI job board. Wishing you good luck!
🎓 MASTERCLASS: AI Deepfakes, Anthropomorphism, and Privacy
Last week, we launched our new Masterclass: AI Deepfakes, Anthropomorphism, and Privacy Implications. Join us on December 6 for this unique 90-minute live course that will help you get ahead of emerging AI challenges. You will receive additional reading material, 1.5 pre-approved IAPP credits, and a certificate. Places are limited: watch the teaser, read more, and save your spot. To check all available Masterclasses or organize a group session: visit our website.
📚 Join our AI Book Club
200+ people have registered for our AI Book Club and received the invitation to the 1st meeting on December 14. This is what will happen:
We'll discuss Kate Crawford's book "Atlas of AI"
There will be 6 book commentators: Adeteju Enunwa, Dominga Leone, Gregory Manwelyan, Marlon Domingus, Melanie Tan, and Oana Iordachescu
The session will last around 1 hour
You'll broaden your perspective on emerging AI-related challenges and hear what some of your peers are thinking
You'll meet people who are interested in better understanding and getting ready for "the age of AI"
It will be fun
If this sounds interesting to you, join us and share it with your network. See you there!
🔔 New article on AI regulation
Profs. Gianclaudio Malgieri & Frank Pasquale have recently published the article "Licensing high-risk artificial intelligence: Toward ex ante justification for a disruptive technology," which I recommend you read in full. Below are some important quotes:
"Thanks to the well-recognized “black box” problem, identifiable AI abuses are only the tip of an iceberg of problems. AI systems can be opaque, nonlinear, and unpredictable, and they evolve rapidly. This makes it difficult to keep ex-post, reactive regulations up to date with the latest technological advances." (page 1)
"To summarise, our proposal here is to impose a licensure model on the providers of AI systems. The conformity assessment model for high- risk AI systems in the proposed AI Act is a good starting point. However, considering the limited scope of the conformity assessment (limited to the high-risk systems), the limited transparency of the justification documents produced by AI providers, and the limited principles to which the AI providers should prove compliance in the conformity assessment (limited reference to fairness, data protection, vulnerable users’ protection) we propose that the AI Act and any AI regulation across the world might be based on a more comprehensive licensure model based on AI justification." (page 11)
"Without proper assurances that the abuse of AI has been foreclosed, citizens should not accede to the large-scale application of AI now underway. Not only ex post enforcement but also ex ante licensure procedures are necessary to ensure that AI is only used for permissible purposes and is “justified” , i.e. is not merely “explainable” but also lawful, fair, non-biased, non-manipulative, non-discriminatory, secure, and purpose-limited, respecting both data minimisation and storage limitation requirements." (page 15)
As the AI Act is still being debated and the future of AI regulation is uncertain, the "ex ante" approach and the presumption of unlawfulness for high-risk AI models are interesting proposals. I would love to hear your thoughts.
🫶 Enjoying the newsletter?
Refer your friends. It only takes 15 seconds (writing this newsletter takes me 15 hours). When your friends subscribe, you get free access to the paid content.
🖥️ Privacy & AI in-depth
On November 28, I will talk with Prof. Ryan Calo about Humans, Robots, and Vulnerability in the Age of AI. We'll discuss his recent paper with Daniella DiPaola, Socio-Digital Vulnerability, and other topics in the context of Prof. Calo's scholarship. To join the session, register here. Every month, I host a live conversation with a global expert - I've spoken with Max Schrems, Dr. Ann Cavoukian, Prof. Daniel Solove, and various others. Access the recordings on my YouTube channel or podcast.
🤖 The OpenAI drama
Emmett Shear, Twitch's co-founder, is OpenAI's 3rd CEO in 48 hours, and Sam Altman will join a new AI research team at Microsoft. It's still unclear what led to this, but don't underestimate the importance of AI regulation.
📝 Are you a DPO or GDPR expert?
Noyb, led by Max Schrems, is running this anonymous survey on 5+ years of GDPR compliance, and they would appreciate your input. It takes 5-10 minutes to finish; if you would like to help, use this link.
🚫 Sharenting: privacy & other harms
As I've written previously in this newsletter: the internet is not a safe place for kids, and we should normalize never posting anything about children online.
See my infographic below, and please spread the message. You can also share my post about the topic on LinkedIn and X.
📎 My perspective on Bill Gates' AI forecasts
Bill Gates has recently published an article on GatesNotes with his vision for the future of AI. Below I analyze his post and share my perspective: