Hi, Luiza Jarovsky here. Welcome to the 197th edition of my newsletter, empowering a 59,200+ strong AI governance community.
Paid subscribers never miss my analyses of AI's legal and ethical challenges, covering the latest developments in AI governance:
For more: 4-Week Training | Learning Center | Forum | Job Board
Before we start: if you enjoy this newsletter and would like to explore AI's legal and ethical challenges in an in-depth and interactive way, consider joining my AI Governance Training. It's a 4-week, 15-hour, live online program led by me.
I've trained over 1,200 professionals, and each cohort is limited to 30 participants. Many have found the experience transformative (see testimonials), and it might be the career boost you need. I hope to see you there!
OpenAI Ignores Privacy by Design
Something that's been clear since the beginning of the generative AI wave is that AI companies like OpenAI expressly ignore privacy by design. They often build AI tools and features that can put users' privacy at risk.
I wrote back in April 2023 that OpenAI chose not to implement privacy by design; instead, they followed a Privacy-by-Pressure approach. Two years have passed, and this, sadly, remains true.
In today's edition, I highlight essential privacy settings in general-purpose AI systems like ChatGPT: settings that companies often obscure and that most people are unaware of.
I also discuss privacy-preserving behavior while using AI, some new privacy-invasive capabilities and trends everyone should be aware of, and my thoughts on the future of privacy UX and responsible AI design.
*A reminder: staying as private as possible while using AI requires multiple steps and sometimes behavioral changes.
Why?
First, privacy-invasive features are often ON by default.
Second, the design of AI systems is usually not optimized for privacy protection, transparency, or awareness. (More on this below)
Opting Out of AI Training
Most data used to train AI comes from web scraping, where no express user consent is involved.
With that in mind, if you post personal or sensitive content about yourself, friends, or family on the internet (for example, on social media), I recommend checking those platforms' AI policies and considering opting out of AI training.
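A related lever exists for people who also publish on their own websites: several major AI crawlers document user-agent strings that can be blocked in robots.txt. Here is a minimal sketch; the user agents shown (OpenAI's GPTBot, Google's Google-Extended, Common Crawl's CCBot) are documented at the time of writing, but the list changes over time and compliance is voluntary:

```
# robots.txt at the site root: ask documented AI training crawlers to stay away.
# Note: this relies on the crawlers honoring the file; it is not enforcement.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

This only helps site owners, of course; it does nothing for content you post on platforms you don't control.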
Unfortunately, it's becoming harder to opt out, even for people in the EU who are covered by the General Data Protection Regulation (GDPR), a stricter data protection framework.
As an example, last week, Meta announced that it will also use data from EU-based users to train its AI. The announcement makes no express mention of opt-out tools (which were previously available).
Turning Off Model Training
Many people don't realize that while using AI systems like AI chatbots, model training is often "ON" by default. This means that all the data you input (your prompts) is used to train the underlying AI model.
Since most people will sooner or later forget to be cautious about the information they input into the system, I generally recommend deactivating model training.
If you use ChatGPT, you can do this by opening Settings, clicking on Data Controls, and turning off the toggle labeled "Improve the model for everyone."
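Note that this toggle governs the consumer ChatGPT apps. If you reach the same models programmatically, OpenAI's published data-usage policy states, at the time of writing, that content sent through the API is not used for training by default. A minimal sketch with the official openai Python library (the model name is just an illustrative choice):

```python
# pip install openai
from openai import OpenAI

# Reads the OPENAI_API_KEY environment variable for authentication.
client = OpenAI()

# Per OpenAI's stated policy, API traffic is excluded from model
# training by default (unlike the consumer ChatGPT apps).
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Summarize the GDPR in one sentence."}
    ],
)
print(response.choices[0].message.content)
```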
Deactivating Memory
A feature recently introduced by OpenAI, and now copied by xAI and others, is "memory." When enabled, the AI chatbot will "remember" past conversations to make future interactions more personalized.
On ChatGPT specifically, memory works through "Saved Memories" and "Chat history." Given the privacy risks of allowing an automated system to store large amounts of personal data (including the potential for data leakage or adversarial techniques), I recommend deactivating the memory feature as well.
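Settings only go so far, though; what you type matters most. As a behavioral safeguard, you can scrub obvious identifiers from a prompt before pasting it into any chatbot. A minimal, hypothetical Python sketch (real PII detection requires far more than two regular expressions, which is exactly why cautious behavior still matters):

```python
import re

# Illustrative patterns only: they catch obvious emails and phone numbers,
# not names, addresses, or context-dependent details.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(redact("Contact me at jane.doe@example.com or +1 555 123 4567."))
# -> "Contact me at [email removed] or [phone removed]."
```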
Privacy Tricks
Even if you have turned off model training, memory, and other privacy-invasive features, privacy-aware behavior while using AI remains essential, but most people fail at it. Why?