When Privacy by Design is Forgotten
Recently, two episodes made me question whether tech companies still take privacy by design seriously, and whether Article 25 of the GDPR means anything to them at all.
As a reminder, privacy by design, the framework developed by Dr. Ann Cavoukian, has seven foundational principles:
1. Proactive, not Reactive
2. Privacy as the Default Setting
3. Privacy Embedded into Design
4. Full Functionality — Positive-Sum, not Zero-Sum
5. End-to-End Security — Full Lifecycle Protection
6. Visibility and Transparency
7. Respect for User Privacy — Keep it User-Centric
Last month, Dr. Cavoukian was a guest at my 'Women Advancing Privacy' live event; you can listen to the recording of our conversation about privacy by design in the age of AI.
The GDPR has officially adopted a similar concept: data protection by design and by default. According to Article 25 of the GDPR:
"Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.
"The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons."
Having said that, recent events have made me question whether privacy by design actually means anything in practice.
The first episode was OpenAI's overall data protection strategy for ChatGPT.
In this newsletter, I have discussed extensively the possible privacy issues involved in AI-based chatbots. From the infringement of data protection principles to reputational harm, negligence with data subjects' rights, dark patterns in AI, and disregard for fairness, it is clear that those AI chatbots now broadly available to the public need a solid risk assessment, privacy assurances and privacy by design.
Nevertheless, OpenAI has not been explicit or specific about how it will deal with the issues that I and various other privacy professionals have been raising in the last few months. There have been no meaningful public announcements or clarifications, and recently there was a privacy incident in which people's chat histories were exposed to other users. After this incident, the most obvious change I noticed was a warning displayed before someone could access ChatGPT.
It is good that they took one (small) step towards more transparency. But only that, and only now? Do they have nothing else to tell people, or to embed into the design of their product, so that we can have better privacy assurances and transparency? As I questioned on social media: is the new privacy paradigm privacy by pressure? Privacy by reaction?
Perhaps as a result of this lack of proactivity, data protection authorities have recently started acting. The Italian Data Protection Authority ("Garante per la Protezione dei Dati Personali") imposed an immediate temporary limitation on the processing of Italian users' data by OpenAI. It looks like more authorities - at least in Europe - will follow suit.
Despite these expected enforcement-related advances, I am still in disbelief about why companies like OpenAI do not take privacy by design more seriously.
The second recent episode - indeed very recent, as I first heard about it yesterday - was Meta's new publicly available "opt-out" created to allow people, in theory, to opt out of data processing for behavioral advertising.
If you are a privacy professional, you are probably familiar with the recent episodes in the context of the Max Schrems/noyb vs Meta privacy litigation. It was recently decided that Meta could not use 'contract' as a lawful basis to collect and process user data, so they seem to be now relying on 'legitimate interest' (although noyb is legally questioning that).
To consolidate the legitimate interest strategy, they now offer an opt-out form so that those interested can request not to have their data processed in the context of behavioral advertising. The problem is that the opt-out form is hidden and difficult to understand. According to noyb:
"Instead of providing a simple "switch" or button to opt-out, Meta requires users to fill out a hidden form. In this form users have to argue why they want to perform an opt-out. Users have to identify each purpose for which Meta argues a 'legitimate interest' and then explain why Meta's assessment - which is not public - was wrong in their individual case. It is highly unlikely that any normal user would be able to argue these points effectively."
Aren't transparency & fairness data protection principles? Shouldn't it be easy for users to opt out of targeted advertising? Does privacy by design matter at all?
Consumers say privacy is important and that they are concerned about how companies use data about them. Privacy by design, including privacy UX and a privacy-enhancing user experience, is a great tool for implementing a privacy compliance plan. It is unclear to me why some tech companies do not yet think so. Are they waiting for more fines?
💡 Interested in diving deeper into Privacy by Design and Privacy UX? The course Privacy UX: The Fundamentals is coming soon; sign up for the waitlist and get 20% off when it launches. Visit our site to check out our professional privacy courses.
🎤 Upcoming events
In the next edition of 'Women Advancing Privacy', I will discuss with Prof. Nita Farahany her new book "The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology," as well as issues related to the protection of cognitive liberty and privacy in the context of current AI and Neurotechnology challenges.
Prof. Farahany is a leader and pioneer in the field of ethics of neuroscience. This will be a fascinating conversation that you cannot miss. I invite you to sign up for our LinkedIn live session and bring your questions.
To watch our previous events (the latest one was with Dr. Ann Cavoukian on Privacy by Design), check out my YouTube channel.
In the latest episode of The Privacy Whisperer Podcast, I spoke with Romain Gauthier, the CEO of Didomi, about:
His journey as an entrepreneur and the specific challenges of privacy tech
The evolution of the privacy industry and how individuals and privacy vendors have adapted to new regulations and challenges
His view on current trends and the future of privacy
Tips for small businesses that want to do privacy right
This was a fascinating conversation. If you work in the tech industry, are a privacy professional, or are an entrepreneur, you cannot miss it. Listen now.
📌 Privacy & data protection job search
We have gathered various links from job search platforms and privacy-related organizations on our Privacy Careers page. We are constantly adding new links, so bookmark it and check it once a week for new openings. Wishing you the best of luck!
✅ Before you go:
If you enjoy this newsletter, invite your friends to subscribe to The Privacy Whisperer.
Do you want more in-depth content? At Implement Privacy, I offer professional privacy courses on emerging privacy topics; check them out.
See you next week. All the best, Luiza Jarovsky