Can AI Ever Be Privacy Compliant?
We are currently experiencing an AI moment: every task is suddenly expected to be "AI-improved," and every academic article or business talk has to mention ChatG... ok, I will not mention it again. I have seen numerous posts showcasing practical applications of AI and envisioning how AI is on an accelerated path to change our lives forever, as well as others that are openly unimpressed.
Microsoft announced a multi-billion dollar investment in OpenAI, and it looks like it is planning to incorporate AI into Word, Outlook, PowerPoint, and other apps. Google does not want to seem late to the party and has made various public declarations about its own AI progress. Apple, Amazon, Meta, and others seem to be quietly plotting their own big AI-based launches. Entrepreneurs everywhere are attempting to build the next AI-based unicorn.
Meanwhile, outside of the hype and enthusiasm surrounding AI, legislation is advancing. The European Union is working on its AI Act, as I discussed recently in this newsletter. It also issued the European Declaration on Digital Rights and Principles for the Digital Decade, which contains an interesting chapter on "Interactions with algorithms and artificial intelligence systems."
The US White House Office of Science and Technology Policy released a “Blueprint for an AI Bill of Rights”; US states are also working on AI legislation.
There is a lot of movement around AI, so what does it mean, in practice, for privacy professionals?
First, there are important potential intersections between AI and privacy. In terms of compliance, there will be occasions (frequent ones, I would guess) in which both the GDPR and the AI Act apply, leading to interesting challenges.
Second, it is still unclear how AI governance will work in practice, especially in the corporate context. According to this insightful IAPP report by Katharina Koerner and Jake Frazier:
"AI and privacy have a key overlap: while required in several areas of law and grounded in Responsible AI principles, explainability, fairness, security, and accountability are also requirements in privacy regulations. (...) It is important for privacy professionals to understand how responsible AI as a governance approach is applied in practice, how it intersects with privacy governance, and how it can learn from privacy"
Are companies going to implement responsible AI together with privacy, and will similar principles and methods apply? Which professionals will be in charge of AI governance? How will AI accountability work at an international level?
Third, and this is the point that concerns me the most: the way current AI systems are being built and applied - including when they rely on personal data - does not seem to be aligned with privacy principles.
Among the GDPR principles are lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability. The GDPR also foresees the "right not to be subject to a decision based solely on automated processing."
Let's begin with transparency: AI systems are not transparent. Machine learning algorithms affect our daily lives in various ways, yet we have no information about the criteria they use or about how a specific algorithm has profiled and targeted us, even when they rely on our personal data. We are still far from having transparency and control over how AI systems affect us (and modify our behavior) on a daily basis.
Fairness: when AI systems are biased and discriminate against certain groups, they are not promoting fairness. Yet there is no institutional or regulatory "tool" to help individuals fight biased systems. What usually happens is that harm occurs, it receives media attention, and the developers of the AI system then make changes in response to the public outcry. There is no ex-ante system in place.
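To make the idea of an ex-ante check more concrete, here is a minimal, hypothetical sketch in plain Python of a pre-deployment audit that compares a model's approval rates across groups, loosely following the "four-fifths" disparate-impact rule of thumb. The groups, numbers, and threshold are made up for illustration; this is a sketch of what such a tool could measure, not a legal compliance test.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the share of positive decisions per group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    1 for an approval and 0 for a denial.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approvals[group] += outcome
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_check(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times
    the highest group's rate (the "four-fifths" rule of thumb)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# Hypothetical pre-deployment audit on simulated model outputs.
sample = [("group_a", 1)] * 80 + [("group_a", 0)] * 20 \
       + [("group_b", 1)] * 55 + [("group_b", 0)] * 45

rates, flagged = disparate_impact_check(sample)
print("approval rates:", rates)
print("flagged groups:", flagged or "none")
```

In this toy example, group_b's approval rate (55%) falls below four-fifths of group_a's (80%), so the audit would flag the model for review before deployment, rather than after harm has already occurred.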
The above also brings us to accountability. Who is responsible when an AI system is biased, unfair, or causes privacy harm? How should organizations be held accountable for the algorithms they use?
The "right not to be subject to a decision based solely on automated processing." There are micro-decisions based on automated processing happening all the time. Every day that goes by, it gets worse. How are we really going to enforce this right to protect individuals, especially vulnerable groups?
And that is why I ask whether AI will ever be privacy compliant. The current "AI hype" seems absorbed with AI's ability to outperform or replace humans, not with making AI privacy compliant or well integrated into a human world with human principles and human rights.
💡 I would love to hear your opinion. I always share the newsletter article on Twitter and LinkedIn; you are welcome to join the discussion there or send me a private message.
-
📚 Book Recommendation
"The Alignment Problem: Machine Learning and Human Values" by Brian Christian. It brings a much-needed critical approach to AI. Super well-researched and accessible:
-
📄 Paper Recommendation
"The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability" by Mehtab Khan & Alex Hanna. Essential paper for privacy experts who want to understand the mechanisms behind AI. Download it (free) here.
-
📌 Privacy & Data Protection Careers
We have gathered relevant links from major job search platforms, along with additional privacy job resources, on this Privacy Careers page. We suggest you bookmark it and check it periodically for new openings. Wishing you the best of luck!
*If you want to add privacy jobs and resources, get in touch.
-
📅 Upcoming Privacy Events
IAPP Global Privacy Summit - April 4th-5th, 2023 - in Washington, DC, United States
Computers, Privacy and Data Protection (CPDP) - May 24th-26th, 2023 - in Brussels, Belgium
Privacy Law Scholars Conference (PLSC) - June 1st-2nd, 2023 - in Boulder, CO, United States
Annual Privacy Forum - June 1st-2nd, 2023 - in Lyon, France
*To submit additional privacy and data protection events, get in touch.
-
📢 Privacy Solutions for Businesses (sponsored)
In today's newsletter, the featured privacy partner is Simple Analytics, an EU-based analytics service. Click on the image below to learn more about their privacy features. To get your first month for free, use my referral link.
*To get your product or service featured at The Privacy Whisperer, get in touch.
-
✅ Before you go:
Did someone forward this article to you? Subscribe and receive this weekly newsletter directly in your inbox.
For more privacy-related content, check out our Podcast and my Twitter, LinkedIn & YouTube accounts.
At Implement Privacy, I offer specialized privacy courses to help you advance your career. I invite you to check them out and get in touch if you have any questions.
See you next week. All the best, Luiza Jarovsky