👋 Hi, Luiza Jarovsky here. Welcome to the 81st edition of this newsletter! Thank you to 80,000+ followers on various platforms and to the paid subscribers who support my work. To read more about me, find me on social, or drop me a line: visit my personal page. For speaking engagements, fill out this form.
✍️ This newsletter is fully written by a human (me), and illustrations are AI-generated. I hope you enjoy reading as much as I enjoy writing it!
A special thanks to Osano, this edition’s sponsor:
For both privacy experts and novices, developing a privacy program can feel like taking a shot in the dark. Whether you're looking to grow an existing privacy program or establish one from scratch, the Osano Privacy Program Maturity Model provides you with key elements to prioritize, critical gaps to plug, and must-have capabilities to develop. Download your copy.
📣 Support this newsletter: are you a privacy-focused organization? Become a monthly sponsor and reach thousands of privacy & tech decision-makers throughout the year. Get in touch.
🪩 Amazon joins the AI party
Amazon has barely launched its new AI chatbot, “Amazon Q,” and employees are already allegedly sounding the alarm on problematic issues, including those involving privacy.
If you have been reading this newsletter over the last few months, you know that privacy compliance (or the lack thereof) is a major issue in the context of generative AI, and many challenges remain unsolved. One of them is AI “hallucinations” (or AI misinformation), which can lead to reputational harm to individuals and damage to businesses.
A few weeks ago, Microsoft blocked internal access to ChatGPT due to security concerns, and various organizations around the world forbid internal ChatGPT use for the same reason. It soon became clear to OpenAI (ChatGPT's developer) and other major AI companies that whoever wants to conquer the enterprise AI market must focus on privacy and security.
So Amazon decided to join the AI party, perhaps slightly late, and it must prove that it can deliver - beyond other capabilities - enough privacy and security to satisfy business needs.
According to Amazon, Amazon Q is an AI-powered assistant designed for work purposes and tailored to businesses’ needs.
A side comment here: the current AI wave has entered a new phase, as Amazon joins the market without calling its product a “chatbot” or highlighting the chat feature; instead, it characterizes Q as an assistant and focuses on capabilities that can support developers and IT professionals. (This reminds me of Bill Gates’ recent article on the future of AI assistants, which I thoroughly commented on two weeks ago.)
Amazon markets its AI assistant as safer and more private than its competitors. According to Amazon Q's preview website, it “is built with security and privacy in mind to help customers meet their most stringent enterprise needs.”
However, only three days after its launch, according to leaked documents obtained by Platformer, employees alleged that “Q is experiencing severe hallucinations and leaking confidential data, including the location of AWS data centers, internal discount programs, and unreleased features.”
Responding to these allegations, Amazon told Business Insider that “it had not identified any security issues related to Q and denied that Q had leaked confidential information.”
I would like to make a few comments on that:
first, it's great that employees are paying attention to privacy and security issues and being vocal about them, helping companies align with user interests;
second, there is still an open discussion about whether a generative AI-based system can ever have a zero rate of hallucinations. It looks like a percentage of errors is inevitable, and some models will have more errors than others;
third, hallucinations and misinformation are an integral part of some AI systems. If a single error is enough to cause reputational harm or data protection issues, what error rate, if any, is low enough for an AI system to respect privacy by design? Should we rethink privacy by design in the context of AI, especially generative AI? Can tech companies advertise that their generative AI products respect privacy?
For now, the AI hype remains high, and companies are betting on bold marketing strategies to compete in the AI race.
The new equivalent of “we value your privacy” in privacy policies is “our AI is built with privacy and security in mind” (regardless of whether it's true). As users, consumers, and advocates, we must remain watchful.
📢 New AI paper on machine unlearning
The paper "Supporting Trustworthy AI Through Machine Unlearning," published a few days ago by Emmie Hine, Claudio Novelli, Mariarosaria Taddeo, and Luciano Floridi, brings relevant discussions in the context of AI ethics and privacy.
It proposes machine unlearning as a way to support trustworthy AI and privacy compliance. Some important quotes below:
"Machine unlearning (MU) is not, as its name may suggest, the inverse of machine learning (ML), although they are related. In ML, an algorithm trains a model to perform a task using some data. MU does not involve “forgetting” a task, but how specific data contribute to a model. In other words, it seeks to “undo” the influence of some data on an ML model." (page 1)
-
"MU is valuable also on a collective level, an aspect often overlooked in literature. Regarding the right to be forgotten, unlearning specific datapoints can support group privacy, which could be especially important for marginalized groups and in contexts where group profiling is increasingly common. However, when discussing the limits of MU, we shall see it is crucial to ensure that the process does not compromise the accuracy of models or increase bias, as unlearning can affect classification model accuracy and hence negatively impact the groups it is intended to benefit." (page 5)
-
"MU is a novel subfield of ML that holds great promise as a technical measure to support trustworthy AI. We have argued that unlearning datapoints, features, labels, and classes can help AI applications uphold the OECD’s principles of trustworthy AI and ensure that AI is more sustainable, inclusive, transparent, robust, and accountable. However, it is important to stress that MU cannot compensate for misuses of ML and the lack, or poor quality, of training data." (page 9)
-
Data subjects' rights, such as the right to be forgotten, the right of access, and the right to rectification, pose challenges in the context of machine learning & AI, as I've discussed earlier in this newsletter.
This is a very important topic, and the paper makes an interesting contribution. I am curious to see more studies on the empirical implementation of machine unlearning, as well as practical assessments of its effectiveness.
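To make the paper's definition more concrete, below is a toy sketch of the naive "exact unlearning" baseline: removing specific datapoints' influence by retraining the model without them. This is my own illustration, not code from the paper; the model, library (scikit-learn), and data are all hypothetical, and MU research focuses precisely on approximating this result without the cost of full retraining.

```python
# Toy illustration of "exact" machine unlearning: the influence of
# specific datapoints is removed by retraining from scratch without them.
# Hypothetical example (not from the paper); assumes scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # synthetic training features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)   # original model

# A data subject exercises the right to be forgotten: these indices
# mark the datapoints to "unlearn".
forget_idx = np.array([3, 42, 317])
keep_idx = np.setdiff1d(np.arange(len(X)), forget_idx)

# Exact unlearning baseline: retrain on the remaining data only.
# The new model contains no influence of the forgotten datapoints.
unlearned_model = LogisticRegression().fit(X[keep_idx], y[keep_idx])
```

Full retraining guarantees erasure but scales poorly, which is why the MU literature explores cheaper techniques for unlearning datapoints, features, labels, and classes while preserving model accuracy.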
📱 Video on social media polarization and online hate
Last week, the science YouTube channel Kurzgesagt – In a Nutshell uploaded an interesting 11-minute animation about social media polarization, online hate, and some of the possible explanations raised by recent research. I found it very relevant and timely, and I think you will enjoy it too (the video amassed 4.4 million views in 5 days). You can watch the video below:
This is an excellent science-focused YouTube channel: informative, entertaining, and aesthetically pleasing - especially for those who enjoy animations. Some of their videos about space and astrophysics blew my mind; make sure to check out their channel.
💛 Enjoying the newsletter? Share it with friends and help me spread the word. Let's reimagine technology together.
📎 Job opportunities
Are you looking for a job in privacy? Transitioning to AI governance? There are hundreds of open positions available worldwide. Check out our global privacy job board and AI job board. Good luck!
🍪 National Cookie Day
December 4 was National Cookie Day, and I want to be able to Reject All Cookies with the same ease with which I can accept them (take a look at the consent banner above: which dark patterns can you spot?).
Below are some useful resources on dark patterns in privacy:
the FTC report Bringing Dark Patterns to Light
the EDPB Cookie Banner taskforce report
Colin Gray, Cristiana Santos, Nataliia Bielova, and Damian Clifford's paper "Dark Patterns and the Legal Requirements of Consent Banners: An Interaction Criticism Perspective"
Johanna Gunawan, David Choffnes, Woodrow Hartzog, and Christo Wilson's paper "Towards an Understanding of Dark Pattern Privacy Harms"
Cristiana Santos and Mark Leiser's paper "Dark Patterns, Enforcement, and the Emerging Digital Design Acquis: Manipulation beneath the Interface"
my paper "Dark Patterns in Personal Data Collection: Definition, Taxonomy, and Lawfulness"
Harry Brignull's website
my own newsletter. There are various articles about the topic in the archive; check it out
my 90-minute privacy training on Dark Patterns and Privacy User Experience, which helps privacy professionals, designers, and product managers understand dark patterns and improve privacy UX practices.
🎓 Privacy & AI training programs
More than 620 professionals have attended our training programs. Each program is 90 minutes long and led live by me; participants receive additional reading material, 1.5 pre-approved IAPP credits, and a certificate. The last training of the year happens tomorrow (December 6), and registration is also open for mid-January. Read more and save your spot.
📚 AI Book Club
250+ people have registered for our AI Book Club.
In the 1st meeting on December 14, we'll discuss "Atlas of AI," by Kate Crawford;
In the 2nd meeting on January 18, we'll discuss "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma" by Mustafa Suleyman & Michael Bhaskar.
There will be book commentators, and the goal is to have a critical discussion on AI-related challenges, narratives, and perspectives.
Have you read these books? Would you like to read and discuss them? Join the AI book club.
🖥️ Privacy & AI in-depth
Last week, I spoke with Prof. Ryan Calo about dark patterns, online manipulation, AI chatbots, anthropomorphism, and more. Watch (or listen to) our 1-hour conversation and stay up to date with some of the hottest topics in privacy & AI.
Every month, I host a live conversation with a global expert - I've spoken with Max Schrems, Dr. Ann Cavoukian, Prof. Daniel Solove, and various others. Access the recordings on my YouTube channel or podcast.
💳 Meta's "subscription for no ads" and the future of privacy
One of the most discussed topics in privacy circles has been Meta's new “subscription for no ads” model. This week, I would like to dissect what is at play here and how I think it affects the future of privacy.