💡 AI and Privacy Have More Intersections Than You Think
With the extreme hype and popularity of AI-based tools such as ChatGPT, over the last few months I have written various articles in this newsletter dealing with privacy issues in AI.
I spoke about reputational harm, the lack of contextual integrity, the lack of compliance with basic data protection principles, additional challenges when there are vulnerable populations involved, AI governance issues, and my proposed classification of dark patterns in AI. (If you would like to learn more about the topic, join my next course on Privacy & AI).
This week, I would like to comment on another AI-based tool that has been in use for more than a decade and does not receive as much attention as it should - despite being deeply harmful to privacy: AI-based recommendation algorithms in the context of social networks.
When applied in the context of e-commerce, the proponents of AI-based recommendation systems argue that these recommendations do what sellers and consultants do in the offline world: they help the user understand what they want, navigate the (online) store, compare features, shortlist and filter the most relevant products based on what others with a similar profile have chosen, and buy the product that fits the user's needs.
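To make the "what others with a similar profile have chosen" logic concrete, here is a minimal sketch, in Python, of user-based collaborative filtering; the data, names, and similarity rule are hypothetical illustrations, not any retailer's actual system.

```python
# A minimal, hypothetical sketch of "recommend what similar shoppers chose"
# (user-based collaborative filtering). The data and similarity rule are
# illustrative only.
purchases = {
    "alice": {"hiking boots", "tent", "water filter"},
    "bob":   {"hiking boots", "tent", "camping stove"},
    "carol": {"lipstick", "perfume"},
}

def recommend(user: str) -> set[str]:
    """Suggest items bought by the most similar shopper (Jaccard similarity) that `user` lacks."""
    mine = purchases[user]

    def similarity(other: str) -> float:
        theirs = purchases[other]
        return len(mine & theirs) / len(mine | theirs)

    neighbors = sorted((u for u in purchases if u != user), key=similarity, reverse=True)
    return purchases[neighbors[0]] - mine if neighbors else set()

print(recommend("alice"))  # {'camping stove'} - bob's basket is the most similar to alice's
```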
Despite additional privacy concerns such as excessive tracking, lack of transparency, lack of consent, dark patterns to collect consent, and so on, I agree, to a certain extent, that in the context of e-commerce there can be a legitimate use for AI-based recommendation systems, especially because:
purpose: the goal of the recommendation system is to suggest products in a specific UI/UX frame (lower manipulative potential);
interactions: there are no synchronous interactions with other users (which would potentially make the user spend hours on the website);
social validation: there are few and less invasive social validation mechanisms (e.g., user reviews), and the user is not incentivized to behave in a certain way to get immediate social validation;
goal achievement: after the user completes a purchase, there is a sense of “goal achieved,” and most users will move on to other daily activities (instead of staying in an environment of “endless scroll”).
Even in the context of e-commerce, my personal approach is that there should be built-in mechanisms to support users who seem to be vulnerable to specific AI-based recommendation systems. For example, e-commerce platforms should, by default, allow users to set a maximum daily budget, set a maximum daily usage time, block certain categories of products that might trigger compulsive habits, and so on.
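As an illustration of what such built-in, user-controlled guardrails could look like, here is a minimal Python sketch; the class, limits, and checks are hypothetical and only meant to show that these protections are technically straightforward.

```python
from dataclasses import dataclass, field

@dataclass
class UserGuardrails:
    """Hypothetical user-defined limits an e-commerce platform could enforce by default."""
    max_daily_budget: float = 200.0        # currency units the user is willing to spend per day
    max_daily_minutes: int = 60            # time the user is willing to spend per day
    blocked_categories: set = field(default_factory=set)  # e.g., categories tied to compulsive habits

def allow_recommendation(product_category: str,
                         spent_today: float, price: float,
                         minutes_today: int,
                         limits: UserGuardrails) -> bool:
    """Return True only if showing this item respects the user's own limits."""
    if product_category in limits.blocked_categories:
        return False
    if spent_today + price > limits.max_daily_budget:
        return False
    if minutes_today > limits.max_daily_minutes:
        return False
    return True

# Example: a user who blocked "gambling" and set a tight daily budget
limits = UserGuardrails(max_daily_budget=50.0, blocked_categories={"gambling"})
print(allow_recommendation("books", spent_today=30.0, price=15.0,
                           minutes_today=20, limits=limits))   # True
print(allow_recommendation("gambling", spent_today=0.0, price=5.0,
                           minutes_today=5, limits=limits))    # False
```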
What I constantly try to express, both in this newsletter and in my academic research, is that technology should, first and foremost, support users' capabilities. Whenever a deviation from the norm - in the sense of harm to oneself or to others - is detected while the user is interacting with the technology, there should be built-in mechanisms to support the affected users. If a technology does not provide these supportive built-in features, it should be liable for the harm.
Now let's move back to AI-based recommendation algorithms in the context of social networks and the privacy harms nobody talks about.
In social networks such as Facebook, Twitter, LinkedIn, Instagram, TikTok, YouTube, and so on, the AI-based recommendation system typically aims at recommending “who to follow” and “what content to see next.” These are tools with a much higher potential for harm:
purpose: the goal of the recommendation system is to suggest content that will capture the user's attention and make them spend more time online (as these social media platforms rely on ad-based business models). The type of content, or how it will affect a particular user, does not matter; what matters is that the user remains on the social network for as much time as possible;
optimization: certain types of content catch the user's attention much more efficiently, such as tragic, surprising, shocking, offensive, or polarizing texts, images, and videos. Due to the strong social validation mechanisms that support recommendation algorithms (see below), users are incentivized to post more of this type of content. The AI system will consistently optimize for content that is more shocking, more offensive, more polarizing, and so on, building an environment that can quickly become harmful for the people interacting with it.
social validation: AI-based recommendation systems are supported by strong and addictive social validation mechanisms. Content is ranked by the amount of “likes, comments, and shares” it receives (a simplified ranking sketch follows this list). A piece of content will be shown to more people the more social validation it receives, and people will be incentivized to post more content that is tragic, surprising, shocking, offensive, polarizing, and so on to capture other users’ attention and social validation. Interruptive and invasive notifications keep users hooked and anxious for their next dopamine hit from likes, comments, and shares.
interactions: there can potentially be multiple, continuous, synchronous interactions with other users from anywhere in the world. The user can spend the whole day on the social network.
goal achievement: due to the now ubiquitous “endless scroll” feature, there is never a sense of “task accomplished.” Users have to establish, by themselves, the amount of time they will spend on the social network, and due to the well-known restraint bias, they will frequently spend an unhealthy - and potentially harmful - amount of time mindlessly scrolling.
addiction by design: the recommendation systems in social networks are designed to be addictive through intermittent reinforcement, similar to the way casinos work. Viral and highly enticing content (in the sense of shocking, surprising, polarizing, etc.) is shown interspersed with more “uninteresting” or “boring” content (with lower social validation), creating a constant expectation of reward similar to the one that characterizes, for example, gaming addiction.
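To illustrate the social-validation-driven ranking described in the list above, here is a minimal, hypothetical Python sketch in which content is scored purely on engagement signals and predicted time spent, with no term for user well-being; the weights and field names are my own assumptions, not any platform's actual formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    predicted_seconds_watched: float  # the model's estimate of how long this user will linger

def engagement_score(post: Post) -> float:
    """Hypothetical engagement-only ranking: nothing here measures harm or user well-being."""
    social_validation = post.likes + 3 * post.comments + 5 * post.shares  # shares weighted highest
    return 0.7 * social_validation + 0.3 * post.predicted_seconds_watched

def rank_feed(posts: list[Post]) -> list[Post]:
    """The most 'engaging' content is shown first, regardless of how it affects the user."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm nature video", likes=120, comments=4, shares=2, predicted_seconds_watched=20),
    Post("outrage-bait thread", likes=90, comments=300, shares=150, predicted_seconds_watched=95),
])
print([p.text for p in feed])  # the polarizing post comes first
```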
The features above cause various types of harm. The first category is societal harm, and some examples are:
increase in polarization and social division on everyday topics, as well as filter bubbles, sometimes leading to hate and violence;
increase in hate speech and hate-related crimes - as people have their opinions radicalized online, a percentage of them will attempt to act on those opinions;
increase in disinformation - in order to “go viral” or get the desired social validation, people produce their own “shocking” or “tragic” content, sometimes false, and sometimes exaggerating or extrapolating from existing arguments.
A second category of harm we can extract from the features described above is privacy harm, which has been largely ignored by lawmakers and regulators. Examples are:
no control over time online: multiple continuous synchronous interactions with other users, anxiety over the social validation of the content the user has posted, constant notifications, and the expectation of highly optimized “viral” content being shown on the “recommended for you” page in an endless scroll. These extreme optimization methods to grab the user's attention and make them lose track of time online go against the basic ideas of autonomy and human dignity, central tenets of privacy. Moreover, from a more strictly data protection-related perspective, they go against transparency, data minimization, and fairness. There is no warning about the powerful AI-based algorithms being used and how they can harm human beings - so there is no transparency. The user is manipulated to spend more time online and share more of their preferences, desires, and intimate thoughts with the social network - so there is no data minimization. Lastly, given the enormous asymmetries between social networks and users, and the lack of usable tools to help users modulate these features in a way that is less harmful, there is no fairness.
no control over the content that is being shown: as I said above, AI-based recommendation algorithms can “learn” what precise type of content will grab the attention of each specific user and keep showing content that will catch that user's attention, based on millions of algorithmic A/B tests occurring in real time around the world (a simplified sketch of this optimization loop follows below). In mainstream social networks, such as the ones I cited above, there is no express choice or a moment when the platform asks the user what type of content they want to see. In addition to the lack of choice, the recommendation system will inevitably steer users in a certain direction, possibly radicalized, polarized, or hateful, according to the algorithm tuning in each social network. The user does not see these mechanisms working, as there are thousands of tech professionals behind them, making sure they are powerful and subtle and keep the user engaged, even if it is a harmful type of engagement. So users are again in a helpless situation, without autonomy, choice, or transparency. Unknowingly, users have their personal data heavily harvested to teach and feed the recommendation system; the recommendation system, in turn, manipulates this same user's opinions, emotions, and feelings according to each social network's algorithmic programming. There are no warnings or transparency about how the recommendation mechanisms are working in real time and how they can negatively affect users; there are also no effective control mechanisms to avoid personal harm to users.
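Below is a deliberately simplified epsilon-greedy sketch in Python of the kind of real-time optimization loop described above: the system “experiments” on the user, records only time spent as feedback, and converges on whatever content category keeps that user hooked. The categories, numbers, and update rule are hypothetical assumptions for illustration, not any platform's actual algorithm.

```python
import random

# Hypothetical content categories the system can "test" on a user
categories = ["neutral news", "hobby content", "outrage/polarizing", "tragic/shocking"]
engagement_totals = {c: 0.0 for c in categories}   # accumulated watch time per category
impressions = {c: 0 for c in categories}           # how many times each category was shown

def pick_category(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit whatever has hooked this user, occasionally explore."""
    if random.random() < epsilon or all(n == 0 for n in impressions.values()):
        return random.choice(categories)
    return max(categories, key=lambda c: engagement_totals[c] / max(impressions[c], 1))

def record_engagement(category: str, seconds_watched: float) -> None:
    """The only feedback signal is time spent - there is no signal for harm or well-being."""
    engagement_totals[category] += seconds_watched
    impressions[category] += 1

# Simulated user who (like many) lingers longer on polarizing content
for _ in range(1000):
    shown = pick_category()
    watched = random.gauss(60, 10) if shown == "outrage/polarizing" else random.gauss(20, 5)
    record_engagement(shown, max(watched, 0))

print(max(categories, key=lambda c: impressions[c]))  # almost always "outrage/polarizing"
```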
The idea of a broader notion of privacy harms, including those affecting autonomy, emotions, relationships, and so on, was advocated by Profs. Citron and Solove in their paper “Privacy Harms,” which I recommend everyone read.
If we want to support a human dignity-based notion of privacy, as Prof. Luciano Floridi has argued, we need to focus on human capabilities and regulate or prohibit situations in which technology and data-intensive tools make people helpless and submissive to the wishes of the companies behind these tools.
AI tools that collect user data, “learn” the user's behavior, and, based on these inferences, become extremely persuasive and manipulative are a threat to privacy. Autonomy - the ability to choose how to behave online and offline - as well as transparency and fairness, should be much more closely protected.
AI-based recommendation systems in social networks are unregulated and absolutely neglect user privacy. One of the reasons for that - as I have shared on various occasions in this newsletter - is that privacy laws around the world still disregard autonomy harm as a central type of privacy harm in the digital age.
This must change.
All the best, Luiza Jarovsky