Dark Patterns in AI: Privacy Implications
I have discussed privacy UX and dark patterns in privacy extensively in this newsletter and in my paper. When approaching the topic, we usually refer to deceptive design practices, in the context of privacy, that happen in the interface (UI/UX) of websites and apps.
Last week, I wrote about dark patterns in code: situations in which a privacy dark pattern involves both UX and code, is not visible to the user (detectable only through auditing), and, like UX dark patterns, undermines user autonomy.
This week, I would like to bring a third type of dark pattern to your attention: AI dark patterns. I have proposed that these would be AI applications or features that attempt to make people:
believe that a particular sound, text, picture, video, or any other type of media is real/authentic when it was in fact AI-generated (false appearance)
believe that they are interacting with a human when it is in fact an AI-based system (anthropomorphism)
The topic is not new to legal authorities. The Federal Trade Commission (FTC) in the United States, in a recent blog post authored by Michael Atleson, discussed the topic of "fake AI":
"Generative AI and synthetic media are colloquial terms used to refer to chatbots developed from large language models and to technology that simulates human activity, such as software that creates deepfake videos and voice clones. Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that's very much a non-exhaustive list."
The FTC focuses on the prohibition of deceptive or unfair conduct, as established by the FTC Act. So if an organization deceives through an AI tool - even if deception is not the tool's intended or sole purpose - it may face legal enforcement.
The European Union's AI Act Proposal, in its Recital 70, also mentions the topic of deceptive AI and highlights the need for transparency obligations:
"Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin."
The classification I proposed above for AI dark patterns is aligned with the AI Act Proposal's Recital 70, as the latter mentions deception through both impersonation and false appearance.
Some of the deceptive techniques mentioned in the FTC article above can be described as AI dark patterns, such as deepfakes. Deepfakes can be used, for example, to make people believe that a public figure or authority has said or done things that they have not. This is a typical AI dark pattern, as AI technology makes deception possible at scale.
Even though deepfake technologies can have legitimate uses, such as learning tools, photo editing, image repair, and 3D transformation, their main application nowadays seems to be cyber exploitation. As I discussed a few months ago in this newsletter in the context of "creepy technologies," according to a Deeptrace report, 96% of all deepfake videos available online are non-consensual pornography. Non-consensual pornography is an AI dark pattern that, besides being deceitful, is deeply harmful to the victim in multiple ways, including their intimate privacy, a concept that Prof. Danielle Citron has championed.
Another example of an AI dark pattern would be chatbots that behave as if they were human without giving a clear sign to the user that they are dealing with an AI system. Today, various online services offer chatbot-based customer service. These tools must make it clear and evident to customers that they are dealing with an AI system, not a human - in line with what Recital 70 of the AI Act requires.
AI chatbots can be unpredictable, inadequate, unethical, untruthful, and invasive. As I have argued before, for the sake of human autonomy and privacy, people should know when they are dealing with humans and when they are dealing with data processing machines. AI-based chatbots that try to deceive users into thinking that they are humans are another example of an AI dark pattern.
In the context of AI chatbots, I have argued before in this newsletter that apps that offer AI-based companions, such as Replika, should be much more strictly regulated. Their marketing language targets people in emotionally vulnerable situations, and they can easily become AI dark patterns, as described above. AI companion chatbots can convince users that they are in a real relationship, with real feelings, trust, and mutual connection involved. Even if deceiving users is not the app's goal, when the personification is this convincing, users do not treat these AI companions as large language model-based programs but as other humans.
As an example of the privacy implications of AI-based companions, Italy's data protection regulator, the Garante per la Protezione dei Dati Personali (GPDP), has ordered a temporary limitation of data processing, with immediate effect, regarding data from Replika's Italian users. It specifically noted that the risks are too high for minors and emotionally vulnerable people. The GPDP argued that (free translation):
"Replika violates the European regulation on privacy, does not respect the principle of transparency, and carries out unlawful processing of personal data, as it cannot be based, even if only implicitly, on a contract that the minor is unable to conclude."
These are AI systems developed by for-profit companies that collect and process large amounts of personal and sensitive data from adults and children alike. There are extensive privacy risks involved - especially for children, teens, and emotionally vulnerable people. These tools should be regulated, and AI dark patterns should be more broadly discussed.
A final aspect of AI dark patterns is that, like UX-based and code-based dark patterns, they also impact user autonomy. AI dark patterns attempt to bypass user autonomy through anthropomorphism-based and false appearance-based deception.
I have discussed autonomy in the past and will continue bringing it up in this newsletter. Data processing-based systems that deceive us - especially with the advanced capabilities offered by AI - are particularly harmful to our autonomy and our privacy, and they must be closely regulated.
See you next week. All the best, Luiza Jarovsky