The practice of manipulation is not new; it is perhaps as old as humanity itself. The tools and techniques used to manipulate humans have also evolved, and in recent years, especially with the advancement of AI, they have reached a worrying new high.
Susser, Roessler & Nissenbaum wrote that to manipulate someone means "intentionally and covertly influencing their decision-making by targeting and exploiting their decision-making vulnerabilities."
This is a great definition, and I have adopted it in my research on dark patterns in privacy, which I have defined as:
"user interface design practices that manipulate the data subject’s decision-making process in a way detrimental to his or her privacy and beneficial to the service provider"
Dark patterns are a fertile field for studying manipulation, as they are manipulative techniques applied through user experience (UX) design.
Manipulation, however, can happen anywhere: for example, between two people in a cafe, when one uses language, facial expressions, tone of voice, posture, and so on to make the other act in a certain way, according to the manipulator's wishes.
When we talk about UX dark patterns, there is an additional layer of complexity that makes the manipulation more difficult to detect: the UX interface.
Behind the UX interface of an app or website, especially at large companies, specialized data science, UX, product, and marketing professionals measure users' behavior, emotions, and typical actions to decide which color, font, format, size, flow, and so on will elicit a certain behavior from the user. The UX layer thus serves as the vehicle through which dark patterns are perpetrated.
Regardless of the layer or the means through which it happens, the essence of manipulation is the exploitation of cognitive biases.
To illustrate the variety of ways we can be manipulated, see this cognitive bias codex:
Cognitive biases were extensively studied by cognitive psychologists such as Kahneman and Tversky, who observed that "people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors."
Cognitive biases are inherently human. They can help us navigate choices and complexity in day-to-day life; however, they also make us prone to errors and manipulation.
Moving from UX to AI manipulation, a few weeks ago, in this newsletter, I defined dark patterns in AI as AI applications or features that attempt to make people:
believe that a particular sound, text, picture, video, or any sort of media is real/authentic when it was AI-generated (false appearance)
believe that a human is interacting with them when it's an AI-based system (impersonation)
These types of manipulative practices have suddenly inundated the internet. Continuing the idea of layers of complexity, AI-based manipulation is even harder to detect: techniques applied through the algorithmic layer, such as the two cited above, affect the human brain more invasively than those applied through the UX layer.
The typical example of the first type of dark pattern in AI (false appearance) is deepfakes.
Deepfakes use technology to imitate reality so well that it is extremely hard to distinguish fake from real.
Deepfakes can easily manipulate us, and we have all been experiencing this in recent months with the explosion of AI-based services such as Midjourney and DALL-E. The internet is now inundated with images and videos whose authenticity people cannot determine (remember the pope in the puffy coat?).
Nowadays, when I open Twitter, I frequently ask myself whether the content I am seeing is real or an AI-generated fake. Deepfakes are everywhere.
Scammers are using AI-based voice cloning to extort people, pretending to be family members in distress. A report has shown that 83% of Indian victims of this type of scam lost money.
Even though deepfake technologies can have legitimate uses, such as learning tools, photo editing, image repair, and 3D transformation, their main application nowadays seems to be cyber exploitation.
As I discussed a few months ago in this newsletter in the context of "creepy technologies," according to a Deeptrace report, 96% of all deepfake videos available online are non-consensual pornography. Non-consensual pornography is an AI dark pattern that, besides being deceitful, is deeply harmful to the victim in many ways, including to his or her intimate privacy, a concept that Prof. Danielle Citron has championed.
The typical example of the second type of dark pattern in AI (impersonation) is AI-based companions.
Today, various online services offer chatbot-based customer service. These tools must make it clear, transparent, and evident to customers that they are dealing with an AI system, not a human, similar to what Recital 70 of the AI Act sets out.
In the context of AI chatbots, I have argued before that apps offering AI-based companions, such as Replika, should be much more strictly regulated. Their marketing language targets people in emotionally vulnerable situations, and they can easily become manipulative.
AI companion chatbots can convince users that they are in a real relationship, with real feelings, trust, and mutual connection involved. Even when the goal of these apps is not to deceive users, if the personification is convincing enough, users do not treat these AI companions as large language model-based programs; they interact with them as they would with other humans.
As an example of the privacy implications of AI-based companions, Italy's data protection authority, the Garante per la Protezione dei Dati Personali (GPDP), has ordered a temporary limitation, with immediate effect, on the processing of data belonging to Replika's Italian users. It specifically mentioned that the risk is too high for minors and emotionally vulnerable people. The GPDP argued that (free translation):
"Replika violates the European regulation on privacy, does not respect the principle of transparency, and carries out unlawful processing of personal data, as it cannot be based, even if only implicitly, on a contract that the minor is unable to conclude."
The FTC has also strongly warned against AI-based manipulation. In an FTC blog post from May 1st, Michael Atleson, an attorney in the FTC's Division of Advertising Practices, wrote:
"A tendency to trust the output of these tools also comes in part from 'automation bias,' whereby people may be unduly trusting of answers from machines which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they’re conversing with something that understands them and is on their side."
This is an excellent blog post in which the FTC makes clear that they are watching how companies use AI-based technologies and that these uses should not be manipulative, unfair, or harmful in any way.
AI-based manipulation worries me immensely, as it can be harmful to privacy, autonomy, and democracies. It can hurt both the social fabric and what it means to be human. Is it possible to effectively tackle it? Can we ever be free again?
💡 Advance your career: join the waitlist for my new courses on Privacy & AI and Privacy UX and get a 20% discount when they launch.
--
🎤 Upcoming event (435+ people confirmed)
In the 4th edition of our Women Advancing Privacy series, I will discuss with Dr. Gabriela Zanfir-Fortuna topics around AI Regulation and how the EU and the US are dealing with the recent challenges brought by AI-based technology.
Dr. Zanfir-Fortuna is VP for global privacy at the Future of Privacy Forum and a global leader in privacy and data protection. This is THE talk you need to join about AI regulation and what to expect for the next few months. So far, 435+ people have already confirmed - sign up here and prepare your questions.
You can also watch the recording of our previous events on my YouTube channel - the most recent talks were with Dr. Ann Cavoukian on Privacy by Design and with Prof. Nita Farahany on Brain Privacy.
--
📌 Privacy & data protection jobs
We gathered various links from job search platforms and privacy-related organizations on our Privacy Careers page. We are constantly adding new links, so bookmark it and check for new openings. Wishing you the best of luck!
--
📢 Privacy solutions for businesses (sponsored)
You know how individuals have a credit score? Well, now businesses have a privacy score with Ubiscore. Just like a credit score, the Ubiscore privacy rating measures how well an organization protects privacy. The platform lets you easily check the privacy rating of any business or vendor you interact with, giving you peace of mind and full insights into data practices around the world.
In today's digital age, privacy is becoming increasingly important. With privacy regulations such as GDPR, CPRA, CCPA, and LGPD, it's critical to ensure that companies take privacy seriously when handling users' information. With Ubiscore, you can hold organizations accountable and ensure never-before-seen due diligence in protecting sensitive data. So, why leave privacy to chance? Take control and check your business' and your competitor’s privacy score now!
--
✅ Before you go:
Advance your career: join the waitlist for my new courses on Privacy & AI and Privacy UX and get a 20% discount when they launch.
Spread privacy awareness: there are 65,000+ people following us on various platforms. Forward this newsletter to your friends and help us reach 100,000.
Check out our free privacy content on our archive, podcast, Twitter, LinkedIn & YouTube.
See you next week. All the best, Luiza Jarovsky