AI Hallucinations & Privacy: A Reputational Harm Nightmare
Continuing the previous discussions on the privacy consequences of AI applications, today I will talk about how conversational AI tools such as ChatGPT can affect privacy by causing reputational harm.
Reputational harms, in the privacy context, according to Profs. Solove & Citron:
"impair a person’s ability to maintain 'personal esteem in the eyes of others' and can taint a person’s image. They can result in lost business, employment, or social rejection."
If a journalist publishes malicious lies about you in a newspaper, there will be reputational harm; if a chatbot, prompted to answer "who is Luiza Jarovsky," makes up information about me, there can also be reputational harm, depending on the content the chatbot outputs.
Isn't everybody saying that these AI-based chatbots are great productivity tools that will transform the way we work and live? Can they "invent" information?
Yes, they can.
"AI hallucination" is the term used for cases in which an AI tool gives an answer that humans know to be false. According to an article by Satyen Bordoloi,
"AI hallucinations occur in various forms and can be visual, auditory or other sensory experiences and be caused by a variety of factors like errors in the data used to train the system or wrong classification and labelling of the data, errors in its programming, inadequate training or the systems inability to correctly interpret the information it is receiving or the output it is being asked to give."
When it comes to AI-based chatbots like ChatGPT, these hallucinations happen every day to millions of users.
In an interview with Datanami, Peter Relan, co-founder of Got It AI, a company that develops AI solutions, said: "the hallucination rate for ChatGPT is 15% to 20% (...) so 80% of the time, it does well, and 20% of the time, it makes up stuff."
One of Relan's company's AI products is a "truth-checker," a tool trained to detect when ChatGPT (or other large language models) is hallucinating. He said that his truth-checker is 90% accurate. Doing the math: with a 20% hallucination rate and a checker that catches 90% of those hallucinations, roughly 2% of what AI-based chatbots say will be hallucinations that slip past this truth-checker undetected.
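For readers who want to see that arithmetic spelled out, here is a minimal back-of-the-envelope sketch. It assumes the figures quoted by Relan (a 20% hallucination rate and a 90% detection rate); the variable names are mine and purely illustrative.

```python
# Back-of-the-envelope estimate of undetected hallucinations,
# using the rates quoted by Relan (estimates, not measurements).

hallucination_rate = 0.20   # share of chatbot answers that are made up (Relan's estimate)
detector_accuracy = 0.90    # share of hallucinations the "truth-checker" catches (Relan's estimate)

# Hallucinations that the checker misses, as a share of ALL answers
undetected = hallucination_rate * (1 - detector_accuracy)

print(f"Undetected hallucinations: {undetected:.0%} of all answers")  # -> 2%
```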
According to Relan, OpenAI (the company behind ChatGPT) and other AI developers can make efforts to reduce the hallucination rate, but "the hallucination problem will never fully go away with conversational AI systems."
If we are talking about privacy and reputational harm, the hallucinations that matter are those that affect individuals.
In her article for MIT Technology Review, Melissa Heikkilä mentions the case involving BlenderBot (Meta's chatbot demo for research purposes) and Maria Renske “Marietje” Schaake. Schaake is a Dutch politician, a former member of the European Parliament, and now the international policy director at Stanford University’s Cyber Policy Center and an international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. BlenderBot said that she was a terrorist, directly accusing her without being prompted to do so. According to Heikkilä, the probable origin of this hallucination was an op-ed Schaake wrote for The Washington Post in which the words “terrorism” or “terror” appeared three times.
Another example is what happened to Diogo Cortiz, a cognitive scientist and futurist. He recounts that he asked ChatGPT to name the books written by his Ph.D. supervisor, Prof. Lúcia Santaella, a semiotician well known in her field. The chatbot answered with a list of five titles. He found that strange, as he knew she had written more than 40 books. After checking, he realized that none of the books listed existed. All of the titles contained words common to the professor's field (she could plausibly have written books with those names), but they were hallucinations.
Others have been posting examples of hallucinations online, and if the hallucination rate is indeed 15-20%, they are happening all the time, millions of them daily.
Talking specifically about personal data and reputational harm: AI chatbots are outputting lies and distortions about the lives of people, especially journalists, public speakers, and anyone with a strong online presence. These people have no idea in what contexts this false information is being output, or how to delete or correct it. The right to erasure and the right to rectification are core data protection rights. How are these rights being applied in the context of AI chatbots?
When we use a search engine to look for a person online, we have third-party sources to choose from, linked by the search engine. We can be critical, select the sources we consider legitimate, and compare them. If people start using conversational AI systems for everything, including fact-finding, the chatbot offers only one answer, presented as "the truth," as if we were worshiping an oracle.
In this context, AI chatbots' potential for reputational harm is immense. If the 15-20% rate holds, nearly one-fifth of the answers, stories, bios, and facts about people generated in response to prompts will be hallucinations. What I know is that there will be plenty of work for lawyers.
💡 I would love to hear your opinion. I will share this article on Twitter and on LinkedIn, you are welcome to join the discussion there or send me a private message.
-
🎓 Our specialized privacy courses
April cohort: Privacy-Enhancing Design: The Anti-Dark Patterns Framework (4 weeks, 1 session per week). Register now using the coupon TPW-10-OFF and get 10% off.
May cohort: Privacy & AI: Regulation, Challenges, and Perspectives. Join the waitlist.
June cohort: Privacy-Aware Parenting. Join the waitlist.
To learn more, visit: implementprivacy.com/courses
-
📅 Our upcoming privacy event: Women Advancing Privacy
870+ participants confirmed! February 23, global & remote on LinkedIn Live. Register here.
-
🔁 Trending on social media
Privacy & AI Intersections. See the full thread here.
-
📢 Privacy solutions for businesses (sponsored)
Discover Yes We Trust, the privacy hub to stay up to date on industry news, gain insight from experts, and connect with other privacy-minded professionals. Align your privacy strategy with your company's business goals by attending our webinars and in-person events and leveraging our blog and private LinkedIn group.
Watch the replay of the first Yes We Trust webinar! We gathered several data privacy experts, including Luiza Jarovsky and Stéphane Hamel, to discuss the state of data privacy in 2023. They shared feedback and thoughts around Privacy & UX, Analytics & Compliance and the GDPR as a global standard. This webinar is a must if you’re looking for the main privacy topics to watch for in 2023. Watch it on-demand here!
-
📌 Privacy & data protection careers
We have gathered relevant links from large job search platforms, along with additional privacy job information, on this Privacy Careers page. We suggest you bookmark it and check it periodically for new openings. Wishing you the best of luck!
-
✅ Before you go:
Did someone forward this article to you? Subscribe to receive this weekly newsletter in your email.
For more privacy-related content, check out our Podcast and my Twitter, LinkedIn & YouTube accounts.
At Implement Privacy, I offer specialized privacy courses to help you advance your career. I invite you to check them out and get in touch if you have any questions.
See you next week. All the best, Luiza Jarovsky