Hi, Luiza Jarovsky here. Welcome to our 181st edition, read by 56,300+ subscribers worldwide. This is a paid subscriber-only edition featuring my weekly critical analysis of emerging challenges in AI governance. It's great to have you on board!
👉 The paid editions are a must-read for anyone who wants to upskill, stay ahead, and understand what's going on in AI from legal and ethical perspectives.
For more: AI Book Club | Live Talks | Job Board | Learning Center | Training Program
☁️ How the AI Hype Clouds Privacy Risks
Many have been calling out the massive AI hype we've seen in the last few years, including Gary Marcus (our talk here), Alex Hanna and Emily M. Bender (our talk here), and Yann LeCun.
AI hype can take many forms, including exaggerated declarations by tech executives, biased reports, product announcements that overstate the technology’s capabilities, and misleading statements that, depending on the context, could be unlawful and result in fines.
The hype has become so extreme that some predictions are actually laughable.
AI hype can also take the form of press releases and announcements that strategically use wording, language, visuals, and effects to make users believe that a particular AI-powered technology or feature is:
“superhuman”
flawless
risk-free
In today's edition, I discuss how this kind of exaggerated marketing strategy, widely used in the field of AI, can obscure privacy risks and undermine years of progress in privacy and data protection.
*
Two days ago, OpenAI announced that Booking(.com) is integrating its data systems with OpenAI’s LLMs to “personalize travel at scale.”
The announcement, including a video and an article detailing the partnership, is a clear example of how AI hype can negatively impact privacy: it distorts people's understanding of how the technology works, obscures the privacy risks involved, and dulls people's privacy concerns.
Before I continue, I invite you to critically watch the 2:40-minute video below, in which OpenAI and Booking announce their partnership:
Most people who watch this video and read more about the partnership will stop at the stage of amazement.
They will likely appreciate how far AI has come and be interested in testing Booking's AI tools.
However, essential privacy-related information is buried beneath the hyped marketing, and most people will overlook it. Some examples:
Business Model
The video begins with the statement, “travel is hard,” seemingly referring to unforeseeable events that might disrupt trips and cause difficulties.
The focus then quickly shifts to some of Booking's offerings, which include over 29 million listings across accommodation, flights, transportation, and things to do. The chief technology officer refers to this as “the connected trip.”
Booking's business model relies heavily on data analytics, personal data, and understanding consumer journeys. The more data they collect, the longer people stay on their platform, and the more features they use, the greater their profit.
This is what “the connected trip” is about and why they are partnering with OpenAI in the first place: Booking wants more personal data. People should keep that in mind throughout the video and look critically at any feature that might increase privacy risks.
Data Collection
At 0:33, the senior director of product marketplace states that when they introduced the AI trip planner, the most exciting moment was opening a free text box (LLM-powered) and “hav[ing] that firehose of intent come at us.”
Behind this metaphorical statement is the idea that,