👋 Hi, Luiza Jarovsky here. Read about my work, invite me to speak, tell me what you've been working on, or just say hi here.
🔥 How AI influences surveillance capitalism
I created the infographic above based on two theoretical models:
- Prof. Shoshana Zuboff's model of surveillance capitalism (detailed in her book "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power");
- Prof Laurence Lessig's four modalities of regulation (detailed in his book "Code and Other Laws of Cyberspace");
Prof. Zuboff defined surveillance capitalism as “the unilateral claiming of private human experience as free raw material for translation into behavioral data. These data are then computed and packaged as prediction products and sold into behavioral futures markets — business customers with a commercial interest in knowing what we will do now, soon, and later.”
And according to Prof. Lessig: “four constraints regulate this pathetic dot – the law, social norms, the market, and architecture - and the ‘regulation’ of this dot is the sum of these four constraints. Changes in any one will affect the regulation of the whole. Some constraints will support others; some may undermine others. Thus, ‘changes in technology [may] usher in changes in . . . norms,’ and the other way around. A complete view, therefore, must consider these four modalities together.” (p. 123)
In the infographic above, I mapped the concept of surveillance capitalism according to Lessig's four modalities: market, architecture, social norms, and laws. This is what Lessig said about my model on Twitter.
If you read carefully, you will see that today, all four modalities are heavily influenced by AI systems and AI-related practices. AI became an essential pillar in sustaining surveillance capitalism as we know it.
Perhaps new laws and regulatory frameworks (such as the AI Act in the EU and heavier FTC enforcement in the US) will change the forces as they appear in the picture below. But not yet.
🔥 The FTC investigates OpenAI
The Washington Post disclosed a letter from the FTC to OpenAI, as part of an ongoing investigation of the company.
The subject of the investigation is to understand if OpenAI has engaged in:
a) unfair or deceptive privacy or data security practices; or
b) unfair or deceptive privacy practices relating to risk or harm to consumers, including reputational harm;
the FTC also wants to understand if obtaining monetary relief would be in the public interest.
As part of this investigation, the FTC is asking dozens of questions to OpenAI, including:
- detailed inquiries about model development and training;
- how they obtained the data;
- all sources of data, including third parties that provided datasets;
- how the company assesses and addresses risks;
- privacy and prompt injections risks and mitigations;
- monitoring, collection, use, and retention of personal information.
The FTC is also requesting various documents (see pages 17-20).
As a result of this investigation, we will have a better picture of the connection between AI-related practices and the FTC's perspective on unfair and deceptive practices, as well as the connection between privacy and AI issues from an FTC regulatory point of view.
According to Marc Rotenberg, founder of the Center for AI and Digital Policy (CAIDP), they filed the initial complaint in March and spent the last months advocating for the investigation. He added that “the United States lags behind other countries in AI policy. In March, CAIDP President Merve Hickok told Congress ‘the US lacks necessary guardrails for AI products.’ The FTC investigation of OpenAI is now the best opportunity to put these safeguards in place.”
Sam Altman, OpenAI's CEO, wrote about the investigation on Twitter: “it is very disappointing to see the FTC's request start with a leak and does not help build trust. that said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. of course we will work with the FTC.”
On the topic, I have been discussing OpenAI and its privacy practices in previous articles, such as this: OpenAI's Unacceptable 'Privacy by Pressure' Approach.
The unfolding of this investigation will be extremely interesting to both privacy & AI professionals and will probably have regulatory repercussions in other parts of the world.
Privacy & AI is also the topic of my upcoming masterclass. If you want to dive deeper into risks, challenges, and regulation, register here.
🔥 Movie recommendation: Coded Bias
Coded bias, a documentary featuring Dr. Joy Buolamwini and other experts, is available on Netflix - super recommended, watch it here. To have an overview of Dr. Buolamwini's research, watch the video about her project Gender Shades, and check out her non-profit Algorithmic Justice League, which leads the movement for equitable and accountable AI.
🔥 Transparency, usability & privacy policies. Case study: Grammarly
This week, I discuss transparency obligations, privacy policies, some of Grammarly's privacy practices, and what companies are doing wrong: