👋 Hi, Luiza Jarovsky here. Welcome to the 77th edition of this newsletter, and thank you to 83,000+ followers on various platforms. Read about my work, invite me to speak, tell me what you've been working on, or just say hi here.
⏰ I've moved the newsletter to Tuesdays at 6am ET so that we can start the day motivated and with fresh ideas. Set your alarm!
✍️ This newsletter is fully written by a human (me), and illustrations are AI-generated. It's a pleasure for me to write it, and your feedback is welcome.
A special thanks to Conformally, this week's newsletter sponsor:
Expand your privacy knowledge and get things done faster with the Privacy Navigator. Quickly find regulations, guidelines, articles, templates, and much more. It's open to all and completely free. Check it out.
🏜️ The AI Wild West is here
Elon Musk has just launched Grok, xAI’s first AI system. Here are five things you didn't know about it:
1. According to its privacy policy, if you have a Twitter/X account and log in to Grok, whatever you post is used to train it (Article 2, second bullet). I didn't find any information about opt-out mechanisms;
2. It seems to have weaker guardrails. For example, it outputs information on how to produce illegal drugs "for educational purposes" (if you are curious, there is a screenshot on Elon Musk's X account);
3. Earlier this year, Musk said: "I'm going to start something which I call 'Truth GPT' or a maximum truth-seeking AI that tries to understand the nature of the universe." It looks like the truth-seeking aspect of his first AI system amounts to weaker guardrails, which might make it extremely unsafe for kids and vulnerable populations, in addition to making it prone to bias and harmful outputs;
4. The privacy policy says that users in certain regions have some rights, for example, the right to delete:
"You have a right to ask us to delete any personal information which we are holding about you in certain specific circumstances."
I am curious how, in practice, they will delete personal data that has already been used to train the AI system;
5. In April this year, Musk urged an AI pause of at least six months, citing "risks to society." It seems ironic that, six months later, his company is launching a new AI system.
The AI Wild West is officially here.
⚖️ Consumer law could rescue the AI debate
Public discussions on technical aspects, capabilities, and applications have dominated the AI regulation debate, which has been led by engineers, AI ethics professionals, and tech CEOs. This debate has also led us to the Wild West we are currently experiencing.
There is, however, a century-old field with strong principles, foundations, and rules that could be of much use but so far has been neglected: consumer protection law.
Consumer law varies by country and region, but its central purpose is broadly the same everywhere: to correct the information and power asymmetries between companies and consumers and to keep consumers protected.
As a lawyer, I've always been deeply interested in consumer law. It's a versatile field that, in order to protect consumers, must adapt to different products, services, and contexts that will give rise to various levels of risks and harms.
To all AI enthusiasts: I'm sorry to say it, but from the point of view of consumer law, AI systems are just another type of product or service, and consumer law's goal will be to keep the consumer safe and empowered. No sci-fi, no fireworks.
Bringing consumer law to the center of the AI regulation debate would make the latter much more realistic and pragmatic and would bring attention to issues such as:
How can people interact with the AI system, how can harm emerge, and what responsibilities and legal liabilities should the developing company have?
Is it a dangerous AI application? How have non-AI systems with a similar function been regulated, and what mandatory safeguards and warnings apply in those cases? Should we keep or modify those requirements?
Do people know how to interact with such an AI system? Could it be confused with other existing systems? What mandatory safeguards should AI developers be obligated to implement when offering their products, especially to avoid confusion and help people navigate the new challenges?
How is the AI system designed, marketed, and sold? Do people understand what is going on and how the system impacts them? What safeguards should be implemented to communicate clearly with consumers, and what additional measures do consumer law principles and rules require to support them? Is additional oversight needed, and could it be integrated with an existing product or industry oversight system?
How could AI applications be brought under the existing consumer protection framework so that consumers are better protected?
and so on.
We are regulating AI to make it better and safer for individuals and societies, and consumer law, consumer agencies, and consumer lawyers have been focused on exactly that for decades. The AI regulatory framework should be built upon and integrated with existing systems and structures.
I know tech CEOs need to keep the PR machine moving and are generating buzz around AI regulation (see my thread about the topic). However, we don't need to reinvent the wheel: there are products and services, there are people, and people must be protected.
📌 Job opportunities
Looking for a job in privacy? Check out our privacy job board and sign up for the biweekly alert.
🎓 New Masterclasses coming up
Is it an avatar or the real me? Watch the video and let me know.
The future of technology is an open project; let's build it together with privacy, transparency, and fairness. Check out our current Masterclasses (new topics and cohorts coming up in 2024). To bring a Masterclass or Workshop to your company, get in touch.
🏛 President Biden's Executive Order on AI
President Biden issued an Executive Order on safe, secure, and trustworthy AI. Below is some important information and my comments on its privacy shortcomings:
The focus of the Executive Order is to direct the following actions:
a) New Standards for AI Safety and Security
b) Protecting Americans' Privacy
c) Advancing Equity and Civil Rights
d) Standing Up for Consumers, Patients, and Students
e) Supporting Workers
f) Promoting Innovation and Competition
g) Advancing American Leadership Abroad
h) Ensuring Responsible and Effective Government Use of AI
According to the accompanying fact sheet, the goal of this Executive Order is to "ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI)."
As countries realize AI's economic potential and its potential for harm, they are attempting to conquer their space in (and win) the global AI leadership arena.
The EU started earlier with the AI Act, which contains stricter rules and prescriptions on what can and cannot be done with AI (it is not yet in force). Now the US, with its decentralized approach, is advancing its own rule-making path.
On privacy specifically, the fact sheet points out that:
"Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems."
After reading the stated privacy-related priorities, I must say that they are broad and focus on privacy-preserving technologies that should be applied when personal data is being processed. The fact sheet mentions, for example:
"Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development."
However, I see no mention of consent, legitimacy, fairness, or any aspect that would improve people's control and choice over how their personal data is used to train commercially available AI systems.
In this context, it's interesting to note that there is also no mention of measures against copyright infringement or of protections for authors and artists whose work is unlawfully used to train AI systems.
There is still a lot to work on.
📖 Join our AI Book Club
We are now reading “Atlas of AI” by Kate Crawford, and the next AI book club meeting will be on December 14, with six book commentators. To participate, register here.
🖥️ Privacy & AI in-depth
On November 28, I will talk with Prof. Ryan Calo about Humans, Robots, and Vulnerability in the Age of AI. To join the session, register here. Every month, I host a live conversation with a global expert - I've spoken with Max Schrems, Dr. Ann Cavoukian, Prof. Daniel Solove, and various others. Access the recordings on my YouTube channel or podcast.
⛔ Algorithmic feeds must be regulated
Unpopular opinion: it's time to regulate algorithmic feeds and optimization for engagement, and to prioritize chronological feeds on social media. Why?
misinformation, disinformation, and AI-based deepfakes are proliferating, and they cause real-world harm;
algorithmic feeds are black boxes and social networks' "secret sauces." They are social media's content curation layer, and platforms have essentially no accountability for the harm they cause;
algorithmic feeds help push lies and generate ad revenue from fast-spreading fake content;
chronological feeds make it easier to comply with regulations such as the Digital Services Act (DSA), especially their transparency requirements (the sketch after this list illustrates the difference).
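To make the contrast concrete, here is a minimal sketch of the two ranking rules. Everything here is hypothetical (the class, the scores, the function names); real platforms use far more complex, proprietary models, which is precisely the transparency problem:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float         # Unix time when the post was published
    engagement_score: float  # platform-internal prediction (the opaque part)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Transparent rule: newest first. Anyone can verify the ordering."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts: list[Post]) -> list[Post]:
    """Opaque rule: ranked by a proprietary engagement prediction,
    which tends to reward sensational, fast-spreading content."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

posts = [
    Post("friend", timestamp=1_700_000_300, engagement_score=0.2),
    Post("stranger_viral", timestamp=1_700_000_100, engagement_score=0.9),
]

print([p.author for p in chronological_feed(posts)])  # ['friend', 'stranger_viral']
print([p.author for p in engagement_feed(posts)])     # ['stranger_viral', 'friend']
```

The regulatory point is the auditability gap: the first rule can be verified from the outside, while the second depends entirely on a model that only the platform can inspect.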
Specifically regarding optimization for engagement, it usually brings downsides such as:
using UX dark patterns to keep people glued to the screen and hooked on social validation (likes, comments, shares);
using algorithmic feeds to push viral content from accounts people don't follow, which is sometimes sensationalist and harmful;
creating a sense of urgency and competition (for engagement) among users;
taking a “soft approach” to disinformation, misinformation, and harmful content (so that more content can spread faster);
and so on.
It's a SUPER important topic, especially right now amid the ongoing "info wars." With new technologies making lies even more credible, we must build systems that effectively help people distinguish what is true from what is not.
I would like to organize a live session to discuss this further, including regulatory possibilities. Would you like to join? Would you like to recommend someone to join me in the live discussion?
Thank you for reading! Wishing you a great week.
All the best,