👋 Hi, Luiza Jarovsky here. Welcome to the 88th edition of this newsletter, and thanks to 82,000+ followers on various platforms.
🔥 In every edition, I share my independent critical analyses on privacy, tech & AI (you will not find anything similar online or offline) and resources that help you upskill and advance your career.
🤖 Paid subscribers support my work (thank you!), get exclusive access to the AI Briefing - the most important AI topics of the week - and receive discounts. If you want to get serious about AI, consider a paid subscription.
✍️ This newsletter is fully written by a human (me), and I use AI to create the illustrations. To read about my work and contact me: visit my personal page.
A special thanks to Usercentrics, this edition's sponsor:
Is your website or app serving ads to European users via AdSense, AdMob, or Google Ad Manager? Your monetization may be at risk. Starting in February, Google requires a certified Consent Management Platform (CMP) that integrates with IAB TCF v2.2. Usercentrics CMP for Website and Apps has you covered. Comply with Google publisher requirements before February to protect your ad revenue & maintain user trust. Get started now.
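For publishers wondering what "integrates with IAB TCF v2.2" looks like in practice, here is a minimal, illustrative sketch (not an official Usercentrics or Google snippet): it listens for the standard IAB __tcfapi event exposed by whichever certified CMP is installed and only triggers ad loading once a consent signal is available. The loadAds function is a hypothetical placeholder for your own ad-request logic.

```typescript
// Illustrative sketch only: gate ad requests on a TCF v2.2 consent signal.
// Assumes a certified CMP has installed the standard IAB __tcfapi stub on the page;
// loadAds() is a hypothetical placeholder for your AdSense / Ad Manager calls.
type TCData = {
  eventStatus?: string; // 'tcloaded' | 'cmpuishown' | 'useractioncomplete'
  gdprApplies?: boolean;
  purpose?: { consents?: Record<number, boolean> };
};

declare global {
  interface Window {
    __tcfapi?: (
      command: string,
      version: number,
      callback: (tcData: TCData, success: boolean) => void
    ) => void;
  }
}

function loadAds(): void {
  // Placeholder: request ads here (e.g., push to your ad tag queue).
}

window.__tcfapi?.('addEventListener', 2, (tcData, success) => {
  if (!success || !tcData.eventStatus) return;

  // Act once a stored consent string has loaded or the user has made a choice.
  if (tcData.eventStatus === 'tcloaded' || tcData.eventStatus === 'useractioncomplete') {
    // TCF Purpose 1: "Store and/or access information on a device".
    const purpose1Consent = tcData.purpose?.consents?.[1] === true;
    if (tcData.gdprApplies === false || purpose1Consent) {
      loadAds();
    }
  }
});

export {}; // keep this file a module so the global declaration applies
```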
💡 To become a newsletter sponsor and reach thousands of privacy & tech decision-makers: get in touch.
🚨 The Data Protection Manifesto
The field of data protection has grown immensely in recent years from various perspectives:
➞ The number of data protection laws around the world has increased drastically;
➞ The number of privacy & data protection professionals, as well as the market demand for these professionals, has been continuously growing, despite layoffs in other fields;
➞ Data protection-related topics have been covered by the media on a daily basis;
➞ There have been major fines for big tech, although, as the recent Noyb report shows, we can question whether enforcement has been effective.
Despite these advancements, as someone who has been researching the field since 2016, I sometimes find it frustrating that there is still an excessive focus on the letter of the law rather than its spirit.
Trend after trend, hype after hype, many companies' culture seems to be to exploit data, and the people behind the data, for as long as doing so is not too expensive.
Two examples:
➵ Social media: Hugely popular social media platforms have existed at least since 2004, and today, 20 years later, we still see bad design practices, the exploitation of children (see the recent €345 million TikTok fine), disinformation, deepfakes, and other issues involving bad data practices.
➵ AI: Major AI companies today are not garage-born startups but mature organizations founded by experienced entrepreneurs and fueled by extensive investment rounds. Nevertheless, privacy & data practices are not among their priorities; examples include OpenAI's privacy-by-pressure approach and the numerous copyright lawsuits against AI companies.
Why do I think this happens? There are many factors, but an important one is that data protection is just another item in these companies' yearly budgets.
High-level decision-makers focus on numbers, aggregates, and metrics but not so much on the people behind the data and the society we are fostering (or harming) with these types of practices.
💡 With that in mind, today I would like to introduce the Data Protection Manifesto, a reminder that when we talk about data protection, beyond compliance and the strict letter of the law, there are other key concepts we should be focusing on:
➞ People
➞ Personal data
➞ Technology
➞ Harm
➞ Hype
➞ Technological life cycle
➞ Prevention
➞ Interdisciplinarity
➞ Innovation
➞ Principles
➞ Fundamental rights
➞ Community
💡 Derived from these main concepts are 12 basic ideas that every tech professional should be familiar with:
1- Data does not exist without people
2- Depending on the context, anything can be personal data and impact people
3- Technology is not neutral: it's made by people
4- Technology is often fed by personal data, and it can harm people
5- New technologies will come and go, and we must see beyond the hype
6- All phases of technological deployment must respect the law
7- Data harms can be unexpected: prevention and planning are essential
8- Interdisciplinary teams are key: technical, legal, and compliance teams complement each other
9- Innovation and privacy can flourish together
10- Small compromises might lead to big compromises: every principle matters
11- Fundamental rights matter, online and offline
12- Data protection is made by people, for people, and we are stronger together
-
🚨 Here is a visual representation of the manifesto, ready to share with your network.
Data & technology today are an integral part of human society and greatly influence how each of us shapes our identity.
In the context of rethinking data protection, I see AI as an important catalyst: AI might be the buzzword we need to call attention to what is not working.
I will continue discussing and developing this topic in the coming weeks - stay tuned.
🛑 AI & cyber threat: alarming assessment
The UK's National Cyber Security Centre (NCSC) has just published a report on the near-term impact of AI on the cyber threat, and it is alarming.
This authoritative report is an extremely important assessment of what to expect in terms of upcoming AI-led changes to the cyber threat. On pages 2 and 3 of the report, you can find their key judgments:
➳ "AI will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years. However, the impact on the cyber threat will be uneven;
➳ The threat to 2025 comes from evolution and enhancement of existing tactics, techniques and procedures (TTPs).
➳ All types of cyber threat actor – state and non-state, skilled and less skilled – are already using AI, to varying degrees.
➳ AI provides capability uplift in reconnaissance and social engineering, almost certainly making both more effective, efficient, and harder to detect.
➳ More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realised before 2025.
➳ AI will almost certainly make cyber attacks against the UK more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models.
➳ AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.
➳ Moving towards 2025 and beyond, commoditisation of AI-enabled capability in criminal and commercial markets will almost certainly make improved capability available to cyber crime and state actors."
-
People sometimes forget that AI goes far beyond large language models (LLMs) and generative AI, and that malicious actors probably already have cheap and quick access to AI-based tools to commit cybercrime.
When thinking about AI regulation and AI policies, make sure to consider these types of threats too, as they might demand a different set of legal tools and coded guardrails.
➡️ Privacy & AI talk
I would welcome the opportunity to give a talk at your company and support your privacy & AI training efforts. My areas of expertise include (1) dark patterns in privacy, (2) privacy UX, (3) privacy-enhancing design, (4) legal issues in AI, (5) privacy & AI, (6) dark patterns in AI, and (7) AI-based manipulation. Get in touch.
📚 AI Book Club: Unmasking AI
Interested in AI? Love reading and would like to read more? Our AI Book Club is for you! There are already 630+ people registered. Here's how it works:
We are currently reading “Unmasking AI,” written by Dr. Joy Buolamwini, and we'll meet on March 14 at 2pm ET to discuss it. Five book commentators will share their perspectives, and everybody is welcome to join the 1-hour discussion.
💡 Interested? Register here, invite friends, and start reading!
💛 Enjoying the newsletter? Refer 2 friends and earn a complimentary paid subscription:
➡️ The current state of the AI wave
AI was a central topic at this year's World Economic Forum in Davos. This is what some of the top AI executives said - and my thoughts on the current state of the AI wave:
➵ “It will change the world much less than we all think, and it will change jobs much less than we all think,” Sam Altman - CEO, OpenAI
➵ “The biggest lesson learned is we have to take the unintended consequences of any new technology along with all the benefits and think about them simultaneously – as opposed to waiting for the unintended consequences to show up and then address them,” Satya Nadella - CEO, Microsoft
➵ "AGI is a super vaguely defined term. If we just term it as ‘better than humans at pretty much whatever humans can do,’ I agree, it’s going to be pretty soon that we can get systems that do that" Aidan Gomez - CEO, Cohere
➵ “We’re already seeing areas where AI has the ability to unlock our understanding ... where humans haven’t been able to make that type of progress. So it’s AI in partnership with the human, or as a tool,” Lila Ibrahim - COO, Google DeepMind
➵ “We have to also turn to those regulators and say, ’Hey, if you look at social media over the last decade, it’s been kind of a f---ing s--- show. It’s pretty bad. We don’t want that in our AI industry. We want to have a good healthy partnership with these moderators and with these regulators.” Marc Benioff - CEO, Salesforce
➵ “This year, we’ll see a ‘ChatGPT’ moment for embodied AI humanoid robots right, this year 2024, and then 2025,” Jack Hidary - CEO, SandboxAQ
-
From the quotes above, other reports from the event, and daily AI-related news, we see that there is still a lot of speculation about AI's capabilities, what it can achieve, what it means in practice, and how 'revolutionary' it actually is.
But we shouldn't let this uncertainty, or perhaps 'hype,' take away the importance of the economic, normative, and social impact that AI is already causing. Some examples are:
→ Billions invested in creating new AI-based tools and services, as well as in adding AI-based functionality to existing tools;
→ Tech layoffs justified by efficiency gains or "AI optimization" (whether true or not), impacting many thousands of people worldwide;
→ Millions of people around the world seeking to learn more, upskill, and understand how they can be part of this AI wave and benefit from it;
→ Countless professionals around the world adapting to emerging AI challenges that affect their daily work;
→ Dozens of countries working on AI legislation and AI-related policies and safeguards;
→ Etc.
The AI wave is real and impactful - at least in the economic, social, and normative sense - and we should all be open to learning, adapting, and growing with it.
Those who ignore it will have to catch up later.
*The quotes above are taken from the CNBC and WEF reports on this year's annual meeting in Davos.
🎤 Tackling Dark Patterns & Online Manipulation in 2024
A special thanks to MineOS, the sponsor of this live talk & podcast episode:
I invited two globally recognized scholars - Prof. Cristiana Santos and Prof. Woodrow Hartzog - to discuss with me their perspectives on dark patterns (deceptive design) and online manipulation. We'll talk about:
→ The past, present, and future of dark patterns
→ Laws against dark patterns and the challenges of regulating design
→ Dark patterns in code
→ The challenges of identifying, documenting, and curbing online manipulation
→ Deepfakes, anthropomorphism, and other forms of AI-related manipulation (which I call dark patterns in AI)
💡 Register here to receive a notification when the session is about to start, join us live and comment in the live chat, and get a recording of the session by email to re-watch later. I hope to see you there next week!
💡 To become a sponsor of our live talks & podcast episodes and reach thousands of privacy & tech decision-makers: get in touch.
🤖 Job opportunities in AI governance
AI governance is hiring, and here are some of the hottest positions:
→ Google is hiring a "Manager, AI Policy, Government Affairs and Public Policy" in Washington D.C. or California;
→ Meta is hiring an "AI Policy & Governance Manager" in the UK;
→ Nvidia is hiring an "AI and Data Governance Legal Counsel" in California;
→ Canva is hiring a "Head of Global AI Policy" in Australia;
→ Dataiku is hiring a "Solutions Consultant, AI Governance" in New York.
For hundreds of open positions and to receive our weekly email with curated job openings, check out our privacy job board and AI job board.
💡 To promote your organization's job openings, get in touch.
🦋 Privacy, Tech & AI Bootcamp
Do you want to learn with peers, accelerate your career, and increase your leadership impact? Join our 4-week Privacy, Tech & AI Bootcamp and gain tools to deal with emerging challenges at the intersection of privacy & AI. You'll receive 8 CPE credits pre-approved by the IAPP and a certificate. Check out the program and register for one of the February cohorts here (limited seats).
🎉 The EU Data Act is here
The EU Data Act entered into force this month, and those working with data protection & AI should be familiar with it. Here's what you need to know:
Context
The Data Act is part of the "European Strategy for Data," which focuses on "putting people first in developing technology, and defending and promoting European values and rights in the digital world."
According to the European Commission's website: "The Data Act will ensure fairness in the digital environment by clarifying who can create value from data and under which conditions. It will also stimulate a competitive and innovative data market by unlocking industrial data, and by providing legal clarity as regards the use of data."
Behind the Data Act is the recognition that enormous amounts of data have been generated in recent years, including by IoT (Internet of Things) devices; its aim is to (a) encourage the use of this data and (b) ensure that this use respects applicable EU rules.
Macro movements
I think it's particularly important to pay attention to these macro movements (i.e., regulating data broadly speaking), especially in the context of AI and geopolitical efforts to control its development.
Since the GDPR entered into force in 2018, the EU has positioned itself as a leader in 'tech regulation'. It not only wants to dictate the rules; it wants to be the first to do so broadly, setting the tone and the scope.
This positioning has been quite successful. The GDPR generated waves of regulatory change around the globe, and dozens of countries drafted their internal rules inspired by the GDPR's provisions.
Key stipulations
According to the EU Commission website, these are some of the Data Act's key stipulations:
→ "Increasing legal certainty for companies and consumers engaged in data generation, particularly within the Internet-of-Things framework; (...)
→ Mitigating the abuse of contractual imbalances that impede equitable data sharing; (...)
→ Rules enabling public sector bodies to access and use data held by the private sector for specific public interest purposes; (...)
→ New rules setting the framework for customers to effectively switch between different providers of data-processing services to unlock the EU cloud market. (...)"
Takeaways
→ Pay attention: the Data Act is much more about "fair data sharing" than "data protection" in the strict sense we have in the GDPR. Data protection and tech regulation are evolving fast.
→ The EU tech regulatory landscape is getting more and more complex: GDPR, DSA, DMA, Data Act, Data Governance Act, AI Act (hopefully soon), and so on. Privacy and AI professionals will have to learn to navigate complex compliance scenarios.
Read the official text of the Data Act here.
🔮 Max Schrems’ view on tech regulation
Most people know Max Schrems for Schrems I & II and his tireless advocacy at Noyb. However, he is also a visionary thinker.
With the AI Act approaching (and everybody discussing the leaked text), you might want to watch or listen to our 1-hour talk, especially the short clip above, about some of the challenges of tech regulation. Don't miss it!
🔥 NEW! AI Briefing
Are you getting serious about AI? Do you want to be on top of current trends and the latest news? Our AI Briefing can help!
Here are the most important AI topics of the week: