⛔ A new information age is born
The latest developments in AI policy & regulation | Luiza's Newsletter #106
👋 Hi, Luiza Jarovsky here. Welcome to the 106th edition of this newsletter on AI policy & regulation, read by 25,600+ subscribers in 135+ countries. I hope you enjoy reading it as much as I enjoy writing it.
⏰ Last call: the 4-week EU AI Act Bootcamp we are offering at the AI, Tech & Privacy Academy starts tomorrow. Don't miss it!
👉 A special thanks to MineOS for sponsoring this week's free edition of the newsletter. Check out their guide:
American data privacy is evolving rapidly, with 7 states passing new regulations in 2024 alone. Now, over 50% of the US population is covered by these laws. Each of them adds complexity to the privacy landscape, making it essential to understand the legal requirements of all states for compliance. Read the MineOS guide to prepare your business for the future of data privacy in the US.
⛔ A new information age is born
The internet is changing, and AI is leading us to a new information age where knowledge production, consumption, and control are being reshaped. We might not like how it ends. I summarize it below:
➡️ When the commercial internet started, anyone could create a website or a blog. A few years later, anyone could have a voice and broadcast on social media or video platforms. Those two developments fueled the entrepreneurial boom that followed. They were also democratizing forces:
➵ You did not need massive investment to start an online business;
➵ Through blogs and social media, suddenly, anyone could speak and reach a worldwide audience. In this sense, it was a positive disruption from the pre-internet era, when newspapers, radio, television, and major studios had almost complete control over information.
➡️ The current AI wave takes the opposite direction:
➵ Training a powerful AI model is immensely expensive. As Sam Altman said: "It's hopeless to compete with OpenAI." Small businesses have no chance.
➵ Established big tech companies, such as Google, Microsoft, Amazon, Meta, and Nvidia, are dominating the current AI wave. Smaller businesses need either massive investment or a productive deal with a big player; truly small businesses cannot compete.
➵ Big tech is embedding AI into every product. This leaves us with less agency and less voice, as AI systems "will do the legwork for us" (in Google's words). What big tech calls "legwork" also includes intellectual, creative, and artistic work.
➵ Whatever we post online is being used to train AI, which will result in more power and profit for the tech companies that own the most powerful AI models.
➵ Search engines, which represented a big revolution for small businesses, creators, and anyone with a website, are now being replaced by "AI-powered search." This is a massive shift toward less diversification and the monopolization of ideas by large AI companies. Their AI systems will be the "oracle," telling everyone what to know, think, and do. The owner of the oracle will have unimaginable power over human ideas and knowledge.
➡️ The concentration of money and power in the hands of a few tech companies has existed since the beginning of the internet. What is fundamentally different this time is their AI-powered ability to control ideas, knowledge, and decision-making.
➡️ Mainstream AI systems will soon shape how we think and behave. Their built-in features and rankings, designed by a few leading tech engineers, will determine which ideas are treated as true and influence the masses, and which ideas are forgotten or forcibly deleted.
➡️ While doing the intellectual "legwork" for us, they will also be replacing human decision-making, ideas, and agency, building up immense centers of knowledge control.
➡️ The AI-powered internet will change society forever by disrupting how we acquire knowledge, what type of information we consume, and how we produce and share information. I'm sincerely concerned.
💡To learn more about some of the legal and ethical challenges of the new information age, check out our 4-week Bootcamps in AI, Tech, and Privacy.
👾 Personalized AI deepfakes
Dystopia: having our own LLMs so that our AI deepfakes can live our lives for us. Here's what Zoom's CEO said:
"Let’s say the team is waiting for the CEO to make a decision or maybe some meaningful conversation, my digital twin really can represent me and also can be part of the decision making process. We’re not there yet, but that’s a reason why there’s limitations in today’s LLMs. Everyone shares the same LLM. It doesn’t make any sense. I should have my own LLM — Eric’s LLM, Nilay’s LLM. All of us, we will have our own LLM. Essentially, that’s the foundation for the digital twin. Then I can count on my digital twin. Sometimes I want to join, so I join. If I do not want to join, I can send a digital twin to join. That’s the future."
📚 AI Book Club announcement
The chosen book for June 2024 is "Guardrails: Guiding Human Decisions in the Age of AI" by Urs Gasser & Viktor Mayer-Schoenberger. Selected quotes:
"Guardrails for decision variability ought to address such institutional limitations. For instance, instead of mostly emphasizing the acquisition of knowledge in formal education, we could also aim to train our skills to imagine better (or at least additional) decision options. More generally, we could avoid talking about solving problems as if there is only one valid solution - and rather underline that a variety of decision options exist, each with its own pros and cons." (page 78)
"Linking the use of technical tools with social structures and mechanisms necessitates a sharp understanding of what tools can and cannot achieve. It requires a clear definition of the interface between technology and social structures, including resolving questions of agency and control. It also entails a sufficiently defined focus for technical tools to deliver what is expected from them. This means tools must be well understood and evaluated as fit for purpose. And we must equally expect a clarity of design and process from the social structures that make up the system of guardrails, as well as a functioning interface. More challenging but also potentially more useful would be socio-technical setups that are capable of iteration, adaption, and learning - so that the system of guardrails not only helps individual decision-makers to improve but also the system itself to evolve and progress as experience accrues and contexts become clearer." (page 180)
"In considering the role of guardrails, we affirm the importance of individual choice and human volition. Guardrails can guide us, but they ought not and cannot decide for us. This is both a blessing and a curse. The latter because we cannot escape responsibility for the trajectory that humanity takes. And the former because without volition we would not have agency. It's the human condition: to decide as individuals yet be anchored in society. The guardrails we wrote about in this book link one with the other - and good guardrails embrace and deepen this link, while appreciating the limitations of all human decisions and the potential for learning, progress, and evolution this entails." (page 190)
➡️ The topic of human decision-making and guardrails does not receive a fraction of the attention it deserves in the context of fast-paced AI development and ubiquitous adoption (sometimes imposition). This book is deep, complex, and a breath of fresh air for curious minds who want to help shape the future of technology and policymaking.
➡️ Reply to this email to let me know if you have read it, are planning to read it, have thought about the topic, or are working on related issues.
➡️ To join our AI Book Club (1000+ members) and receive our monthly book recommendations, register here.
🔥 AI Governance is HIRING
Below are ten AI Governance positions posted last week. Bookmark, share, and be an early applicant:
1. Google: Director, Content and AI Policy, Risk, Compliance, Integrity - apply
2. IBM: Senior Attorney, Privacy + AI - apply
3. Anthropic: Enforcement Lead, Trust & Safety - apply
4. Barclays Bank US: AI Governance and Oversight - apply
5. Visa: Lead System Architect, AI Governance - apply
6. WayUp: AI Governance and Oversight - apply
7. Siemens Energy: AI Governance Consultant - apply
8. Zurich Insurance: AI Governance Expert - apply
9. Canada Life: Director Data & AI Governance - apply
10. Northrop Grumman: AI Governance Systems Engineer - apply
➡️ For more job opportunities in AI governance and privacy, subscribe to our weekly job alert.
➡️ To upskill and land your dream AI governance job, check out our training programs in AI, tech & privacy. Good luck!
💻 On-demand course: Limited-Risk AI Systems
I'm excited to share with you our first on-demand course on the EU AI Act: Limited-Risk AI Systems. In it, I discuss the limited-risk category established in Article 50, with examples and my insights on its potential weaknesses.
💡Tip: Paid subscribers get free access to our monthly on-demand courses. Upgrade to paid and access it today.
📋 Model AI Governance Framework for Generative AI
Singapore released the report "Model AI Governance Framework for Generative AI - Fostering a Trusted Ecosystem," and it's a must-read for everyone interested in AI policy & regulation. Important information:
➡️ The report outlines 9 dimensions to "create a trusted environment – one that enables end-users to use Generative AI confidently and safely, while allowing space for cutting-edge innovation." They are:
➵ Accountability
➵ Data
➵ Trusted Development and Deployment
➵ Incident Reporting
➵ Testing and Assurance
➵ Security
➵ Content Provenance
➵ Safety and Alignment R&D
➵ AI for Public Good
➡️ Interesting quotes:
"Responsibility can be allocated based on the level of control that each stakeholder has in the generative AI development chain, so that the able party takes necessary action to protect end-users. As a reference, while there may be various stakeholders in the development chain, the cloud industry has built and codified comprehensive shared responsibility models over time. The objective is to ensure overall security of the cloud environment. These models allocate responsibility by explaining the controls and measures that cloud service providers (who provide the base infrastructure layer) and their customers (who host applications on the layer above) respectively undertake." (page 7)
"There is, therefore, a need to work with key parties in the content life cycle, such as working with publishers to support the embedding and display of digital watermarks and provenance details. As most digital content is consumed through social media platforms, browsers, or media outlets, publishers’ support is critical to provide end users with the ability to verify content authenticity across various channels. There is also a need to ensure proper and secure implementation to circumvent bad actors trying to exploit it in any way." (page 25)
"Industry, governments, and educational institutions can work together to redesign jobs and provide upskilling opportunities for workers. As organisations adopt enterprise generative AI solutions, they can also develop dedicated training programmes for their employees. This will enable them to navigate the transitions in their jobs and enjoy the benefits which result from job transformations." (page 30)
🏢 The structure of the AI Office
The EU Commission released more information on the structure of the EU AI Office. Here's what you need to know:
➡️ According to the official page, from June 16th, the organizational setup of the European AI Office will consist of 5 units and 2 advisors:
➵ The “Excellence in AI and Robotics” unit
➵ The “Regulation and Compliance” unit
➵ The “AI Safety” unit
➵ The “AI Innovation and Policy Coordination” unit
➵ The “AI for Societal Good” unit
➵ The Lead Scientific Advisor
➵ The Advisor for International Affairs
➡️ The AI Office will employ over 140 staff, including:
➵ Technology Specialists
➵ Administrative Assistants
➵ Lawyers
➵ Policy Specialists
➵ Economists
➡️ The AI Office will recruit people with a variety of backgrounds, and you can sign up to receive their updates.
🔬 New report: Science in the Age of AI
The Royal Society published the report "Science in the Age of AI - How AI is changing the nature and method of scientific research," and it's a must-read for everyone interested in AI & science. Important information:
➡️ According to the official release, the report addresses the following questions:
➵ How are AI-driven technologies transforming the methods and nature of scientific research?
➵ What are the opportunities, limitations, and risks of these technologies for scientific research?
➵ How can relevant stakeholders (governments, universities, industry, research funders, etc) best support the development, adoption, and uses of AI-driven technologies in scientific research?
➡️ Some of the key findings are:
"Beyond landmark cases like AlphaFold, AI applications can be found across all STEM fields, with a concentration in fields such as medicine, materials science, robotics, agriculture, genetics, and computer science. The most prominent AI techniques across STEM fields include artificial neural networks, deep learning, natural language processing and image recognition."
"China contributes approximately 62% of the patent landscape. Within Europe, the UK has the second largest share of AI patents related to life sciences after Germany, with academic institutions such as the University of Oxford, Imperial College, and Cambridge University featuring prominently among the top patent filers in the UK. Companies such as Alphabet, Siemens, IBM, and Samsung appear to exhibit considerable influence across scientific and engineering fields."
"Interdisciplinary collaboration is essential to bridge skill gaps and optimise the benefits of AI in scientific research. By sharing knowledge and skills from each other’s fields, collaboration between AI and domain subject experts (including researchers from the arts, humanities, and social sciences) can help produce more effective and accurate AI models. This is being prevented, however, by siloed research environments and an incentive structure that does not reward interdisciplinary collaboration in terms of contribution towards career progression."
📄 Fascinating AI paper alert
The paper "Promises and pitfalls of artificial intelligence for legal applications" by Sayash Kapoor, Peter Henderson & Arvind Narayanan is a must-read for everyone interested in AI and the legal profession. Quotes:
"Recent instruction-tuned language models (chatbots) cannot necessarily outperform models fine-tuned on law-specific datasets [Chalkidis 2023]. Further, many information-processing tasks can also be carried out by professionals without a law degree. For these reasons, while large language models offer improvements over existing tools — possibly in terms of accuracy but especially in terms of cost, by decreasing the amount of task-specific development required — they do not drastically change legal information processing for experts." (page 3)
"The low accuracy demonstrates that automating judgments from the text of legal cases is hard. This is not surprising: legal outcomes depend on the context and specifics of cases, the available documents might not comprise the entirety of the context of the case being adjudicated, and the specific judgment might depend on a specific judge's (or set of judges') interpretation of the arguments. In addition, there is significant variability across different jurisdictions, meaning the amount of data that can be used to train AI to automate judgments in any specific jurisdiction is small. Finally, the judgments made over time evolve with changes to the specific judges, the set of past cases comprising precedent, legislation and many other factors." (page 8)
"The use of AI for prediction, whether of court decisions or recidivism, fundamentally differs from information processing tasks and tasks involving creativity, reasoning, or judgment. They attempt to predict the future without sufficient observability of relevant features and lack data to form a robust model of the world that would allow for accurate predictions. Instead, they rely on extremely rough generalizations and approximations using simple linear models (when the underlying dynamics are far from linear)." (page 10)
😨 The scary side of AI expansion
As AI expansion and integration continue, there are two scary consequences that most people are not aware of. I summarize them in 4 minutes; watch:
🎤 Are you looking for a speaker in AI, tech & privacy?
I would welcome the opportunity to:
➵ Give a talk at your company;
➵ Speak at your event;
➵ Coordinate a training program for your team.
➡️ Get in touch
⏰ Reminder: Upcoming training opportunities
[Last call] The EU AI Act Bootcamp
🗓️ Thursdays, June 6 to 27, 10am PT / 6pm UK time
👉 Register here
Emerging Challenges in AI, Tech & Privacy
🗓️ Wednesdays, July 17 to Aug 7, 10am PT / 6pm UK time
👉 Register here
📩 To receive our AI, Tech & Privacy Academy weekly emails with learning opportunities, subscribe to our Learning Center.
I hope to see you there!
🙏 Thank you for reading!
If you have comments on this week's edition, write to me, and I'll get back to you soon.
To receive the next editions in your email, subscribe here.
Have a great day.
Luiza