👋 Hi, Luiza Jarovsky here. Welcome to the 127th edition of this newsletter with the latest developments in AI policy, compliance & regulation, read by 33,700+ subscribers in 145+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I discuss the AI Act's mechanisms for submitting complaints and how they can be understood in a broader governance context. Paid subscribers received it earlier today and can access it here: 📥 AI Act: Submitting Complaints. If you're not a paid subscriber yet, choose a paid subscription and gain access to my weekly exclusive analyses on AI compliance and regulation.
🚀 Accelerate your career in September: our 4-week Bootcamps start this week! 900+ people have already joined our training programs. Join the AI Governance Package and get 20% off. 👉 Register here.
👉 A special thanks to MineOS for sponsoring this week's free edition of the newsletter. Read their article:
Data subject requests are rising year over year and becoming a common occurrence for businesses. However, growing technological challenges and regulatory requirements from laws such as California’s Delete Act are reshaping the approach to managing DSRs. Explore the latest nuances and strategies for handling DSRs in this article from MineOS.
🩺 AI in Healthcare: Risks & Challenges
There have been interesting AI-driven advancements in healthcare, along with growing discussion of the related risks & compliance challenges.
➡️ Privacy is one of the challenges. According to this article:
“The integration of AI in healthcare brings forth complex privacy challenges, particularly in the realms of data control and usage by private corporations. As AI systems increasingly handle sensitive patient data, concerns regarding the stewardship of this information by private entities become paramount. Establishing transparent and accountable data governance frameworks that respect patient privacy and agency is essential to addressing the potential for misuse or unauthorized exploitation of health data. These challenges underscore the need for a delicate balance between technological advancement and the protection of patient rights.”
➡️ Regarding privacy risk, this other article highlighted:
“AI could use outside information to reidentify an anonymized patient in different contexts—a violation of the Health Insurance Portability and Accountability Act (HIPAA) and a huge risk for providers.”
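The re-identification risk described in the quote above can be illustrated with a toy "linkage attack" sketch, in which an anonymized dataset is joined with outside information through quasi-identifiers. All names, fields, and records below are invented for illustration:

```python
# Toy linkage attack: no direct identifiers appear in the anonymized
# records, yet joining on quasi-identifiers (ZIP, birth year, sex)
# against a public dataset recovers patient identities.

anonymized_records = [  # identifiers stripped, diagnosis kept
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "02139", "birth_year": 1972, "sex": "M", "diagnosis": "asthma"},
]

public_records = [  # e.g. a voter roll: names plus the same quasi-identifiers
    {"name": "Alice Smith", "zip": "02138", "birth_year": 1965, "sex": "F"},
    {"name": "Bob Jones", "zip": "02139", "birth_year": 1972, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(anon, public):
    """Re-identify anonymized records whose quasi-identifiers match
    exactly one record in the public dataset."""
    matches = []
    for a in anon:
        key = tuple(a[q] for q in QUASI_IDENTIFIERS)
        hits = [p for p in public
                if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(hits) == 1:  # unique match -> identity recovered
            matches.append((hits[0]["name"], a["diagnosis"]))
    return matches

print(link(anonymized_records, public_records))
# -> [('Alice Smith', 'diabetes'), ('Bob Jones', 'asthma')]
```

An AI system with access to large outside datasets can perform this kind of matching at scale, which is why "anonymized" health data may still fall within HIPAA's concerns.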
➡️ Among additional challenges, according to this paper, are:
➵ “Liability concerns;
➵ Risk classification challenges;
➵ Detecting and managing cybersecurity vulnerabilities;
➵ Interaction between new medical devices and legacy components;
➵ Assessing and communicating the transparency and explainability;
➵ Understanding and assessing types of bias;
➵ Responsible and accountable data management across the lifecycle of a medical device.”
Below are 10 great resources to learn more. Download, read & share:
1️⃣ "AI and the Future of Healthcare"
✏️ Berkeley Research Group
🔎 Read it here.
2️⃣ “Balancing Privacy and Progress: A Review of Privacy Challenges, Systemic Oversight, and Patient Perceptions in AI-Driven Healthcare”
✏️ Steven M. Williamson & Victor Prybutok
🔎 Read it here.
3️⃣ "The Future of Medical Device Regulation and Standards: Dealing with Critical Challenges for Connected, Intelligent Medical Devices"
✏️ Andrew Mkwashi & Irina Brass
🔎 Read it here.
4️⃣ "AI in health care: hope or hype?"
✏️ The Health Foundation/John Bell & Axel Heitmueller
🔎 Listen to it here.
5️⃣ "Scaling Smart Solutions with AI in Health: Unlocking Impact on High-Potential Use Cases"
✏️ World Economic Forum (Daniel Reiss, Antonio Spina)
🔎 Read it here.
6️⃣ “EU Regulation of Artificial Intelligence: Challenges for Patients’ Rights”
✏️ Hannah van Kolfschooten
🔎 Read it here.
7️⃣ "From ‘AI to Law’ in Healthcare: The Proliferation of Global Guidelines in a Void of Legal Uncertainty"
✏️ Barry Solaiman
🔎 Read it here.
8️⃣ "Patient-First Health with Generative AI: Reshaping the Care Experience"
✏️ World Economic Forum (Daniel Reiss, Antonio Spina)
🔎 Read it here.
9️⃣ "Effective Integration of Artificial Intelligence in Medical Education: Practical Tips and Actionable Insights"
✏️ Manuel B. Garcia, Yunifa Miftachul Arif, Zuheir N. Khlaif, Meina Zhu, Rui Almeida, Raquel Simões de Almeida & Ken Masters
🔎 Read it here.
🔟 "Data Governance in AI-Enabled Healthcare Systems: A Case of the Project Nightingale"
✏️ Aisha Temitope Arigbabu, Oluwaseun Oladeji Olaniyi, Chinasa Susan Adigwe, Olubukola Omolara Adebiyi & Samson Abidemi Ajayi
🔎 Read it here.
♻️ If you have friends and colleagues working in healthcare, share these articles with them and help spread awareness of AI-related risks and challenges.
📑 [AI Research] “The AI-Copyright Trap”
The article "The AI-Copyright Trap," by Carys Craig, is an excellent read for everyone interested in AI governance, as well as for those following the various AI copyright lawsuits I have been covering. Here are some quotes:
"There are many good reasons to be concerned about the rise of generative AI (...). Unfortunately, there are also many good reasons to be concerned about copyright’s growing prevalence in the policy discourse around AI’s regulation. Insisting that copyright protects an exclusive right to use materials for text and data mining practices (whether for informational analysis or machine learning to train generative AI models) is likely to do more harm than good. As many others have explained, imposing copyright constraints will certainly limit competition in the AI industry, creating cost-prohibitive barriers to quality data and ensuring that only the most powerful players have the means to build the best AI tools (provoking all of the usual monopoly concerns that accompany this kind of market reality but arguably on a greater scale than ever before). It will not, however, prevent the continued development and widespread use of generative AI."
"(...) As Michal Shur-Ofry has explained, the technical traits of generative AI already mean that its outputs will tend towards the dominant, likely reflecting 'a relatively narrow, mainstream view, prioritizing the popular and conventional over diverse contents and narratives.' Perhaps, then, if the political goal is to push for equality, participation, and representation in the AI age, critics’ demands should focus not on exclusivity but inclusivity. If we want to encourage the development of ethical and responsible AI, maybe we should be asking what kind of material and training data must be included in the inputs and outputs of AI to advance that goal. Certainly, relying on copyright and the market to dictate what is in and what is out is unlikely to advance a public interest or equality-oriented agenda."
"If copyright is not the solution, however, it might reasonably be asked: what is? The first step to answering that question—to producing a purposively sound prescription and evidence-based prognosis—is to correctly diagnose the problem. If, as I have argued, the problem is not that AI models are being trained on copyright works without their owners’ consent, then requiring copyright owners’ consent and/or compensation for the use of their work in AI-training datasets is not the appropriate solution. (...) If the only real copyright problem is that the outputs of generative AI may be substantially similar to specific human-authored and copyright-protected works, then copyright law as we know it already provides the solution."
👉 Read the full paper here.
🎙️ [AI Live Talks] Conversation with Max Schrems
If you are interested in the intersection of privacy and AI, don't miss my live talk with Max Schrems (our second one!). Register now. Here's why you should join us live:
➵ If you have been reading this newsletter for some time, you know that my view is that common AI practices - which became ubiquitous in the current Generative AI wave - are unlawful from a GDPR perspective. Yet, we have not seen a clear response from data protection authorities.
➵ Max - the Chairman of noyb and one of the world's leading privacy advocates - has tirelessly defended privacy rights. More recently, he and his team have also pioneered efforts to defend those rights against new AI-related risks and challenges.
➵ In this live talk, we'll discuss noyb's recent legal actions in this area, including their complaints against Meta & X/Twitter, legitimate interest in the context of AI, and more.
➵ This is my second talk with Max. The first one, in which we discussed GDPR enforcement challenges, was watched by thousands of people (live and on-demand). You can find it here.
👉 If you are interested in privacy and AI, or if you work in AI policy, compliance & regulation, you can't miss it. To participate, register here.
📑 [AI Research] AI-Assisted Police Reports
The article "AI-Assisted Police Reports and the Challenge of Generative Suspicion" by Andrew Ferguson is a must-read for everyone in AI governance. Quotes and comments below:
"This Article addresses what happens when policing patterns are reduced to AI-generated police reports with police officers filling in the gaps with the help of their police-body camera audio. This is the question soon to be facing courts with Axon’s launch of “Draft One” – an Open AI GPT-4 Turbo model police report writing system. In simplified form, the audio from a police officer's body camera generates a draft police report, with various filters, inserts, and prompts built via generative AI large language models. Police reports become a fill-in-the-blanks “Mad-Libs” of suspicion, investigation, and fact-development ready for court. The promise for police departments is increased efficiency, consistency, and a significant time-savings around admittedly tedious paperwork. The danger for the criminal legal system is the digital poisoning of fact-based development in criminal trials by algorithmically altering the narrative." (page 4)
"The questions that arise involve three sets of concerns. The first concern involves training data and how the AI models were trained to fill in the reports. The second concern involves the transfer of data from the streets to the narrative report through automated translation/transcription technology. The third area of concern involves the final report and how generative AI transforms the structure and substance of the report." (page 23)
"Next generation AI might go even further, obviating the need for the police officer to narrate the scene via audio. Video analytics technology has already mastered the task of identifying simple objects and actions. Video analytics currently in use in more sophisticated police departments can search for a particular object (hat, backpack, gun) throughout thousands of cameras. In short order, those same images and the object recognition technology will generate a narrative of what is happening on the video. The objects will be recognized, the actions cataloged, and a narrative description provided. In the policing context, an AI would review the video and determine the person, crime, and severity to include in the police report. (...)" (page 53)
➡️ The topic has been all over the media recently (see, for example, this news article), so it's a great idea to read this in-depth overview of the potential ethical & legal implications and to reflect on what this AI-powered tool might mean for the broader criminal justice system.
👉 Link to the full paper here.
🎓 [Corporate] AI Governance Upskilling
I would welcome the opportunity to:
➵ Give a talk about the latest developments in AI, tech & privacy, discussing emerging compliance & governance challenges in these areas;
➵ Coordinate an in-company AI Governance training for your team.
👉 Schedule a training program here.
🔥 [Job Openings] AI Governance is HIRING
Below are 16 new AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. 🇹🇷 Mastercard: Manager, AI Governance - apply
2. 🇯🇵 Rakuten: AI Governance Manager - apply
3. 🇮🇪 Analog Devices: Senior Manager, AI Governance - apply
4. 🇺🇸 Hyatt Hotels: Director Data & AI Governance - apply
5. 🇦🇹 Dynatrace: Product Manager, AI Governance - apply
6. 🇪🇸 Zurich Insurance: AI Governance Architect - apply
7. 🇺🇸 DaVita Kidney Care: VP Data Strategy, AI & Governance - apply
8. 🇺🇸 Cruise: Technical PM, Privacy, Data & AI Governance - apply
9. 🇵🇹 Siemens Energy: AI Governance Consultant - apply
10. 🇨🇿 PwC: Senior Consultant, AI Governance - apply
11. 🇺🇸 Snowflake: Senior PMM Data & AI Governance - apply
12. 🇨🇦 Deloitte: Senior Manager, AI Governance, Risk & Data - apply
13. 🇮🇳 BCE Global Tech: AI Governance Program Manager - apply
14. 🇩🇪 Dataiku: Software Engineer, AI Governance - apply
15. 🇺🇸 J.R. Simplot Company: AI Governance Analyst - apply
16. 🇸🇬 HTX: Lead Engineer, AI Governance - apply
👉 For more AI governance and privacy job opportunities, subscribe to our weekly job alert. Good luck!
🚀 [Last Call] Accelerate Your Career in September
Our 4-week AI Governance Bootcamps are live online training programs designed for professionals who want to upskill and advance their AI governance careers. 900+ professionals have already joined them.
👉 Check out the programs & read testimonials here; sign up for information about upcoming programs here. If you have questions, write to me.
🔥 The September cohorts of our AI Governance Bootcamps start this week. Register for the AI Governance Package and get 20% off.
🙏 Thank you for reading
➵ If you have comments on this edition, write to me, and I'll get back to you soon.
➵ If you enjoyed this edition, consider sharing it with friends & colleagues and help me spread awareness about AI policy, compliance & regulation.
See you next week!
All the best, Luiza