📊 Five AI reports you can't miss
All you need to know about AI Policy & Regulation | Luiza's Newsletter #99
👋 Hi, Luiza Jarovsky here. Welcome to the 99th edition of this newsletter, read by 22,250+ subscribers in 125+ countries. I hope you enjoy reading it as much as I enjoy writing it.
A special thanks to Didomi, this week's sponsor:
Check out Didomi's official guide to user consent in Europe. Packed with insights from millions of data points, this detailed study offers an in-depth analysis of how consent is gathered in 2024. Dive into average consent rates by industry, different consent banner styles, country comparisons, and more. Didomi also shares crucial information on the impacts of Google Consent Mode v2 and TCF v2.2. Find out where you stand and boost your data privacy strategy. Download the benchmark
📊 Five AI reports you can't miss
The AI reports don't stop coming in - and this was a particularly busy week. Below, I list five AI reports you can't miss, each with a download link, a note on why it matters, and highlighted quotes:
1️⃣ The German Federal Office for Information Security (BSI)
➵ Title: "Generative AI Models - Opportunities and Risks for Industry and Authorities." Link.
➵ Why it matters: This report is especially detailed on AI-related risks and potential countermeasures. It is a must-read for people developing AI or working on AI policy and regulation, especially pages 8-28.
➵ Quotes:
"LLMs are trained based on huge text corpora. The origin of these texts and their quality are generally not fully verified due to the large amount of data. Therefore, personal or copyrighted data, as well as texts with questionable, false, or discriminatory content (e.g., disinformation, propaganda, or hate messages), may be included in the training set. When generating outputs, these contents may appear in these outputs either verbatim or slightly altered (Weidinger, et al., 2022). Imbalances in the training data can also lead to biases in the model" (page 9)
-
"If individual data points are disproportionately present in the training data, there is a risk that the model cannot adequately learn the desired data distribution and, depending on the extent, tends to produce repetitive, one-sided, or incoherent outputs (known as model collapse). It is expected that this problem will increasingly occur in the future, as LLM-generated data becomes more available on the internet and is used to train new LLMs (Shumailov, et al., 2023). This could lead to self-reinforcing effects, which is particularly critical in cases where texts with abuse potential have been generated, or when a bias in text data becomes entrenched. This happens, for example, as more and more relevant texts are produced and used again for training new models, which in turn generate a multitude of texts (Bender, et al., 2021)." (page 10)
-
"The high linguistic quality of the model outputs, combined with user-friendly access via APIs and the enormous flexibility of responses from currently popular LLMs, makes it easier for criminals to misuse the models for a targeted generation of misinformation (De Angelis, et al., 2023), propaganda texts, hate messages, product reviews, or posts for social media." (page 11)
2️⃣ The New York State Bar Association
➵ Title: “Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence.” Link.
➵ Why it matters: As dozens of AI tools focused on lawyers and legal work have been launched, it's essential to understand how the bar association sees them, the risks and potential opportunities they bring, and the professional rules that apply.
➵ Quotes:
"In addition to competence, attorneys must resist viewing these tools through a techno-solutionism lens. 'Techno-solutionism' is the belief that every social, political and access problem has a solution based in development of new technology. In this case, some view generative AI as the solution to the access to justice problem. As infamously demonstrated in the Avianca case, in which an attorney utilized ChatGPT (a generative AI tool) to write a brief that contained fictitious legal precedent, attorneys cannot rely on technology without verification." (page 29)
-
"In fact, the California Bar Association recommends that lawyers inform their clients if generative AI tools will be used as part of their representation. The Florida Bar Association takes its recommendation a step further, suggesting that lawyers obtain informed consent before utilizing such tools. Whether an attorney informs the client or obtains formal consent, the ethical obligation to protect client data remains unchanged from the introduction of generative AI tools." (page 30)
-
"Pro bono attorneys have found that generative AI tools are excellent at summarizing and extracting relevant information from documents, translating legalese into plain English and helping to quickly analyze thousands of existing court forms. In addition, ChatGPT and other similar generative AI tools can identify potential clients’ legal needs and build out and maintain legal navigators." (page 42)
3️⃣ US, Australia, Canada, New Zealand, and the UK (various entities)
➵ Title: "Deploying AI Systems Securely." Link.
➵ Why it matters: This is a transnational AI security report published jointly by major cybersecurity centers in the US, Australia, Canada, New Zealand, and the UK, presenting a unified summary of their perspective on securely deploying AI systems.
➵ Quotes:
"Understand the organization’s risk level and ensure that the AI system and its use is within the organization’s risk tolerance overall and within the risk tolerance for the specific IT environment hosting the AI system. Assess and document applicable threats, potential impacts, and risk acceptance." (page 3)
-
"Do not run models right away in the enterprise environment. Carefully inspect models, especially imported pre-trained models, inside a secure development zone prior to considering them for tuning, training, and deployment. Use organization approved AI-specific scanners, if and when available, for the detection of potential malicious code to assure model validity before deployment." (page 6)
-
"Educate users, administrators, and developers about security best practices, such as strong password management, phishing prevention, and secure data handling. Promote a security-aware culture to minimize the risk of human error. If possible, use a credential management system to limit, manage, and monitor credential use to minimize risks further" (page 8).
-
"AI systems are software systems. As such, deploying organizations should prefer systems that are secure by design, where the designer and developer of the AI system takes an active interest in the positive security outcomes for the system once in operation." (page 9)
4️⃣ Stanford Institute for Human-Centered AI
➵ Title: "Artificial Intelligence Index Report 2024." Link.
➵ Why it matters: This report is one of the most authoritative sources of data and insights on AI. At more than 500 pages, it covers the field in remarkable depth and breadth.
➵ Top 10 takeaways: (pages 5 and 6)
1. “AI beats humans on some tasks, but not on all;
2. Industry continues to dominate frontier AI research;
3. Frontier models get way more expensive;
4. The United States leads China, the EU, and the U.K. as the leading source of top AI models;
5. Robust and standardized evaluations for LLM responsibility are seriously lacking;
6. Generative AI investment skyrockets;
7. The data is in: AI makes workers more productive and leads to higher quality work;
8. Scientific progress accelerates even further, thanks to AI;
9. The number of AI regulations in the United States sharply increases;
10. People across the globe are more cognizant of AI’s potential impact—and more nervous.”
5️⃣ World Economic Forum and Schwab Foundation for Social Entrepreneurship, in collaboration with EY and Microsoft
➵ Title: "AI for Impact: The Role of Artificial Intelligence in Social Innovation." Link.
➵ Why it matters: As AI spreads, it's essential to discuss its impact - both the opportunities and the risks - on social innovation and social initiatives in general, especially in the Global South, where resources are scarce.
➵ Quotes:
"The report finds three primary impact areas where AI is making significant contributions: healthcare, with 25% of innovators using AI to advance access to health; environmental sustainability, with 20% of social innovators applying AI to tackle climate solutions; and economic empowerment, notably prevalent in lower-income countries where 80% of all initiatives aimed at enhancing livelihoods are based. But AI is also revolutionizing practices in other areas such as agriculture through predictive analytics and precision farming, addressing climate resilience and boosting productivity." (page 4)
-
"Analysis showed that 57% of social innovators addressing Good Health and Well-being are adding AI into core services. They are leveraging AI capabilities such as ML and NLP to enhance how their current products are offered and to strengthen the quality, scale, speed or efficiency of their solutions" (page 18)
-
"While ML is the most deployed AI capability, nearly 15% of social innovators are deploying some form of NLP. This includes the adoption and deployment of generative AI, as displayed by the most prominent examples such as OpenAI as well as Benevolent AI. The most prevalent combination of AI capabilities is the joint application of ML and NLP. It allows organizations to analyse vast amounts of data, identify patterns and make recommendations at high efficiency and thus low cost." (page 20)
📜 New AI Bill covering AI training transparency
➡️ US Representative Adam Schiff proposed a bill to require transparency from companies about their use of copyrighted material to train AI systems.
➡️ The proposed bill - the Generative AI Copyright Disclosure Act - would require prior notice to the Register of Copyrights when releasing a new generative AI system trained on copyrighted material.
➡️ According to Representative Schiff: “This is about respecting creativity in the age of AI and marrying technological progress with fairness."
➡️ If it becomes law, it could be an important step toward supporting artists' rights in the US.
📜 New AI Bill in Pennsylvania
➡️ A new AI bill was introduced in Pennsylvania covering AI disclosure and AI-generated child abuse material. Quote:
"A disclosure under this subclause must state that the content was generated using artificial intelligence, must be displayed in the first instance when the content is presented to the consumer, must be presented in a manner reasonably understandable and readily noticeable to the consumer and must be presented in the same medium as the content"
➡️ The bill was approved by the state House and will be sent to the state Senate for consideration.
📑 Excellent AI paper
➡️ The paper "An Elemental Ethics for Artificial Intelligence: Water as Resistance Within AI’s Value Chain" written by Sebastián Lehuedé is a must-read for everyone interested in AI ethics. Quotes:
"Studies looking at the relationship between AI and the environment, known as sustainable AI, also show relevant gaps. In this case, an excessive focus on the development of ‘green’ AI applications has tended to come at the expense of broader questions about the environmental impact of AI itself (Vaughan et al., 2023; Brevini, 2021; Van Wynsberghe, 202). In contrast, studies exploring the environmental footprint of AI infrastructure and its value chain have started to gain traction (e.g., Ligozat et al., 2022)." (page 5)
-
"For the communities living close to the Atacama Salt Flat, lithium mining equals water mining. This idea suggests that the AI industry is also a water industry considering the central role, and scarcity of water required to cool off data centres and obtain the minerals that make up AI devices. While AI companies are usually depicted as producers of code, algorithms and applications, an elemental approach incorporates the extraction of water and other basic components of life as crucial for AI companies’ operations. The fact that such an extraction is outsourced does not make these operations less important to their business models." (page 12)
-
"A focus on elemental relations can also expand approaches to AI sustainability. While quantitative measurements over the use of water and other elements can help provide a general picture, they do not specify how these harms manifest in specific settings. Situated, empirical and qualitative studies attending to the needs and visions of the communities and environments participating within AI value chain are required. Based on MOSACAT and the Council’s activism, an elemental approach can help understand how resource extraction can affect the unique relationship and dependencies that communities hold with their environments." (page 23)
➡️ Read the full paper here.
📚 Join our AI Book Club (915+ members)
We are currently reading “The Worlds I See” by Fei-Fei Li. We'll meet to discuss the book on May 16. Check out our booklist and join us!
🔎 Job board: Privacy & AI Governance
If you are looking for job opportunities in privacy, data protection, and AI governance, check out the links below (I have no connection to these organizations; please apply directly):
➡️ Product Privacy Policy Manager at Meta - London (UK): "We seek a highly motivated data protection and privacy policy professional to fill the role of Privacy policy manager for the Meta family of companies. The ideal candidate will have an excellent track record in privacy and data protection policy issues, and be comfortable working on product advising. Candidates should have relevant prior experience working..." Apply here.
➡️ Project Lead, AI Governance at BMW - Munich (Germany): "As a Project Lead AI Governance, you are responsible for developing policies and standards that establish the ethical, legal, and regulatory framework for the use of AI in our organization. This includes creating guidelines for data usage, data privacy, transparency, and fairness. You ensure that AI applications and projects comply with applicable legal..." Apply here.
➡️ Assistant Director, AI Governance at Principal - Hybrid - Iowa (US): "You'll be responsible for driving the build out and maintenance of our AI Governance program across the enterprise. This will include developing a consistent set of policies, standards, and practices to enable strategic and ethical use of artificial intelligence, data and analytics all while articulating the value of AI governance. You will create and maintain..." Apply here.
➡️ Associate Counsel, Privacy at Visa - Hybrid - California (US): "Visa’s growing Global Privacy Office is Visa's central touchpoint for global data strategy, privacy compliance, information governance, and legal aspects of information security. We engage with and provide privacy services to Visa business teams. We operate Visa's global privacy program, advising Information Security, HR, and other groups..." Apply here.
➡️ Data Governance & Privacy Specialist at Amazon - Various cities (US): "You will deep dive into privacy compliance issues across multiple business lines and work with internal stakeholders to ensure privacy requirements are promptly addressed. If you are interested in enabling exceptional privacy and data management standards for customers, energized by a fast-paced and evolving environment, can deal with..." Apply here.
➡️ Subscribe to our privacy and AI governance job boards to receive our weekly email alerts with the latest job opportunities.
🎓 Learn with peers, upskill, get a certificate
If you enjoy this newsletter, you can't miss my live online 4-week Bootcamps:
➡️ Emerging Challenges in Privacy, Tech & AI. Starts on May 7. Read more.
➡️ The EU AI Act. Starts on May 8. Read more.
🙏 Thank you for reading!
If you have comments on this week's edition, I'll be happy to hear them! Reply to this email (or use my personal page), and I'll get back to you soon.
Have a great day!
Luiza