🎯 From Responsible AI to AI Governance
AI Governance Professional Edition | Bonus Edition | #145
👋 Hi, Luiza Jarovsky here. Welcome to the 145th edition of this newsletter on AI policy, compliance & regulation, read by 38,000+ subscribers in 155+ countries.
📄 This is a bonus edition of the AI Governance Professional Edition, with my critical analyses of AI compliance & regulation topics. It's published weekly to paid subscribers only. I hope you enjoy reading it as much as I enjoy writing it.
🎯 From Responsible AI to AI Governance
As I've commented earlier in this newsletter, “responsible AI” has become a buzzword amid the current Generative AI hype.
A tech company that publicly announces, as part of its marketing strategy, that it wants to “win the AI race” will not be well received, as this will be seen as a form of disregard for fundamental rights.
In recent years, many tech companies have instead announced that they are “committed to using AI responsibly,” often accompanied by a list of principles, standards, an annual report, and their perspective on responsible AI. It has become part of their branding and marketing material—“who we are” as a company—and there has been little scrutiny of what these documents say or whether they reflect the reality of the company's products and services.
However, a responsible AI pledge without concrete action means nothing. It's the same as saying “We value your privacy” while engaging in practices that violate data protection law (something we have seen time and again). Responsible AI must evolve.
➡️ Responsible AI, laws, and rights
Let's begin with the law: it must also be respected when AI is involved. Even if a country does not yet have a comprehensive AI law, fields of law such as copyright, data protection, liability, and consumer protection apply to AI, too, and companies must comply.
Here lies an important weakness in most companies' responsible AI strategies, in my view: they lack legal substance. While companies talk about ethics and principles—such as fairness, reliability, safety, privacy, and inclusiveness—they rarely address the rights, obligations, and liabilities behind these principles, or how the company ensures that the law is respected, especially when new, unexpected risks might emerge.
It's not that principles and ethics are not important. They are. But they're the beginning of the conversation. For example, to know whether an organization respects privacy, I don't want to hear “We value your privacy.” It means nothing. I want to know, in practice, how they respect data protection laws and data subjects' rights. I want to see their data protection impact assessment or a realistic evaluation of what the data protection risks might be. If they're covered by the GDPR, I want to know how they're complying with it.
In recent years, privacy practices have faced increasing scrutiny. Responsible AI should follow the same pattern: if a company claims to use or develop AI responsibly, that claim should face proper scrutiny, not rest on buzzwords.
According to the International Organization for Standardization (ISO), responsible AI is:
“the practice of developing and using AI systems in a way that benefits society while minimizing the risk of negative consequences. It's about creating AI technologies that not only advance our capabilities, but also address ethical concerns – particularly with regard to bias, transparency and privacy. This includes tackling issues such as the misuse of personal data, biased algorithms, and the potential for AI to perpetuate or exacerbate existing inequalities. The goal is to build trustworthy AI systems that are, all at once, reliable, fair and aligned with human values.”
Aren't privacy, consumer protection, and copyright essential when attempting to minimize the risk of negative consequences? Why don't statements about responsible AI cover these topics in depth, allowing the public to see how these rights are being protected in practice?
As a lawyer, I can see why it might be considered “wiser” to stay at the “ethical” level and avoid detailed discussions of impact assessments and the concrete legal risks that were taken into consideration. Such disclosures would alert the public, advocates, and regulators, potentially leading to unintended consequences for the company. By focusing on principles, ethics, and technical measures, it's easier to avoid strict scrutiny (and fines).
As a consequence, in my view, many of the existing responsible AI statements and reports published by tech companies—which are often large-scale AI providers and deployers—seem overly vague and almost impossible to scrutinize. Most importantly, they obscure meaningful legal risks from the public; because these risks are never surfaced, they remain unnoticed.
Before I discuss potential solutions, I would like to illustrate the problem with a few examples:
1️⃣ Microsoft
Microsoft has a dedicated responsible AI page, with various subsections, including an overview, principles and approach, and tools and practices. It also recently published the 2024 Responsible AI Transparency Report.
These pages don't cover data protection, manipulation, hallucination, or consumer protection issues. They also fail to mention the myriad AI copyright lawsuits targeting Microsoft, or what is being done to address the copyright infringement the company is accused of (together with OpenAI).
Isn't copyright important if they want to claim that their AI benefits society? Do they care about what artists and authors are saying? Then why don't they explicitly address it? What are the practical mechanisms being developed in that context? Shouldn't Microsoft discuss its partnership with OpenAI more openly, including how they partner to mitigate risks?
2️⃣ Google
Google has this page covering responsible AI practices, which mostly focuses on their view and how others should approach responsible AI.
This other page, under “responsibility,” presents sections on AI principles, responsible AI practices, guiding responsible AI development, AI governance & operations, social good, and policy.
After reviewing each of these sections individually, my comment is similar to the one above regarding Microsoft: I found no coverage of legal issues, only case studies, research, principles, and objectives. It's carefully crafted with beautiful graphics, but it looks more like marketing material—advertising innovation and Google's vision for the future—than an account of efforts to deal with known legal risks, impacts on rights, and so on.
As an example of the lack of legal focus, Google recently faced a fiasco involving historically inaccurate images generated by Gemini. It responded with a very brief statement on X/Twitter.
That incident was serious and legally relevant. If the root of the problem is not solved, the consequences could be catastrophic, especially as Google's products are used by hundreds of millions of people daily. It could negatively impact rights and implicate legal fields such as constitutional law and consumer law, as well as fundamental rights more broadly. However, I found nothing about it on Google's responsible AI pages. There is nothing concrete about measures to mitigate the risks observed by thousands of users. Why not? Why don't we demand accountability?
Google has many products that rely on AI, each used daily by millions, and sometimes billions, of users. I want to know, product by product, what is being done to respect people's rights. If a responsible AI page does not cover that, then what is it really for?
3️⃣ Meta
On its responsible AI page, Meta writes:
“Our Responsible AI efforts are propelled by our mission to help ensure that AI at Meta benefits people and society. Through regular collaboration with subject matter experts, policy stakeholders and people with lived experiences, we're continuously building and testing approaches to help ensure our machine learning (ML) systems are designed and used responsibly.”
This is the typical statement most tech companies have on their responsible AI page. Now, I would like to compare it with what Mark Zuckerberg recently said during Meta's Q4-2023 earnings call, held on February 1, 2024 (public transcript):
“When people think about data, they typically think about the corpus that you might use to train a model up front. On Facebook and Instagram there are hundreds of billions of publicly shared images and tens of billions of public videos, which we estimate is greater than the Common Crawl dataset and people share large numbers of public text posts in comments across our services as well. But even more important than the upfront training corpus is the ability to establish the right feedback loops with hundreds of millions of people interacting with AI services across our products. And this feedback is a big part of how we've improved our AI systems so quickly with Reels and ads, especially over the last couple of years when we had to rearchitect it around new rules.”
So Mark was highlighting how great Meta's dataset isâhundreds of billions of publicly shared images, tens of billions of public videos, and large numbers of public text posts in commentsâand bragging about their advantages during the current AI race.
The “detail” he forgot to mention is that people shared this data on Facebook and Instagram to connect with friends and colleagues, not to hand it to Meta for free to train its AI systems. Also, if people had known that Meta would use their posts and comments to train AI, they might have been more careful about what they posted, especially given the risk of personal data leakage or of AI hallucinations misinterpreting their sometimes deeply personal and intimate posts.
But Meta didn't care and kept collecting personal data to train its AI anyway. It still does. If you have been reading this newsletter in recent months, you know that Meta has been legally challenged in both the EU and Brazil over its AI practices.
I would say that if Meta's mission is “to help ensure that AI at Meta benefits people and society,” it should have cared about people's expectations and asked for their consent before collecting their data to train AI. Many people would have refused. But Meta didn't ask and collected the data anyway—in the same way it collects personal data for behavioral advertising without consent.
There is no scrutiny of Meta's responsible AI statements (and no legally required format for them). We just sit back and let Meta—or any other company—publish whatever beautiful responsible AI statement it wants.
So, where do we go from here?
➡️ Responsible AI must evolve
What is my goal with the analysis above? Responsible AI must evolve to address the law, legal risks, and existing rights. Companies must take full accountability for the products they develop and deploy, facing scrutiny for every statement they add to their responsible AI policy.
I don't want to know only the principles that the company follows, its values, and its innovation goals. I want to know, for example:
⏵ How its products impact real people today, and the risks identified through evaluations and impact assessments;
⏵ What third parties and expert groups were involved in its product assessments, and what standards and safety thresholds it has met;
⏵ How it is concretely addressing issues the public has already faced, such as manipulation, psychological harm, reputational harm, copyright infringement, bias, discrimination, and so on.
âĄď¸ From Responsible AI to AI Governance
Some will say that a responsible AI page or report is not the place for that. But what is the utility of a responsible AI document or page if it does not cover real cases and concrete examples of real-world harm involving the company's products and services? If the company has no accountability for what it says and is not required to take any preventive or remedial action?
For many companies, responsible AI has become a buzzword and a marketing tool. This must change immediately, especially given the accelerating adoption of AI worldwide.
To increase companies' accountability for their AI practices, I propose moving from responsible AI to AI governance. Using Credo AI's definition, AI governance is:
"the set of processes, policies, and tools that bring together diverse stakeholders across data science, engineering, compliance, legal, and business teams to ensure that AI systems are built, deployed, used, and managed to maximize benefits and prevent harm. AI Governance allows organizations to align their AI systems with business, legal, and ethical requirements throughout every stage of the ML lifecycleâ
AI governance brings a more comprehensive, professional, and accountable approach to AI development and deployment. We don't need buzzwords; we need full accountability for risks and harms, including from a legal standpoint.
➡️ AI governance policies
Most websites today have a privacy policy: a somewhat structured document describing their privacy practices and how they comply with data protection obligations in a transparent, accessible, and clear manner. The next step in AI governance should be the publication of AI governance policies.
These documents should contain a structured description of the company's AI governance practices for each AI system or general-purpose AI model it makes available to the public.
The AI governance policy should identify the rights that might be impacted by the use or deployment of the AI system or general-purpose AI model, as well as the concrete measures being implemented to reduce the risk of harm. It should draw on concrete examples from the company's products and their testing phase to remain focused on real risks (not abstractions, principles, and goals that read like marketing or public relations material).
In these AI governance policies, the public, including advocates, experts, lawmakers, and policymakers, should be able to find any relevant information regarding risks and harm from a particular AI system or general-purpose AI model. Also, as we have been fighting for in the context of privacy policies, we should demand that AI governance policies be easily accessible, not unnecessarily long, and written in plain, straightforward language. We shouldn't have to navigate a complex interface to find them on a website.
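To make the proposal more tangible, here is a minimal sketch of how one entry of such an AI governance policy could be represented in a structured, even machine-readable, form (in Python, purely for illustration). All field names (affected_right, mitigation_measures, residual_risk, and so on) and the example values are hypothetical assumptions of mine, not an existing legal template or standard.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: these field names are assumptions,
# not an existing legal or technical standard.

@dataclass
class RiskEntry:
    affected_right: str                 # e.g., "data protection", "non-discrimination"
    observed_or_foreseen_harm: str      # concrete harm, ideally from real incidents or testing
    mitigation_measures: List[str]      # what the company is actually doing about it
    residual_risk: str                  # e.g., "low", "medium", "high"

@dataclass
class AIGovernancePolicy:
    system_name: str                    # the AI system or general-purpose AI model covered
    provider: str
    applicable_laws: List[str]          # e.g., GDPR, copyright law, consumer protection law
    impact_assessments: List[str]       # references to DPIAs or other published assessments
    third_party_reviewers: List[str]    # external experts or audit groups involved
    risks: List[RiskEntry] = field(default_factory=list)

# Hypothetical example entry for a general-purpose AI model
policy = AIGovernancePolicy(
    system_name="ExampleGPT",
    provider="Example Corp",
    applicable_laws=["GDPR", "copyright law", "consumer protection law"],
    impact_assessments=["DPIA-2024-03 (summary published)"],
    third_party_reviewers=["Independent safety audit group (hypothetical)"],
    risks=[
        RiskEntry(
            affected_right="data protection",
            observed_or_foreseen_harm="personal data surfacing in model outputs",
            mitigation_measures=["training-data filtering", "output redaction tests"],
            residual_risk="medium",
        )
    ],
)
print(policy.system_name, len(policy.risks), "documented risk(s)")
```

A structured format along these lines would make it easier to compare policies across companies, product by product, and to check whether the stated mitigations match real-world incidents, in the same way that structured privacy notices are easier to scrutinize than free-form marketing text.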
It's time for responsible AI to evolve from empty pledges to legal compliance.
Hopefully, transitioning from responsible AI to AI governance will bring more legal accountability to the field of AI, helping to hold companies liable for the risks and harms they cause to individuals and society and reminding them of their obligation to respect the law.
🙏 Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
AI is more than just hype—it must be properly governed. If you found this edition valuable, consider sharing it with friends and colleagues to help spread awareness about AI policy, compliance, and regulation. Thank you!
Have a great day.
All the best, Luiza