📜 Principle-Based AI Governance
AI Governance Professional Edition | Paid-Subscriber Only | #155
👋 Hi, Luiza Jarovsky here. Welcome to the 155th edition of this newsletter on AI policy, compliance & regulation, read by 42,400+ subscribers in 160+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 This is an exclusive AI Governance Professional Edition featuring my in-depth analyses of AI compliance and regulation topics, which you won't find anywhere else. It's an excellent way to stay ahead in the fast-paced field of AI governance.
💼 Level up your career! This January, join me for the 16th cohort of our acclaimed AI Governance Training (8 live lessons; 12 hours total). Over 1,000 professionals have already benefited from our programs—don’t miss this opportunity. Students, NGO members, and professionals in career transition can request a discount.
📜 Principle-Based AI Governance
As I mentioned earlier this week, the Brazilian Senate, the upper house of the world's 7th most populous country, voted on and approved the proposed AI bill two days ago. The bill now awaits approval from the Chamber of Deputies before it becomes law.
Today, I would like to discuss interesting aspects of the proposed law, especially its core values, principles, and affected persons’ rights. I'll focus on how Brazil's principle-based approach could offer important insights to lawmakers, policymakers, and AI governance professionals worldwide.
1️⃣ Modeled after the EU AI Act
Before I start, I would like to highlight how the proposed law in Brazil embraces many of the features of the EU AI Act, such as the risk-based approach, the definition of AI, the concept of regulatory sandboxes, aspects of the governance system, and more.
The Brussels effect—coined by Prof. Anu Bradford—was expected to occur, at least to some extent, with the EU AI Act, as various countries drafting their AI laws are looking to the EU as a model example. However, importing concepts from another country or region must be done carefully, as it could lead to unintended consequences. In Brazil's case, for instance:
Some provisions are almost literal translations of the EU AI Act; however, they naturally lack the support of the broader EU legal framework behind them. This is a missed opportunity to craft an innovative national approach that serves the local legal system better than the exact wording chosen by the EU (which is sometimes confusing, incomplete, or flawed);
Some provisions, although inspired by the EU AI Act, adopt it only partially, leaving gaps that could prove challenging. One example is the list of definitions established in Article 4 of Brazil's proposed bill: it is very short and leaves out important concepts that the EU AI Act clarifies. This lack of definitions, and of clarity about what each AI-related concept means in practice, could lead to enforcement challenges.
On the positive side, Brazil did a great job explicitly establishing a comprehensive framework of foundational values, core principles, and affected persons' rights, which we don't see in the EU AI Act. This principle-based regulatory approach could prove beneficial and effective in regulating AI, and lawmakers, policymakers, and AI governance professionals worldwide may want to take note.
An additional positive aspect of the proposed Brazilian bill is that its provisions are written in a much more succinct and organized manner than those in the EU AI Act. Excessive complexity and verbosity make provisions less effective, both because they are harder to understand and because they are easier for malicious parties to exploit.
Below, I delve into the principle-based aspects and the rights framework proposed by the Brazilian bill, highlighting their potential significance for AI governance professionals worldwide.
2️⃣ Core Values
The proposed AI bill establishes in its second article a list of core values that are the foundation of AI development, implementation, and use in Brazil. These are the core values:
“human centrality;
respect for human rights and democratic values;
the free development of personality;
environmental protection and sustainable development;
equality, non-discrimination, plurality and respect for labor rights;
technological development and innovation;
free initiative, free competition and consumer protection;
privacy, data protection and informative self-determination;
the promotion of research and development with the aim of stimulating innovation in the productive sectors and in the public sector;
access to information and education, and awareness of artificial intelligence systems and their applications.”
╰┈➤ Why it matters:
Establishing a clear list of core values helps legal professionals, practitioners, and society better understand how the legal system will approach AI development. It also helps counter a growing tendency toward ‘AI exceptionalism,’ where legal experts and tech professionals argue that “AI is different” and that one or more existing legal frameworks do not apply when AI models and systems are involved. We've observed this particularly in the fields of data protection and copyright, as I have been discussing in this newsletter.
For AI governance professionals drafting a company's internal AI policies, for example, a good starting point is to examine the company's core values and why it decided to develop or deploy AI, as well as any other equally important values and policies it must follow.
The closest equivalent provision in the EU AI Act is its first article (subject matter), but it is not as detailed and comprehensive as the Brazilian article above.
3️⃣ Core Principles
The third article of Brazil's proposed bill states that the development, implementation, and use of AI must observe good faith as well as the principles below:
“inclusive growth, sustainable development and well-being;
self-determination and freedom of decision and choice;
human participation in the artificial intelligence cycle and effective human supervision;
non-discrimination;
justice, equity and inclusion;
transparency, explainability, intelligibility and auditability;
reliability and robustness of artificial intelligence systems and information security;
due process, contestability and adversarial proceedings;
traceability of decisions during the life cycle of artificial intelligence systems as a means of accountability and assignment of responsibilities to a natural or legal person;
accountability, liability and full compensation for damages;
prevention, precaution and mitigation of systemic risks arising from intentional or unintentional uses and unforeseen effects of artificial intelligence systems; and
non-maleficence and proportionality between the methods employed and the determined and legitimate purposes of the artificial intelligence systems.”
╰┈➤ Why it matters:
The list of principles above reminds me of Article 5 of the EU General Data Protection Regulation (GDPR), which similarly outlines key principles that should be respected when processing personal data. Anyone in the data protection field knows how important Article 5 is and how influential these principles have been since the GDPR became enforceable.
Principles shape how a law will be applied. They offer interpretative guidelines for applying specific provisions, help resolve interpretative disputes, and can change the outcome of data protection enforcement. These principles are so important that national EU data protection authorities and the European Data Protection Board (EDPB) have issued numerous guidelines clarifying how to interpret and apply them in practice.
Going back to AI regulation: the EU AI Act, for example, does not contain a full list of principles. One could say that Article 1 of the EU AI Act, when it states that “The purpose of this Regulation is (…) to promote the uptake of human-centric and trustworthy AI,” is actually reaffirming the principles behind human-centric and trustworthy AI. But what are these principles? What is the official source from which to learn about them?
Specific provisions also cover some of the principles above; examples are human oversight (Article 14) and the various transparency obligations introduced by the EU AI Act. However, because these provisions are specific, they cannot serve as general guidelines for interpretation and enforcement the way principles can.
Also, various recitals in the EU AI Act refer to ethical and legal principles; however, recitals do not have the same legal status as the operative provisions. No wonder the EU lawmaker added a provision to the GDPR (Article 5) listing the principles that inform EU data protection law: the provision makes them enforceable, and a company can be fined for not complying with them.
Brazil's proposed AI bill includes a broad and comprehensive list of principles, and, technically, any company directly or indirectly infringing them can be fined. The principles might also affect liability: if the bill's liability rules are tied to its main provisions, direct or indirect non-compliance with the list of principles might also give rise to liability.