The Rise of Technical AI Policies
In parallel with the regulatory debate, we should focus on a variety of technical AI policies to detect early misalignments and push AI development and deployment in the right direction | Edition #216
Hi everyone, Luiza Jarovsky here. Welcome to my newsletter's 216th edition, now reaching over 66,300 subscribers in 168 countries. It's great to have you on board! To upskill and advance your career:
AI Governance Training: Apply for a discount here
Learning Center: Receive free AI governance resources
Job Board: Find open roles in AI governance and privacy
AI Book Club: Discover your next read in AI and beyond
Become a Paid Subscriber: Never miss my full analyses.
Join the 23rd cohort in September
If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, I invite you to join the 23rd cohort of my 15-hour live online AI Governance Training, starting in September.
Cohorts are limited to 30 people, and over 1,200 professionals have already participated. Many have described the experience as transformative and an important step in their career growth. Apply for a discounted seat here.
The Rise of Technical AI Policies
Even though we often get lost in the AI regulation whirlwind (from the delayed Code of Practice for GPAI in the EU to the rejected AI moratorium in the U.S.), it is important to remember that legal measures are not the only way to protect people against AI harms and support fundamental rights.
In today's edition, I argue that in parallel with the regulatory debate, we should focus on a variety of technical AI policies, including interdisciplinary and direct interventions, to detect early misalignments and help push AI development and deployment in the right direction.
-
Many in the AI governance community overestimate the role of the law in establishing strong and effective AI governance frameworks.
Indeed, law is an essential coercive tool that can shape behaviors and expectations, penalize non-compliance, introduce principles and goals, and directly influence market practices.
However, the law can only go so far.
First, due to its formal and procedural nature, the law will always be slower than technological development, even when future-proofing mechanisms are in place.
Second, the law establishes a system of rules, provisions, principles, and standards that must be interpreted, applied, and enforced by people. It is not an exact science, and it is, by default, highly dependent on the actions of the humans behind it.
As a consequence, people, companies, and governments are constantly attempting to twist, bypass, manipulate, and exploit the legal system.
That absolutely does not mean that we should undervalue the law: it plays an essential social, economic, and political role. But that role is, at the same time, limited and imperfect.
In the context of strong AI governance frameworks, we must, however, acknowledge these limitations and focus on additional mechanisms, including technical ones, to protect people and shape the future of AI.
The field of AI copyright offers an interesting case study of what these technical AI policies could look like.
A few days ago, two AI copyright decisions in the U.S. sided with AI companies:
In the first one, the judge sided with Anthropic in a lawsuit filed by book authors, partially accepting the company's fair use claim for AI training (insofar as no pirated copies were involved).
In the second one, the judge accepted Meta's claim of fair use for AI training (even though the judge explicitly stated that, in most cases, training AI models on copyrighted works does infringe copyright).
This was a major blow for artists and the content creator community, who were hoping for legal acknowledgement that training AI without consent or compensation is wrongful and threatens their livelihoods.
As I mentioned earlier, the law is only part of the puzzle and one of the potential solutions. The battle is not lost.
For example, six days ago, internet infrastructure company Cloudflare announced new tools for content creators to control whether AI bots are allowed to access a website's content for AI training (for example, a publisher can choose to block AI bots only on the parts of their website that are monetized through ads).
Regardless of what the law says or how it is enforced in different parts of the world, there is a globally available technical mechanism that can help publishers say no to AI training, even if it is legally allowed.
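Under the hood, tools like Cloudflare's build on the long-standing robots.txt convention, which lets a website declare which crawlers may access which paths. As a minimal sketch (not Cloudflare's actual implementation), here is the kind of file a publisher might serve at example.com/robots.txt. The user-agent tokens are publicly documented AI crawlers and control tokens (GPTBot from OpenAI, ClaudeBot from Anthropic, CCBot from Common Crawl, Google-Extended from Google); the /premium/ path is a hypothetical example of a monetized section:

```
# Block AI training crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Google-Extended is a control token rather than a separate crawler:
# it tells Google not to use the content for AI training,
# without affecting ordinary Search indexing.
User-agent: Google-Extended
Disallow: /

# Alternative, more granular approach: block an AI bot only on
# a monetized section (/premium/ is a hypothetical path):
# User-agent: GPTBot
# Disallow: /premium/

# All other crawlers (e.g., regular search engines) remain allowed
User-agent: *
Allow: /
```

The caveat is that robots.txt is purely advisory: a crawler can simply ignore it. That is exactly where infrastructure-level tools add value, since a provider sitting in front of the website can enforce these preferences at the network edge, blocking or challenging bots that do not comply.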
Technical AI policies can take different forms: