The "AI-First" Trend Is a Trap

AI-first strategies are presented as ‘productivity boosters,’ but often pose serious risks for companies, employees, and the public | Edition #209

Luiza Jarovsky
Jun 05, 2025
∙ Paid


👋 Hi, Luiza Jarovsky here. Welcome to our 209th edition, now reaching 63,200+ subscribers in 168 countries. To upskill and advance your career:

  • AI Governance Training: Apply for a discount here

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • Become a Subscriber: Read all my analyses


👉 A special thanks to Didomi, this edition's sponsor:

Have you ever heard of server-side tracking? It boosts site performance while improving data governance and ensuring compliance. This privacy-first approach gives you full control over data collection and consent management. Explore server-side tagging with Addingwell by Didomi.


The "AI-First" Trend Is a Trap

In the past few weeks, numerous companies, including Meta, Fiverr, Duolingo, and Shopify, have announced their “AI-first” approaches, which often involve pressuring employees to increase their AI usage and replacing entire teams with AI systems.

In today's edition, I argue that AI-first strategies, despite being presented as ‘productivity boosters’ and ‘cost-saving methods,’ often pose serious risks for companies, employees, and the public.

Most tech companies ignore this, but any AI deployment or integration should involve a prior assessment of potential downsides and risks from both legal and ethical perspectives.

-

Let's take a look at some recent “AI-first” announcements in the tech industry:

  • The CEO of Shopify posted a memo that stated, “using AI effectively is now a fundamental expectation of everyone at Shopify,” and “we will add AI usage questions to our performance and peer review questionnaire.”

  • The CEO of Fiverr sent an email to employees saying, “AI is coming for your jobs” and “get involved in making the organization more efficient using AI tools and technologies.”

  • The CEO of Duolingo announced that the company would “gradually stop using contractors to do work that AI can handle,” and it recently released 148 AI-generated courses.

  • Meta plans to replace humans with AI for assessing privacy and societal risks and to fully automate ad creation.

In AI-first announcements, companies usually stress the inevitability of AI disruption and the urgent need to prioritize AI comprehensively, as a tool, a process, and a goal, throughout the company's activities, or face the perils of obsolescence. They urge (and sometimes threaten) employees to get on board quickly or lose their jobs.

AI as a techno-social wave is definitely real, and it is already impacting various industries, technically, ethically, and legally. People's lives are also being affected, both personally and professionally, and many are losing their jobs or being forced to reskill.

However, it's also important to remember that, from legal and ethical perspectives, AI systems and their applications are often considered risky products; it is no coincidence that countries around the world are currently discussing their national approaches to AI governance and regulation.

Rushing, pushing, and urging ubiquitous, unconstrained, and rapid adoption, as the AI-first trend does, is not a smart idea, and the consequences might not be worth it. Why?

This post is for paid subscribers
