Luiza's Newsletter
The Problem With "AI Fluency"

There is a growing tech trend in which questioning AI and recognizing that it's not the right tool for every task are no longer welcome | Edition #210

Luiza Jarovsky
Jun 09, 2025

👋 Hi, Luiza Jarovsky here. Welcome to our 210th edition, now reaching 63,500+ subscribers in 168 countries. To upskill and advance your career:

  • AI Governance Training: Apply for a discount here

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • Become a Subscriber: Read all my analyses


⛱️ Ready for Summer Training?

If you are looking to upskill and explore the legal and ethical challenges of AI and the EU AI Act in an in-depth and interactive way, I invite you to join the 22nd cohort of my 15-hour live online AI Governance Training, starting in mid-July (9 seats left).

Cohorts are limited to 30 participants, and over 1,200 professionals have already participated. Many have described the experience as transformative (testimonials here). This could be the career boost you need! [Apply for a discount here].

Join the Next Cohort


The Problem With "AI Fluency"

As the “AI-first” narrative gains traction in the tech industry, a post from Zapier's CEO describing how the company measures “AI fluency” went viral a few days ago.

In today's edition, I discuss Zapier's approach and the problem with the expectation of “AI fluency” spreading within tech companies and beyond: it can be harmful and could backfire, both legally and ethically.

-

In addition to the memos, leaked emails, and PR announcements showing how companies are prioritizing AI (Meta, Duolingo, Shopify, and Fiverr are recent examples; read my analysis), there are also worldwide “AI fluency” efforts aiming to incentivize employees to use AI no matter what.

If you search “AI fluency” on LinkedIn, you'll see a growing number of posts about the topic, as well as professionals who have added this skill or title to their profiles. There are also job openings that mention or require it, showing that the term is becoming normalized and an integral part of the current AI zeitgeist.

Before I continue, a reminder: learning about AI, upskilling, and integrating it professionally are extremely important. I have written about AI literacy multiple times in this newsletter, including what laws such as the EU AI Act say about it.

AI literacy, however, demands critical thinking, as well as ethical and legal awareness, including the ability to know when not to use AI. This is the opposite of what recent corporate approaches to “AI fluency” are promoting.

Let's take a look at Zapier.

A few days ago, Zapier's CEO posted a table showing how the company assesses AI fluency by role, mapping skills across four levels (unacceptable, capable, adoptive, and transformative).

In the post, he also shared AI-related questions Zapier has been asking during interviews and highlighted that requirements are role-specific and that “the bar will keep rising.”

Let me start by saying that when discussing “fluency,” the four categories necessarily involve a value judgment: “unacceptable” is worse than “capable,” which is worse than “adoptive,” which is worse than “transformative.” The optimal column is the last one (“transformative”).

With that in mind, pay attention to the "unacceptable" column.

Across different roles and in various contexts, the employee described in this column is someone who is generally skeptical of AI tools and avoids them. This is an employee who will likely be fired soon, or a job applicant who will not be hired as they do not fit Zapier's expectations.

Note that this doesn't necessarily mean the person doesn't understand AI or doesn't know how to use it.

In fact, this person might be consciously rejecting AI (or being critical of it) for a specific task because they know AI:

  • will lead to inaccurate outputs

  • will lead to low-quality outcomes

  • is proven not to be the best tool to accomplish that specific task

  • is legally risky for that specific task

  • might require additional safety or ethical checks

  • [or any other perfectly legitimate reason to reject AI deployment in a specific professional context]

Zapier and other companies spreading the AI fluency dogma don't want to hear about that and won't accept that AI might be rejected, even for a legitimate reason.

Critical thinking about AI, and the understanding that it's an automation tool that is not suitable for every task, is not welcome, even though a growing number of studies show its limitations (the latest one from Apple).

For Zapier and other companies promoting similar AI fluency approaches, AI is always good, and it must be accepted, implemented, and praised.

Zapier is a private company, and its management team is free to act as it wishes within legal boundaries. This type of corporate culture, however, can be harmful. Why?

This post is for paid subscribers
