12 Comments
Laura Rose

Thank you! As a professional writer and creative director who became the victim of AI-mania, I have been shouting this to my fellow creatives. Thanks for the wonderful work you do.

Adam Brinegar

I like the idea. Would be interesting to operationalize it (put numbers around it). For example, what percent of creatives and lawyers are below the mediocrity line? Probably not many (yet). How good are executives at understanding the mediocrity line (how often are they duped by the appearance of mediocre work because it helps the bottom line)? I would say more.

Sue

This is an excellent post, and I appreciate it very much. I intend to share your link with others in my profession who would benefit from your perspectives. Thank you. Hoping to see many sign up for your paid version. I have found it very worthwhile.

Karen Doore

Excellent article! As an interdisciplinary creative professional and scholar-activist with diverse experience across information technology domains, your assessment of the impact of AI resonates. When I was an educator, I designed projects for creatives with the goal of inspiring students to become deeply engaged in learning as a process of expressing their inner visions in creative ways, which also minimized incentives for cheating. I aspired to be a role model for learning as a lifelong process: students could pass the course by demonstrating basic understanding through a scaffolded creative process, yet many created amazing projects because they fell in love with the process and the emergence of what they were making, and they ended up with portfolio pieces they could discuss in interviews.

AI used as a filter in HR at tech companies rewards exam-performance capacities rather than ethically grounded creative problem solving, which requires fluid counterfactual analysis and the weaving of information using divergent and convergent skills. AI for HR mirrors the exam-based proficiencies that align with the status quo and reward shallow learning. So there is a problem with the 'factory model' of education, a historical pattern in some STEM domains.

Creative and contemplative arts practices support learning to pause, reflect, and learn from challenges; that is the path to lifelong learning from experience, the dynamic adaptive intelligence of living systems. Learning to use AI effectively as a creative takes attention, reflection, discernment, ethics, and a commitment to engage with kindness as humanity adapts to these exponential changes in our ecosystems. Keep up the good work.

JS036215

The difference between corporate automation and cost-cutting lingo (for example, right-sizing, streamlining, upskilling, broad-banding, off-shoring) and what it implies in practice (shifting work to consumers, using cheaper and Taylorized labor, using automation in conjunction with less-skilled labor, intensifying work for core staff using automation, transferring knowledge from people to knowledge bases, making individual people more replaceable) is important to consider when you think about employees demonstrating personal excellence.

Based on past automation trends, down-skilling is the norm with automation and Taylorization of work. A much smaller group of people are demonstrably "excellent" and those people rely on automation systems to allow them to manage the intensified workload they carry. In between are contractors, and their knowledge and work output will be captured and automated ever more voraciously, making their work less valuable over time.

For a recent example, Anthropic has changed its terms to allow Claude to train on user content by the end of September 2025. Power users supplying Claude with new information or prompted behaviors will find aspects of their own work becoming part of Claude's starting repertoire. Users are now paying Anthropic to use them as contractors.

The original artisan-vs-factory debate may also have involved people crying "slop," but the important topics of debate were unemployment, down-skilling, and the cheapening of individuals' labor. It's not that factories produced mediocre rather than artisan-quality work; it's that factories put artisans out of work. You could hire ten low-skill (less trained and practiced) people to do cheaply what one very skilled, practiced, and expensive artisan could do on her own. The factory product might be lower quality, but it was also cheaper to produce and cheaper for consumers.

The dynamic of shifting work onto consumers of products and services is also important to consider. If AI enables outcomes that are mediocre in general but serviceable when people tolerate them (the way we tolerate automated phone systems, an inferior experience compared to conversation with a real person), then companies successfully shift the cost of their products or services onto consumers through the use of mediocre AI systems.

Are highly skilled tech support reps in demand? Their jobs are vulnerable to automation, so even core staff can slip to contractor status and then to some down-skilled, low-paying job. They may turn to gig work, multiple jobs, and working extra hours. Wasn't it Marc Benioff who was recently bragging about automating thousands of customer support jobs? Part of that shift involved changing the expectations of people using the automated system, as well as training them (directly or indirectly) to use it well.

So when you write that AI are mediocre and excellence will remain in demand, there is some nuance there. AI systems (and robotics) can make technological unemployment ever more of a thing as they become cheaper or better designed. They can replace the excellent and the novice alike.

You mentioned "halucinations". Well, at the moment, there is a quirk to LLM AI as automation systems: current LLM systems do not produce deterministic outputs and they do not always tell the truth as they understand it.

In fact, it is still a question whether their epistemology is robust enough to allow them to provisionally distinguish truth from falsehood. Nevertheless, it is well documented that in some circumstances they do what they consider to be lying (or a reshuffling of their priorities). Therefore, as a front end to a knowledge base, they are potentially maladapted. As agentic systems, I suspect they will be unpredictable. Their value as automation systems is limited for those reasons.

However, if those flaws are correctable, they will continue to drive technological unemployment. Certainly corporations will make an effort in that direction either way because corporations externalize costs and automation is a way to do that.

LS

Great article, Luiza! You make a lot of excellent points. I have been saying for a while that in a world of AI slop, the value of original, top-quality work will grow. The question remains: who will be able to pay for it, and how big will that group be, given current economic conditions and prevailing trajectories?

LS

You have presented a fair and straightforward analysis of what is likely to happen within the job market for creatives. My concern is that without sufficient regulation or accountability, AI manufacturers will continue to simply rip off high-quality works as soon as they are published (in any forum), and perhaps even before they are published, leaving even top-quality talent struggling.

Bottom line, AI companies have no right to essentially privatize the public domain by synthesizing and mass producing it for private profit, and unless and until we hold that line, we will continue to be cannibalized by operators who wish to privatize all of human knowledge.

For societies to grow and flourish, their people must be educated and have free access to knowledge and information from all sources; information that is not curated or censored or engineered, and certainly not stolen. Even the framers of the Constitution knew some 250 years ago that the protection of the intellectual property rights of individuals was essential to a successful society, but we seem to have forgotten these concepts.

Bruce Cohen

“AI companies have no right to essentially privatize the public domain”

Of course they don’t have the right, but greedy sociopaths like them have been doing exactly that in anglophone countries since the Enclosure Acts, with the willing help of the legal system. And they’re not going to stop unless a lot of people rise up and tell them to.

Johannes Stockburger

Hi Luiza,

I like your line of thought in this post. I want to add that there is also excellence and mediocrity in applying AI and in designing systems that employ it.

Building systems that interact with humans kindly and with appreciation is only possible for people with a deep understanding of life and human nature.

But those qualities are essential for successful automation systems catering to sophisticated customers.

Bruce Cohen

The mediocrity line may turn out to be a fairly deep gray region in which many excellent creators and professionals will be ignored by managers (ever the deciding population) who care only about cost and schedule, while many mediocre candidates will be accepted for the same reasons, or simply because they have a good line of patter.

Johannes Stockburger

There is a difference between excellence and recognition.

r…

Getting fast mediocre output just enables offshoring white-collar work. The only people above your mediocrity line are managers. Ultimately that's who benefits: the PMC.

This could just as well be an ad for getting your MBA online.
