AI Toys and the Domestication of Culture
OpenAI and Mattel have announced a partnership to develop AI-powered toys, giving us a glimpse into AI companies' strategies for the domestication of culture | Edition #212
👋 Hi, Luiza Jarovsky here. Welcome to our 212th edition, now reaching 64,100+ subscribers in 168 countries. To upskill and advance your career:
AI Governance Training: Apply for a discount here
Learning Center: Receive free AI governance resources
Job Board: Find open roles in AI governance and privacy
AI Book Club: Discover your next read and expand your knowledge
Become a Subscriber: Read all my analyses, no paywalls:
🏝️ Ready for Summer Training?
If you are looking to upskill and explore the legal and ethical challenges of AI and the EU AI Act in an in-depth and interactive way, I invite you to join the 22nd cohort of my 15-hour live online AI Governance Training, starting in mid-July (5 seats left).
Cohorts are limited to 30 people, and over 1,200 professionals have already participated. Many have described the experience as transformative. This could be the career boost you need! [Apply for a discounted seat here].
AI Toys and the Domestication of Culture
A few days ago, Mattel (the toymaker behind Barbie, Hot Wheels, and other popular brands) and OpenAI announced a partnership with the goal of developing “AI-powered products and experiences.”
In today's edition, I discuss what this partnership means for Mattel and OpenAI, and how it gives us a glimpse into AI companies’ domestication of culture strategy.
-
According to Mattel's press release:
“The agreement unites Mattel’s and OpenAI’s respective expertise to design, develop, and launch groundbreaking experiences for fans worldwide. By using OpenAI’s technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety.”
(Interestingly, OpenAI's press release does not mention the emphasis on privacy and safety; I wonder why.)
Before I continue, I would like to comment on Mattel's unfortunate word choice. It says it will bring “the magic of AI” to play experiences, as if AI were some new Disney movie or trending fiction series meant to entice a child's imagination.
No, AI is not magic. And there is nothing inherently childlike, playful, or developmental about it.
AI is a tool. An automation tool with varying levels of autonomy, whose performance depends directly on how it was trained, which continuously learns from its environment, and whose outputs can be highly unpredictable.
Why is Mattel aligning itself with one of the most powerful AI companies in the world?
Mattel wants to present itself as an “AI-first” company. Notice how Mattel is not announcing a specific toy with specific age-appropriate AI features. It is just telling the world that, even as a toymaker, it also follows the AI dogma and will build toys with AI.
Mattel and many other “AI-first” companies treat AI as an end, not a means: an inherently positive goal to be pursued internally and targeted by every employee, who should become “AI fluent.”
Now let's move to OpenAI.
-
For OpenAI, Mattel and AI toys are one more step in its strategy for the domestication of culture.
AI companies, including OpenAI, Google, and Meta, have understood that domesticating culture is essential to growing fast, steadily, and with loyal (often dependent) users. So they have been working hard at it over the past two and a half years.
Through hype, marketing, big announcements, partnerships, ubiquitous integrations, strong narratives, and viral trends, AI becomes normalized, desirable, and a precondition to fully enjoy society and the conveniences of digital life. Instead of a tool or a means, AI becomes the goal.
People start seeing AI functionalities everywhere they look, including their email, search engine, social networks, smartphones, computers, glasses, cars, appliances, and soon even toys. AI becomes part of life.
People are encouraged to use AI for simple tasks they once did using their own brains. Sometimes AI features are implemented by default, and there is little to no escape.
You open Gmail, and the email draft window says “Help me write,” prompting you to use the AI chatbot instead of writing yourself. You open the search engine, and the AI oracle outputs its detailed, monolithic answer at the top, which, despite known ‘hallucination’ rates, most people will accept without checking sources or questioning it.
Meta says that people should overcome the stigma and start ‘befriending’ AI chatbots. Forget about the root causes of the loneliness epidemic, or the risks of AI anthropomorphism (which have already led to deaths): AI is the utmost goal.
On social media, people post: “If you're not using AI, you are a loser,” even though recent studies have shown that AI is not suitable for every task.
The workplace gets transformed: companies proclaim themselves “AI-first” and require employees to be “AI fluent” (pay attention to how many don't talk about AI literacy or AI governance; words matter). “Show me how much AI you are using and prove that AI cannot replace you, or you are fired,” tech CEOs write in internal memos.
Thoughtful consideration, skepticism, or critical thinking about AI deployment is labeled unacceptable, as Zapier made clear.
The domestication of culture also has important legal implications: