17 Comments
Arthur Zangiev:

this is a very interesting read! maybe you can check out our concept of AI [re]Generation where we merge AI with natural systems like Mycelium to create digital nervous systems for the economy…

C.W.:

Thank you for talking about this.

Jaxon Coy:

Thanks for sharing this; it's a very good reference to think about.

Alex Papworth:

This is really insightful. As an AI business founder, I am putting ethical system design at the heart of how we operate. Our business works with parents, supporting them in their children's education. Our values are Autonomy, Safety, Clarity and Support. We are not looking to regulation to provide the boundaries and then working to those boundaries. We will set our own boundaries according to what we intend to offer, who we are working with, a clear-sighted view of the risks, and the value of Safety. We will also embed natural principles such as feedback, adaptation, development and growth, since AI risks and potential continually evolve, likely faster than regulation.

I would be interested to know what other AI business owners/founders are doing in this regard.

Izabela Lipinska:

I'm working on it. I have worldwide IP and prior art in ontologically adequate architecture, where regulating anthropomorphism is one of the priorities. I conduct advanced research in this area and design solutions that keep interaction and communication smooth while respecting humans' unique nature and… machine nature :)

Alex Papworth:

Thank you for sharing, Izabela :-)

Lu:

Such an insightful read!

The Bathroom Theorist:

this is a useful and timely overview. two points seem especially important.

first, whatever one thinks of china’s political system, this draft does something that many western frameworks still struggle with: it treats anthropomorphic ai as a distinct design pattern with distinct risks. defining the category, naming emotional manipulation and dependency as design concerns, and imposing concrete interface obligations (clear disclosure, exit pathways, usage reminders, guardian controls) moves the debate from abstract “ai ethics” to product architecture. that shift alone deserves attention.

second, the emphasis on provider responsibility across the full lifecycle (from training data to post-deployment monitoring) closes a gap that often exists between high-level principles and day-to-day product decisions. requiring firms to build in mental health safeguards and dependency detection is far more specific than most current eu or u.s. provisions.

at the same time, a broader question remains: detailed compliance regimes tend to raise entry costs and favor larger, well-resourced providers. in practice, that can lead to greater market concentration, even if the intention is user protection. it will be worth watching whether this framework fosters safer diversity in the ecosystem or consolidates power in a small number of actors.

overall, though, the draft shows that “regulation versus innovation” is a false dichotomy. the more relevant question is whether rules are concrete enough to shape product design without becoming so burdensome that they reduce competition or adaptability.

on that metric, this proposal is at least a serious attempt to engage with the real psychological and social risks of human-like ai.

Randy's Wild Ride:

It's obvious that people are abusing AI for criminal and other nefarious purposes. It's clear that we've moved into the realm where many people can be fooled by AI content, and that is a serious problem. With deepfakes that improve every day, awareness and training will do little to help people discern truth when the fakes are so good. Ethical people should always put an AI notice on content, and many do, but the unethical have no stake in ethics, so labeling won't work with them. So, what's to be done? Pandora's box has been opened, and putting AI back in the box is highly unlikely. What actions are then left to curb AI abuses?

It's interesting that you're citing Chinese initiatives to curb the problem. I'm not sure why you're looking to one of the most controlling authoritarian states on the planet as a source for inspiration to solve this very real problem. I'll just cut to the chase and go right to Article 7.

Article 7 is, well, very authoritarian Chinese. Item 7(i) is a catchall that could include anything the State deems unacceptable. What does "endangers national security" mean? The State can declare anything a national security issue. "Damages national honor and interests" is another open-ended item that covers anything the State deems against its interests. What does honor mean? Talk about subjective. "Undermines national unity" clearly gives the State the right to squash any political dissent. "Engages in illegal religious activities" is another heavy-handed State tool of suppression. And just what is China's policy regarding religion? "Spreads rumors to disrupt economic and social order" is yet one more apparatus of State control.

The first clause gives the State the right to squash, enforce, and punish anyone at whim. Citing this as a potential model for regulating and enforcing AI, I think, undercuts your argument. This is outright censorship with the full force of the State. Do you think these are good things? Are these the kinds of regulations you'd like to see?

7(ii) Well, "gambling-related"? What the heck is that? "Incites crime" is pretty open-ended and vague.

7(iii) "Insults or defames others" is totally open-ended. What does it mean to insult someone? Hurt their feelings? Make fun of the way they look or act? Political cartoons have a very long history of doing just that. Hogarth and Daumier and Goya would totally fall to that one, and all they had was a plate, stone, and printing press.

7(iv) This one sounds reasonable on its face, but again the open-endedness of "damage social relationships" is very broad.

7(v) "Encouraging, glorifying, or implying suicide or self-harm" — if they stopped here, that would sound very good, but they continue with "damaging users' personal dignity". That's pretty subjective and echoes what I said about (iii). "Verbal violence" is another subjective notion. Does this appeal to the "speech is violence" crowd that advocates censorship when someone says something they don't like? Do you believe in that concept?

7(vi) This one, compared to the rest, is somewhat reasonable and speaks to the scamming you mentioned.

7(vii) Okay, except just what does "sensitive information" mean? How broad is it?

7(viii) "Other circumstances that violate laws, administrative regulations and relevant national provisions." In case they can't make any of the other ones stick, this is a blanket clause that can cover anything and everything the State wants.

"This article heavily regulates AI anthropomorphism, ensuring it follows responsible AI and compliance standards and expressly limiting its potential applications. (If it were an EU AI Act provision, many in the U.S. would say that the EU had lost its mind)." "Heavily" barely does the regulations justice, because they say the State can basically shut down anything it doesn't like. No matter what someone says or does, it can be made to fit into one of these clauses.

Are you serious when you say it "follows responsible AI and compliance standards"? Responsible? Wow! Repressive is the word that more accurately describes Article 7. Yes, many in the US and Europe would think the EU had lost its mind if it floated such authoritarian language and regulations. This is crazy talk. This is classic Chinese Communist Party control language.

As I said up front, there are clearly problems with AI abuse. No question about that. But are you seriously saying you endorse this sort of enforcement language? Do you suggest that, to address and resolve these very real problems, we follow the lead of a draconian and highly repressive government that thinks nothing of jailing its people if they step out of line and say something the State doesn't like?

AI abuse is a real and serious problem. It will require real and serious solutions. Citing what the Chinese are proposing is beyond belief to me, living in a free society. If you think that's the way to fix the problem… wow, I'm speechless. To the rest of you out there, do you honestly think this is the direction we should go for AI regulation?

Kenneth E. Harrell:

But is it really all that bad?

Is Anthropomorphizing AI really all that bad?

https://kennetheharrell.substack.com/p/is-anthropomorphizing-ai-really-all

Kenneth E. Harrell:

Thank you for this paper; it's fascinating. I may have more to say after a few more readings.

Sharik Currimbhoy Ebrahim:

Love your article. I had written one about the Chinese language and the development of the brain (Chinese-language readers develop different parts of the brain than Latin-alphabet readers) and the parallels in AI. I will dig it up and would love your feedback.

Gustavo Muñoz:

People are not paying enough attention to all that China is offering; they stay deluded by the omnipresent anti-China propaganda.

Michael J. Goldrich:

China's regulatory framework treats AI as a tool that needs human oversight at scale.

The West debates autonomy while China is building accountability directly into deployment.

I explore how organizations can prepare for this shift: vivander.substack.com/p/siri-will-soon-know-you-better-than

Briar Harvey:

It certainly looks good. The problem is who China means when it talks about protected classes, considering it jails feminists and has no domestic violence shelters. Not to mention the 30 million unpartnered men.

China’s anthropomorphism problem is not the same as everyone else’s.

The Mighty Humanzee:

My question would be: are the CCP's laws for their own citizens, or do they apply in a manner that would protect, say, a German using a Chinese product in Germany? TikTok abroad was barely regulated while it was restricted in China itself.