China's Approach to AI Anthropomorphism
China's proposed AI law acknowledges AI-related human vulnerabilities and establishes contextual technical measures to prevent AI harms | Edition #264
This week's free edition is sponsored by AgentCloak:
Compliance teams are now requiring AI systems to operate with only the essential data, a priority in Europe under the EU AI Act. AgentCloak seamlessly cloaks and uncloaks sensitive data between AI clients and servers to ensure that AI systems only access the minimum amount of data they need to operate. Discover more at agentcloak.ai
China’s Approach to AI Anthropomorphism
China has recently launched a public consultation on its proposed new law on AI anthropomorphism, titled “Interim Measures for the Administration of Humanized Interactive Services Based on AI.”
Most people in Europe and the United States, including lawmakers and policymakers, do not seem to have paid much attention.
I have also not seen public debates or comparisons of the provisions of the proposed Chinese law with those of the EU AI Act or U.S. state laws that cover the topic.
The provisions of China’s proposed law on AI anthropomorphism, however, deserve attention because they:
Dispel the false notion that China does not regulate AI, or that the only way to be a competitive player in the AI race is through radical deregulation or inattention to AI harms;
Offer a real-world example of a legal framework that acknowledges AI-related human vulnerabilities and proposes contextual technical measures to prevent AI-anthropomorphism-related harm.

Let us take a look at some of the proposed articles:
Article 2 defines “anthropomorphic interactive services”:
“This regulation applies to products or services that utilize AI technology to provide the public within the territory of the People's Republic of China with simulated human personality traits, thinking patterns, and communication styles, and engage in emotional interaction with humans through text, images, audio, video, etc.”
Defining AI anthropomorphism is an important step toward increasing awareness and regulating it more effectively. By comparison, the EU AI Act does not define anthropomorphism and has only a weak transparency obligation regarding it.
Article 3 announces the principle of “healthy development and governance”:
“The State adheres to the principle of combining healthy development with governance according to law, encourages the innovative development of anthropomorphic interactive services, and implements inclusive and prudent, classified and graded supervision of anthropomorphic interactive services to prevent abuse and loss of control.”
Innovation is encouraged, but in an inclusive, prudent, and supervised way.
Article 7 prohibits anthropomorphic AI services from:
(i) Generating or disseminating content that endangers national security, damages national honor and interests, undermines national unity, engages in illegal religious activities, or spreads rumors to disrupt economic and social order;
(ii) Generating, disseminating, or promoting content that is obscene, gambling-related, or violent, or that incites crime;
(iii) Generating or disseminating content that insults or defames others, infringing upon their legitimate rights and interests;
(iv) Providing false promises that seriously affect user behavior and services that damage social relationships;
(v) Damaging users’ physical health by encouraging, glorifying, or implying suicide or self-harm, or damaging users’ personal dignity and mental health through verbal violence or emotional manipulation;
(vi) Using methods such as algorithmic manipulation, information misleading, and setting emotional traps to induce users to make unreasonable decisions;
(vii) Inducing or obtaining classified or sensitive information;
(viii) Other circumstances that violate laws, administrative regulations and relevant national provisions.
This article heavily regulates AI anthropomorphism, ensuring it follows responsible AI and compliance standards and expressly limiting its potential applications. (If this were an EU AI Act provision, many in the U.S. would say that the EU had lost its mind.)
Articles 8 and 9 focus on the AI provider's responsibility for the security of their anthropomorphic AI services throughout the service lifecycle, from design and development to operation and oversight:
"Providers shall fulfill their primary responsibility for the security of anthropomorphic interactive services, establish and improve management systems for algorithm mechanism review, scientific and technological ethics review, information release review, network security, data security, personal information protection, anti-telecom and network fraud, major risk contingency plans, and emergency response, have secure and controllable technical safeguards, and be equipped with content management technologies and personnel commensurate with the product scale, business direction, and user group.”
-
“Providers shall fulfill their security responsibilities throughout the entire lifecycle of the anthropomorphic interactive service, clearly define the security requirements for each stage such as design, operation, upgrade, and termination of the service, ensure that security measures are designed and used synchronously with service functions, improve the level of inherent security, strengthen security monitoring and risk assessment during the operation phase, promptly identify and correct system deviations and handle security issues, and retain network logs in accordance with the law.”
-
“Providers should possess safety capabilities such as mental health protection, emotional boundary guidance, and dependency risk warning, and should not use replacing social interaction, controlling users’ psychology, or inducing addiction as design goals.”
The proposed AI law makes clear that AI providers are primarily responsible for the security and safety of anthropomorphic AI systems, including with respect to mental health and emotional manipulation.
Articles 11 and 12 cover minors and the elderly:
“Providers shall establish a minor mode and provide users with personalized safety settings options such as switching to minor mode, regular real-time reminders, and usage time limits.
When providing emotional companionship services to minors, providers should obtain the explicit consent of the guardians; they should also provide guardian control functions, allowing guardians to receive real-time safety risk alerts, view summary information on minors’ use of the services, and configure settings such as blocking specific roles, limiting usage time, and preventing recharge and consumption. (…)”
-
“Providers shall guide elderly people to set up emergency contact persons for their services. If any elderly person is found to be in danger of losing their life, health or property during the use of the service, the provider shall promptly notify the emergency contact person and provide social and psychological assistance or emergency relief channels.
Providers are prohibited from providing services that simulate the relatives or specific relationships of elderly users.”
The law acknowledges that minors and the elderly are among the most vulnerable groups affected by AI anthropomorphism and establishes concrete, vulnerability-specific measures to protect these groups.
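To make these obligations more concrete, here is a minimal sketch (in Python) of the kind of safety-settings structure a provider might expose to comply with them. All class and field names, as well as the default values, are my own illustrative assumptions, not terminology from the proposed law.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch only: field names and defaults are assumptions, loosely
# mirroring the guardian-control and elderly-safety obligations quoted above.

@dataclass
class GuardianControls:
    """Settings a guardian could manage for a minor's account."""
    minor_mode_enabled: bool = True                  # switchable minor mode
    daily_usage_limit_minutes: int = 60              # usage time limits
    blocked_roles: set = field(default_factory=set)  # block specific roles/characters
    purchases_blocked: bool = True                   # prevent recharge and consumption
    real_time_risk_alerts: bool = True               # guardians receive safety risk alerts

@dataclass
class ElderlySafetySettings:
    """Settings for elderly users."""
    emergency_contact: Optional[str] = None          # provider guides the user to set this
    simulate_relatives_allowed: bool = False         # simulating relatives is prohibited
```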
Articles 16, 17, and 18 contain transparency measures and acknowledge the possibility of emotional dependency:
"Providers shall clearly indicate to users that they are interacting with artificial intelligence rather than a natural person.
When a provider identifies that a user is overly dependent or addicted, or when a user uses the service for the first time or logs in again, the provider should dynamically remind the user through pop-up windows or other means that the interactive content is generated by artificial intelligence.”
-
"If a user uses the anthropomorphic interactive service continuously for more than 2 hours, the provider shall dynamically remind the user to pause the use of the service through pop-up windows or other means.”
-
“When providing emotional companionship services, providers shall provide convenient exit methods and shall not prevent users from voluntarily exiting. When a user requests to exit through buttons, keywords, or other means in the human-computer interaction interface or window, the service shall be stopped promptly.”
The provisions above recognize that intensive use of anthropomorphic AI systems is a major risk factor (as recent cases of suicide and mental health harm in the U.S. show) and establish clear, context-specific design rules to protect users.
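To illustrate how concrete these design rules are, the following is a minimal sketch of the reminder logic a provider could implement to satisfy the quoted transparency provisions. The function name, parameters, and message wording are hypothetical; only the two-hour threshold and the trigger conditions come from the provisions above.

```python
from datetime import datetime, timedelta

# Illustrative sketch only: names and wording are assumptions; the two-hour
# threshold and the trigger conditions come from the provisions quoted above.

CONTINUOUS_USE_LIMIT = timedelta(hours=2)

def reminders_due(session_start: datetime, now: datetime,
                  first_use_or_new_login: bool, dependency_flagged: bool) -> list:
    """Return the reminders a provider would surface at this point in a session."""
    due = []
    # Remind the user that content is AI-generated on first use, on a new login,
    # or when over-dependence/addiction has been identified.
    if first_use_or_new_login or dependency_flagged:
        due.append("Reminder: this content is generated by artificial intelligence.")
    # After more than 2 hours of continuous use, prompt the user to pause.
    if now - session_start > CONTINUOUS_USE_LIMIT:
        due.append("You have been using this service for over 2 hours. Please consider taking a break.")
    return due
```

Calling a function like this at each turn of the conversation would let the interface decide when to show a pop-up, which is the dynamic-reminder behavior the provisions describe.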
Lastly, for those who might say that this law has no teeth, Article 29 establishes penalties:
"If a provider violates the provisions of these Measures, the relevant competent department shall impose penalties in accordance with the provisions of laws and administrative regulations; if there are no provisions in laws and administrative regulations, the relevant competent department shall, in accordance with its responsibilities, issue a warning or issue a notice of criticism and order rectification within a specified period; if the provider refuses to rectify or the circumstances are serious, it shall be ordered to suspend the provision of relevant services.”
You can read all the provisions of the proposed law here.
-
To my knowledge, no AI law anywhere in the world regulates anthropomorphic AI systems with this level of detail, strictness, and concern for context-specific vulnerabilities and potential risks.
Existing AI laws in the EU and the U.S., including the EU AI Act, are often overly abstract and contextually vague, leaving the door open for AI companies to do the least possible, deploy dark patterns, avoid compliance, and abuse the system. (I teach this in depth in my AI Governance Training).
I am not naive. China explicitly stated that it wants to win the AI race, and it will enforce this law strategically.
Additionally, it has political and economic systems that differ from those of the United States and EU countries. It is probably easier to control and enforce tighter rules, standards, and obligations there.
However, this proposed law offers a great example for states, countries, and regions that want to focus on context-specific risks, the protection of vulnerable users (especially minors, the elderly, and people with underlying mental health issues), and the prevention of context-specific AI harm.
It also establishes that AI providers will be held accountable for the harms caused by their anthropomorphic AI systems, an important factor in fostering compliance and safer AI systems.
AI regulation, especially when anthropomorphic AI systems are involved, works better when its provisions are clear, specific, protect vulnerable groups, and foster accountability.
*I know many of my readers are lawmakers, policymakers, and AI ethics advocates in the United States, Europe, and various other countries. I hope you enjoyed this article and that it will serve as a source of inspiration to improve existing regulatory frameworks for AI.



