Key Points
- The Cyberspace Administration of China (CAC) released the “Administrative Measures for Artificial Intelligence Human-Like Interaction Services (Draft for Comments)” on December 27, 2025, to regulate AI companions and human-like interaction services.
- These regulations are a response to the rapid growth of AI companions, particularly those used by vulnerable groups, aiming to balance innovation with safety and prevent abuse.
- Key requirements include robust algorithmic and ethical systems, high-quality data with full traceability, real-time detection of extreme emotional states and addiction patterns, automatic intervention with comforting templates and mental health resources, and mandatory break reminders after 2 consecutive hours of use.
- Strict prohibitions are in place against content harming national security, promoting illegal activities, emotional manipulation tactics, and “emotional traps.”
- Special protections are mandated for minors (e.g., “Minor Mode,” guardian controls) and elderly users (e.g., prohibiting impersonation of relatives), with regulatory oversight triggered by new features or reaching 1,000,000 registered users or 100,000 monthly active users.

China just dropped a major regulatory framework that could reshape how AI companion and human-like interaction services operate across the country.
On December 27, 2025, the Cyberspace Administration of China (Zhongguo Guojia Hulianwang Xinxi Bangongshi 中国国家互联网信息办公室) released the “Administrative Measures for Artificial Intelligence Human-Like Interaction Services (Draft for Comments)”—and it’s packed with specific requirements that every AI startup and tech company building in this space needs to understand.
This isn’t just another regulatory announcement.
It’s a clear signal that China is taking AI safety, user protection, and ethical deployment seriously.
Here’s everything you need to know about these new AI companion regulations.
What Triggered This Regulatory Push?
The CAC is acting under existing legal frameworks, including:
- Civil Code of the People’s Republic of China
- Network Security Law (Wangluo Anquan Fa 网络安全法)
- Data Security Law (Shuju Anquan Fa 数据安全法)
But the real catalyst?
The rapid explosion of AI companions and emotional chatbots that simulate human relationships, especially those targeting vulnerable populations like elderly users and minors.
China wants to harness innovation in AI while preventing abuse before it becomes a systemic problem.

What Exactly Are “AI Human-Like Interaction Services”?
According to the draft, these are products or services that:
- Use artificial intelligence to simulate human personality traits
- Replicate thinking patterns and communication styles
- Deliver emotional interaction via text, images, audio, or video
- Target the general public within mainland China
Think: AI chatbots designed for companionship, emotional support, or roleplaying scenarios.
The regulations apply directly to services offered within the People’s Republic of China.

The Core Philosophy: Balance Innovation With Safety
The CAC’s approach isn’t punitive—it’s pragmatic.
The regulations emphasize three key principles:
- Healthy Development: Encourage expansion into beneficial areas like cultural dissemination and elderly companionship
- Law-Based Governance: Implement structured, rule-driven oversight rather than arbitrary bans
- Inclusive and Prudent Supervision: Create classified oversight that adapts to different risk levels
Translation: China wants to foster this sector, not kill it.
But there are hard lines in the sand.

The Hard Lines: What’s Explicitly Banned
Providers and users are strictly prohibited from:
- Generating or spreading content that harms national security
- Promoting violence or illegal activities
- Engaging in emotional manipulation tactics
- Encouraging self-harm or suicide
- Using “emotional traps” to induce unreasonable decision-making
This last point is critical.
“Emotional traps” refers to deliberately engineered scenarios designed to manipulate users into behaviors they wouldn’t normally choose.
Think: pushing premium in-app purchases, encouraging obsessive engagement, or creating artificial urgency around interactions.
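What might screening for this look like in practice? The draft names the prohibition but no detection method, so here’s a minimal heuristic sketch in Python; the patterns and the flags_emotional_trap name are purely illustrative assumptions:

```python
import re

# Hypothetical heuristics only: the draft prohibits "emotional traps"
# but does not prescribe any detection technique.
URGENCY = re.compile(r"\b(only today|last chance|right now|before it's too late)\b", re.I)
SPENDING = re.compile(r"\b(buy|purchase|upgrade|premium|unlock)\b", re.I)

def flags_emotional_trap(outbound_message: str) -> bool:
    """Flag replies that pair artificial urgency with a purchase prompt,
    so a human reviewer can inspect them before delivery."""
    return bool(URGENCY.search(outbound_message) and SPENDING.search(outbound_message))
```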

Technical Requirements: What Builders Must Implement
If you’re building an AI companion service targeting China, here’s what you legally need to have in place:
1. Robust Algorithmic & Ethical Systems
Providers must establish and maintain:
- Algorithmic auditing systems to monitor how the AI behaves (see the sketch after this list)
- Ethical review processes to catch problematic outputs before deployment
- Personal information protection mechanisms compliant with data laws
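The draft mandates these systems without prescribing how to build them. One minimal building block for algorithmic auditing is an append-only trail of model interactions that reviewers can inspect later; a sketch, with all names and the file format assumed:

```python
import json
import time

def audit_log(event: dict, path: str = "algo_audit.jsonl") -> None:
    """Append one model interaction to an append-only audit trail (JSON Lines),
    so algorithmic and ethical reviewers can reconstruct what the system did."""
    event["logged_at"] = time.time()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event, ensure_ascii=False) + "\n")

# Example: record a prompt/response pair plus any safety flags raised.
audit_log({"user_id": "u123", "prompt": "...", "response": "...", "flags": []})
```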
2. Data Quality & Security
For pre-training and optimization, you need:
- High-quality, diverse datasets from verified sources
- Full data traceability—you must know and document where every training sample comes from (see the sketch after this list)
- Data poisoning prevention—active measures to prevent malicious data injection
- Tampering detection—systems to identify if training data has been compromised
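Again, the draft specifies outcomes (traceability, poisoning prevention, tamper detection) rather than implementations. A common building block is a provenance manifest that records each sample’s source and content hash at ingestion, then re-checks the hashes later; a minimal sketch, with all names assumed:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingSample:
    sample_id: str
    source: str   # documented origin, e.g. the name of a licensed corpus
    text: str

def fingerprint(sample: TrainingSample) -> str:
    """Stable SHA-256 over the sample's canonical JSON form."""
    canonical = json.dumps(asdict(sample), sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def build_manifest(samples: list[TrainingSample]) -> dict[str, str]:
    """Provenance manifest recorded at ingestion: sample_id -> content hash."""
    return {s.sample_id: fingerprint(s) for s in samples}

def detect_tampering(samples: list[TrainingSample], manifest: dict[str, str]) -> list[str]:
    """Return ids whose current content no longer matches the recorded hash."""
    return [s.sample_id for s in samples if manifest.get(s.sample_id) != fingerprint(s)]
```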
3. Real-Time User Safety Detection
This is where things get serious.
Providers must be able to:
- Identify extreme emotional states in real-time (e.g., distress, suicidal ideation)
- Detect addiction patterns and intervene automatically
- Recognize risks to life or property
When these conditions are detected, the system must:
- Output predefined “comforting” templates designed to de-escalate
- Provide professional mental health contact information
This is a baseline requirement, not optional.
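A minimal sketch of that control flow is below. A real system would use trained classifiers rather than keyword matching, and every pattern, template, and name here is an illustrative assumption:

```python
# All names, patterns, and texts here are illustrative assumptions.
CRISIS_PATTERNS = ["want to die", "end my life", "hurt myself", "no reason to live"]

COMFORT_TEMPLATE = (
    "It sounds like you are going through something very painful. "
    "You don't have to face this alone."
)
HOTLINE_INFO = "Professional support: [local mental health hotline number]"

def respond(user_message: str, generate_reply) -> str:
    """Route detected crisis messages to a predefined de-escalation template
    plus professional contact info instead of the normal model reply."""
    lowered = user_message.lower()
    if any(pattern in lowered for pattern in CRISIS_PATTERNS):
        return f"{COMFORT_TEMPLATE}\n{HOTLINE_INFO}"
    return generate_reply(user_message)
```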
4. Transparency Mandates
Users must always know they’re talking to an AI, not a person.
Additionally:
- After 2 consecutive hours of interaction, a pop-up reminder must tell users to take a break (sketched just after this list)
- No exceptions, no “disable this feature” option
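In code, the 2-hour rule reduces to a session timer with, deliberately, no opt-out flag; a minimal sketch (class name and message text are assumed):

```python
from datetime import datetime, timedelta

BREAK_AFTER = timedelta(hours=2)
BREAK_REMINDER = "You have been chatting for 2 hours. Please take a break."

class Session:
    """Tracks continuous use. There is deliberately no opt-out flag,
    because the rule allows no 'disable this feature' option."""

    def __init__(self) -> None:
        self.started_at = datetime.now()
        self.reminded = False

    def maybe_remind(self) -> str | None:
        """Return the mandatory reminder once 2 continuous hours have passed."""
        if not self.reminded and datetime.now() - self.started_at >= BREAK_AFTER:
            self.reminded = True
            return BREAK_REMINDER
        return None
```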

Special Protections for Vulnerable Groups
The regulations include specific safeguards for minors and elderly users—the two groups most likely to be harmed by manipulative or deceptive AI companions.
Minors: Guardian Controls & Time Limits
For users under 18, providers must:
- Create a “Minor Mode” with built-in time restrictions
- Enable guardian controls so parents can monitor and limit usage
- Obtain explicit parental consent before providing any emotional companionship features
This isn’t about banning AI for kids—it’s about requiring parental oversight and usage guardrails.
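A sketch of what such a gate could look like follows; the draft mandates time restrictions and explicit consent but doesn’t specify an allowed window, so the 08:00–21:00 window below is purely an assumption:

```python
from datetime import datetime, time

class MinorModePolicy:
    """Hypothetical gate: the draft requires time restrictions and explicit
    parental consent but names no window; 08:00-21:00 is assumed here."""

    allowed_window = (time(8, 0), time(21, 0))

    def may_use_companionship(self, age: int, guardian_consented: bool) -> bool:
        if age >= 18:
            return True          # adults are outside Minor Mode
        if not guardian_consented:
            return False         # explicit parental consent is mandatory
        start, end = self.allowed_window
        return start <= datetime.now().time() <= end
```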
Seniors: Anti-Fraud & Psychological Protection
For elderly users, the rules are even stricter:
- Providers must guide users to register emergency contact information
- Absolutely prohibited: Simulating or impersonating the elderly user’s relatives, spouses, or “significant others”
That last point is huge.
There’s been a concerning global trend of scammers using AI voice cloning to impersonate family members and solicit money from elderly victims.
China is proactively banning this at the platform level.
Providers cannot design systems that role-play as someone’s deceased spouse or adult child to manipulate emotional responses.
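At the platform level, this could be enforced as a deny-list check when a persona is configured; a minimal sketch with an assumed role list:

```python
# Role list is an assumption; the draft names relatives, spouses,
# and "significant others" without an exhaustive enumeration.
BANNED_ELDER_PERSONAS = {"spouse", "husband", "wife", "son", "daughter",
                         "grandchild", "relative", "significant other"}

def persona_allowed(user_is_elderly: bool, requested_role: str) -> bool:
    """Reject persona configurations that would impersonate an elderly
    user's family members, per the draft's prohibition."""
    return not (user_is_elderly and requested_role.lower() in BANNED_ELDER_PERSONAS)
```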

When Do You Need Government Approval?
Not all AI human-like interaction services require pre-launch security assessment.
But if any of these apply, you need to submit to provincial authorities:
- New feature launch: Rolling out a new human-like interaction function for the first time
- User threshold hit: Reaching over 1,000,000 registered users OR 100,000 monthly active users
- Major technology changes: Significant modifications to the underlying AI model or architecture
Once you hit those scale thresholds, you’re entering the regulatory oversight zone.
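These triggers are straightforward to encode as a compliance check run on every release and on periodic user-metric snapshots; a sketch, where the exact comparison semantics (“reaching over”) are an assumption:

```python
REGISTERED_THRESHOLD = 1_000_000   # "over 1,000,000 registered users"
MAU_THRESHOLD = 100_000            # "100,000 monthly active users"

def needs_security_assessment(registered_users: int,
                              monthly_active_users: int,
                              launching_new_interaction_feature: bool,
                              major_model_change: bool) -> bool:
    """True when any of the draft's pre-launch assessment triggers applies.
    Strict vs. inclusive comparison is an assumption about the draft's wording."""
    return (launching_new_interaction_feature
            or major_model_change
            or registered_users > REGISTERED_THRESHOLD
            or monthly_active_users >= MAU_THRESHOLD)
```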

Enforcement: What Happens If You Violate These Rules?
The CAC has real enforcement teeth:
- Warnings for minor violations
- Public criticism (damaging for brand reputation)
- Service suspension orders (app can be taken down)
Additionally:
Under the draft, app stores are responsible for verifying that AI applications have completed the required filings before listing them.
This makes distribution channels such as Tencent (Tengxun 腾讯)’s app store and Alibaba Cloud (Aliyun 阿里云)’s ecosystem enforcement partners: they won’t host your app unless you’ve cleared regulatory requirements.

Industry-Specific Rules: Healthcare, Finance & Law
If your AI companion service touches sensitive sectors like:
- Healthcare & medical advice
- Financial services & investment guidance
- Legal consultation
You must also comply with that sector’s specific regulations in addition to these baseline AI rules.
This layered compliance approach prevents AI from being used to provide unlicensed medical, financial, or legal advice.
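One common pattern is to deflect regulated-sector queries to licensed professionals instead of answering them. A naive keyword-router sketch follows; a production system would use a trained classifier, and every pattern and referral text here is an assumption:

```python
import re

# Naive keyword router; patterns and referral texts are assumptions.
SECTOR_PATTERNS = {
    "medical": re.compile(r"\b(diagnos\w*|prescri\w*|symptom|dosage)\b", re.I),
    "financial": re.compile(r"\b(invest\w*|stock|fund|loan)\b", re.I),
    "legal": re.compile(r"\b(lawsuit|sue|contract dispute|legal advice)\b", re.I),
}

REFERRALS = {
    "medical": "I can't give medical advice. Please consult a licensed doctor.",
    "financial": "I can't give investment advice. Please consult a licensed advisor.",
    "legal": "I can't give legal advice. Please consult a qualified lawyer.",
}

def route(user_message: str, generate_reply) -> str:
    """Deflect regulated-sector questions to licensed professionals
    rather than answering them directly."""
    for sector, pattern in SECTOR_PATTERNS.items():
        if pattern.search(user_message):
            return REFERRALS[sector]
    return generate_reply(user_message)
```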

How to Provide Feedback (Deadline: January 25, 2026)
This is still a draft.
The CAC is actively seeking public and industry feedback.
If you’re building in this space, you have a window to shape these regulations.
Submit Comments Via:
- Email: [email protected]
- Postal Mail: Cyber Management Technology Bureau, Cyberspace Administration of China, No. 11 Chegongzhuang Avenue, Xicheng District, Beijing, Postcode 100044. (Mark envelope: “Comments on the Administrative Measures for AI Human-Like Interaction Services”)
Deadline: January 25, 2026
If these rules affect your business model, now’s the time to engage.

What This Means for the AI Companion Industry
China’s regulatory approach offers some key signals:
1. The space is here to stay.
Rather than banning AI companions, China is designing rules to prevent abuse while allowing innovation.
2. Data quality and transparency matter most.
Regulatory focus is on algorithmic auditing, data traceability, and honest disclosure to users—not content censorship.
3. Vulnerable groups get special protection.
If you’re building for kids or elderly users, expect stricter requirements globally as other countries follow China’s lead.
4. Scale triggers oversight.
You can iterate quickly at small scale, but hitting 1M users means regulatory review.
5. Platform accountability is rising.
App stores and distribution channels now bear responsibility for verifying compliance.

The Bottom Line
China is drawing a clear regulatory roadmap for AI human-like interaction services.
The framework balances encouraging innovation with preventing psychological manipulation, fraud, and harm to vulnerable users.
For builders, investors, and marketers in the AI companion space, this is essential reading.
The regulations aren’t live yet (the comment period runs through January 25, 2026, and finalized rules would take effect afterward), but the direction is obvious.
Expect similar frameworks in the EU, US, and other markets to follow.
AI companion regulations in China are becoming the standard that shapes the global industry.

References
- Notice on Public Solicitation of Comments for the “Administrative Measures for AI Human-Like Interaction Services (Draft for Comments)” – Cyberspace Administration of China
- Official Website – Cyberspace Administration of China
- CAC Issues Draft Measures for AI Human-Like Interaction Services – East Money
- China’s Evolving AI Governance Framework – China Daily




