Key Points
- China’s Cyberspace Administration (CAC) has launched a three-month “Qinglang” campaign targeting the abuse of AI technology.
- The campaign aims to regulate AI services, prevent misuse like deepfakes and misinformation, protect user rights, and ensure “healthy development” of the AI industry.
- It is divided into two phases: Phase One focuses on governance at the source (non-compliant products, training data issues, security measures, content labeling), and Phase Two targets prominent misuses (rumors, false info, explicit content, impersonation, online manipulation).
- Key targets include the use of AI for “one-click undressing,” “AI prescribing medication,” “AI fortune-telling,” “online water armies,” and the impersonation of public figures or friends for fraud.
- The CAC emphasizes the need for stronger content review, technical detection, and public AI literacy education. Companies operating in China’s AI space must prioritize compliance.
If you’re watching the AI space in China, listen up.
The Cyberspace Administration of China (CAC, Zhongyang Wangxinban 中央网信办) just rolled out a major nationwide campaign: the “Qinglang – Rectification of AI Technology Abuse” special action.
This isn’t just another announcement; it’s a clear signal about how China intends to manage the rapidly evolving world of Artificial Intelligence.

What’s the Goal of the Qinglang AI Campaign?
Simply put, the CAC wants to:
- Regulate AI services and applications more tightly.
- Foster “healthy and orderly development” of the AI industry.
- Protect the rights and interests of Chinese citizens online.
This three-month campaign is split into two distinct phases.
Think of it as a one-two punch targeting AI governance from source to application.
An official from the CAC laid out the plan:
Phase One: Focuses on strengthening AI governance at the source.
- Cleaning up non-compliant AI applications.
- Beefing up management of AI-generated content and labeling.
- Pushing platforms to improve detection and authentication.
Phase Two: Targets specific, high-profile issues.
- Using AI to spread rumors and false information.
- Generating pornographic or vulgar content via AI.
- Impersonating others using AI tools.
- Deploying AI for “online water army” activities (coordinated manipulation).
- Penalizing non-compliant accounts, MCN agencies, and platforms.
Let’s break down the specific areas under scrutiny in each phase.
Phase One Breakdown: Cleaning Up AI at the Source (6 Key Areas)
The CAC is hitting these points hard in the initial phase:
1. Non-compliant AI Products:
- Offering generative AI services to the public without completing the required filing or registration for large models.
- Including features that violate laws or ethics, like notorious “one-click undressing” tools.
- Unauthorized cloning or editing of biometric data (voice, face) – a major privacy violation.
2. Teaching and Selling Non-compliant AI Tools:
- Spreading tutorials on creating deepfakes (face-swapping, voice cloning) with illegal tools.
- Selling products like “voice synthesizers” or “face-swapping tools” that enable misuse.
- Marketing or hyping information about these non-compliant products.
3. Lax Management of Training Data:
- Using data that infringes on intellectual property or privacy rights.
- Incorporating false, invalid, or inaccurate web-scraped content.
- Sourcing data illegally.
- Lacking proper training data management and failing to clean up non-compliant data regularly. (This is huge for model integrity and bias).
4. Weak Security Management Measures:
- Failing to implement security measures (like content review, intent recognition) appropriate for the business scale.
- Lacking effective ways to manage non-compliant accounts.
- Not conducting regular security self-assessments.
- Social platforms having unclear or lax rules for AI auto-reply services accessed via APIs.
5. Failure to Implement Content Labeling:
- Service providers not adding clear labels (implicit or explicit) to deep synthetic content.
- Not providing users with tools or prompts for explicit labeling.
- Distribution platforms failing to monitor and identify AI-generated synthetic content, leading to public confusion. (Transparency is key here; a minimal labeling sketch follows this list.)
6. Security Risks in Key Areas (Healthcare, Finance, Minors):
- Filed AI products offering Q&A in sensitive fields without targeted safety reviews and controls.
- Leading to dangerous issues like “AI prescribing medication,” “investment inducement,” or “AI hallucinations” that mislead users (especially students, patients) and disrupt financial markets.
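To make the content-labeling requirement in area 5 concrete, here is a minimal sketch of what the two label types might look like for a text-generation service: a visible (explicit) notice appended to the output, plus machine-readable (implicit) provenance metadata. Everything here, from the function names to the metadata fields, is our own illustrative assumption, not a format prescribed by the CAC.

```python
import hashlib
import json
from dataclasses import dataclass

# Visible disclosure text shown to end users (explicit label).
EXPLICIT_LABEL = "内容由AI生成 (AI-generated content)"

@dataclass
class GeneratedText:
    body: str       # the AI-generated content itself
    provider: str   # hypothetical service identifier
    model: str      # hypothetical model identifier

def add_explicit_label(item: GeneratedText) -> str:
    """Append a human-readable AI-generation notice to the content."""
    return f"{item.body}\n\n[{EXPLICIT_LABEL}]"

def add_implicit_label(item: GeneratedText) -> dict:
    """Build machine-readable provenance metadata a platform could detect.
    Field names are illustrative assumptions, not a mandated schema."""
    return {
        "ai_generated": True,
        "provider": item.provider,
        "model": item.model,
        "content_sha256": hashlib.sha256(item.body.encode("utf-8")).hexdigest(),
    }

if __name__ == "__main__":
    item = GeneratedText("Example output.", "example-service", "demo-model")
    print(add_explicit_label(item))
    print(json.dumps(add_implicit_label(item), ensure_ascii=False, indent=2))
```

A distribution platform could run the mirror-image check on upload: look for the implicit metadata and, if the explicit notice is missing, surface one itself.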

Phase Two Focus: Tackling Prominent AI Misuses (7 Key Areas)
Once the foundations are addressed, the campaign shifts to specific harmful applications:
1. Using AI for Rumors:
- Fabricating rumors about current affairs, politics, social issues, emergencies, etc.
- Maliciously interpreting or speculating on major policies.
- Exploiting disasters by creating false narratives about causes or details.
- Impersonating official sources (press conferences, news reports) to spread rumors.
- Leveraging AI to exploit cognitive biases and manipulate public opinion.
2. Using AI for False Information:
- Stitching together unrelated content to create misleading “mixed real-and-fake” info.
- Altering time, place, or people to recirculate old news as new.
- Creating exaggerated or pseudoscientific content in fields like finance, education, health.
- Using AI fortune-telling or divination to mislead and spread superstition.
3. Using AI for Pornographic/Vulgar Content:
- Generating non-consensual explicit content using “AI undressing” or “AI drawing.”
- Creating suggestive or soft-core pornographic images (borderline “anime-style” content, deliberately vulgar “ugly-style” trends).
- Generating bloody, violent, distorted, or terrifying imagery.
- Creating synthetic erotic stories, posts, or jokes.
4. Using AI for Impersonation and Illegal Activities:
- Using deepfakes (face-swapping, voice cloning) to impersonate public figures (experts, celebs) for deception or profit.
- Spoofing, smearing, or distorting representations of public or historical figures.
- Impersonating relatives/friends for online fraud.
- Improper use of AI to “resurrect the deceased” or abuse their information.
5. Using AI for Online Water Army Activities:
- Using AI for “account farming” (batch creating fake accounts).
- Using AI content farms/spinners to generate low-quality, homogenous content for traffic.
- Using AI group-control software or bots to manipulate engagement metrics (likes, follows, comments) and create fake trends. (A major headache for platforms and marketers; a toy detection heuristic follows this list.)
6. Non-compliant AI Products, Services, and Applications:
- Creating counterfeit or shell AI websites/apps.
- AI apps offering non-compliant features (e.g., tools auto-generating articles from trending topics, AI chat companions offering vulgar services).
- Selling or promoting non-compliant AI apps, services, or courses.
7. Infringing on Minors’ Rights:
- AI applications designed to be addictive for minors.
- Including content harmful to minors’ physical or mental health, even within designated “minor modes.”
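On the “online water army” item (area 5 above), one of the simplest coordination signals a platform can look for is many accounts posting near-identical text. Below is a toy heuristic in that spirit; the normalization step, the threshold, and the data shape are all illustrative assumptions, and real systems combine far more signals (timing, account age, device fingerprints).

```python
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse case and whitespace so trivially varied copies still match."""
    return " ".join(text.lower().split())

def flag_duplicate_clusters(comments: list[tuple[str, str]],
                            min_accounts: int = 5) -> dict[str, set[str]]:
    """comments: (account_id, text) pairs.
    Returns clusters of identical text posted by >= min_accounts accounts."""
    clusters: dict[str, set[str]] = defaultdict(set)
    for account, text in comments:
        clusters[normalize(text)].add(account)
    return {t: accts for t, accts in clusters.items() if len(accts) >= min_accounts}

if __name__ == "__main__":
    # Six accounts push the same promotional line; one posts organically.
    feed = [(f"user{i}", "Great product, everyone should buy it!") for i in range(6)]
    feed.append(("user99", "Mixed feelings about this one."))
    for text, accounts in flag_duplicate_clusters(feed).items():
        print(f"{len(accounts)} accounts posted: {text!r}")
```

Running this flags the six coordinated accounts while leaving the organic comment alone, which is roughly the shape of the “fake engagement” detection the campaign expects platforms to do at scale.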

Why This Matters: The CAC’s Stance and Future Outlook
The CAC official stressed the critical importance of this campaign.
They’re calling on cyberspace departments at all levels to step up, oversee platforms, and ensure these rectification measures stick.
This involves:
- Improving AI-generated content review mechanisms.
- Enhancing technical detection capabilities.
- Strengthening the promotion of AI policies and AI literacy education.
The message is clear: China is serious about managing the potential downsides of AI while trying to guide its development.
For anyone operating in the AI space in China – developers, platforms, investors – understanding and complying with these evolving regulations is crucial.
This “Qinglang” campaign sets a strong precedent for AI governance in China.
FAQs
What is the “Qinglang – Rectification of AI Technology Abuse” campaign?
It’s a three-month nationwide initiative by China’s Cyberspace Administration (CAC) aimed at regulating AI services, preventing misuse like deepfakes and misinformation, protecting user rights, and ensuring the healthy development of the AI industry in China.
Who does this campaign affect?
It affects AI service providers, platform operators (websites, social media), MCN agencies, developers using AI, and potentially end-users. Anyone creating, deploying, or hosting AI-driven services or content in China needs to pay attention.
What are some key examples of AI abuse being targeted?
Key targets include using AI to create deepfakes for impersonation or spreading rumors, generating non-consensual explicit content (“AI undressing”), spreading false information, using AI for coordinated online manipulation (“water armies”), and offering AI services in sensitive areas like health or finance without proper safeguards.
What happens if companies or individuals don’t comply?
The campaign notice mentions imposing penalties on non-compliant accounts, MCN agencies, and website platforms. This could range from content takedowns and account suspensions to potentially more significant regulatory actions depending on the severity and persistence of the violation.
References
- Cyberspace Administration of China (CAC), via China Cyberspace Network (Zhongguo Wangxinwang 中国网信网)
- Original article: 中央网信办部署开展“清朗·整治AI技术滥用”专项行动 (“CAC Launches the ‘Qinglang: Rectification of AI Technology Abuse’ Special Action”), published April 30, 2025