The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction [File]
| Photo Credit: REUTERS
China’s cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction.
The move underscores Beijing’s effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements.
The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means.
The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction.
Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection.
The draft also targets potential psychological risks. Providers would be expected to monitor user states, assessing users’ emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, providers should take necessary measures to intervene, the draft said.
The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity.
Published – December 29, 2025 09:32 am IST
