China issues draft rules to regulate AI with human-like interaction



The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction [File]

| Photo Credit: REUTERS

China’s cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction.


The move underscores Beijing’s effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements.

The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means.

The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction.

Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection.

The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users’ emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, providers should take necessary measures to intervene, it said.

The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity.


