1. Use cases of agentic AI

In recent months, OpenClaw and other agentic AI products have marked a watershed moment in the practical use of AI. Unlike conventional SaaS AI chatbots or traditional agents, these AI agents leverage multi-agent cooperation across devices and autonomous decision-making, significantly boosting efficiency and productivity while drastically cutting operational costs.

However, this technological shift also carries significant legal implications and creates new compliance obligations. For MNCs operating in China, deploying agentic AI may expose companies to risks under applicable Chinese laws and regulations.

2. Regulatory regimes and risks

Under applicable Chinese laws and regulations, deploying agentic AI falls under multiple legal regimes, including (a) AI and algorithm regulations; (b) data protection and cybersecurity rules; (c) allocation of liability and responsibility arising from the involvement of third parties in deploying AI agents; and (d) the need for HR management measures to address employees’ individual and uncontrolled use of AI agents.

2.1. AI and algorithm regulations

Chinese regulators have developed comprehensive regulatory frameworks for AI and algorithm technologies, with the current framework built around the following three major regulations: (a) Provisional Measures for the Administration of Generative Artificial Intelligence Services (the GAI Measures); (b) Administrative Provisions on Algorithm Recommendation for Internet Information Services (the Algorithm Provisions); and (c) Administrative Provisions on Deep Synthesis of Internet-based Information Services (the Deep Synthesis Provisions). In addition, Chinese regulators have issued primary guidance on developing and providing GAI services, namely the Basic Security Requirements for Generative Artificial Intelligence Service, as well as requirements on watermarking and labelling of generated content.

Most of the above regulations target GAI service providers that develop and provide such services. Companies must draft and execute service and data processing agreements with external users, clearly defining rights, obligations, and responsibilities, while implementing the transparency principle by disclosing detailed GAI notification documents that explain the technology, its rationale, use cases, and limitations. High-risk scenarios, such as facial recognition, automated decision-making, or processing sensitive personal information, must be identified and strictly limited, or prohibited where the risks are unavoidable. Given the breadth of the applicable legal requirements, business entities should analyse compliance risks on a case-by-case basis according to the specific service scenario.

Furthermore, if companies go beyond internal use and plan to package certain use cases as products to be provided to external parties (e.g., customers, vendors, third parties), this is likely to be regarded by Chinese regulators as provision to the public. Companies will therefore be required to complete algorithm filing and GAI filing with Chinese regulators for products and services provided to external users and, for such purposes, to review and limit the models used in those products. In practice, only local companies, or foreign companies with a local presence in China, can apply for the filings.

Failure to meet these AI and algorithm requirements may result in fines of up to CNY 100,000 (approximately US$14,700), plus additional penalties including shutdown orders and revocation of business licences. More importantly, as agentic AI involves heightened IT security risks (see section 2.2 below) and civil liability exposure (see section 2.3 below), providing AI-agent-based services to external users carries significant legal risks at the current stage.

2.2. Data protection and cybersecurity

In addition to AI-specific regulations, agentic AI may also carry legal implications under the data protection and cybersecurity regime, primarily under the following three fundamental laws of China: (a) the Cybersecurity Law (the CSL); (b) the Data Security Law (the DSL); and (c) the Personal Information Protection Law (the PIPL).

  • Collection and processing of personal information

Where agentic AI is used in processing sales and customer relationships, handling project execution, and managing administrative, HR, and financial matters, companies may collect and process personal information (PI), sometimes including sensitive personal information (SPI), of employees, vendors, and consumers.

Under the PIPL, processing PI requires business entities to notify data subjects and obtain explicit consent, while also promptly responding to data subject requests and implementing necessary security safeguards such as encryption and de-identification. In addition, as leveraging agentic AI may involve data sharing or entrusted processing with third-party AI model providers (e.g., via API), companies are also required to execute data processing agreements, implement controls over third-party providers, and conduct personal information protection impact assessments (PIPIA).

Moreover, agentic AI relies on screenshot-based GUI/RPA processes and underlying interface calls, making it capable of accessing all user behaviours and content. PI is therefore collected not only from inputs and the data being analysed, but also from data that is captured and transferred without ever appearing in outputs. Such “silent collection” needs to be incorporated into the compliance review.
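
To make this concrete, below is a minimal sketch of how such silent collection could be surfaced for compliance review: text extracted from screen captures is scanned for PI patterns, and matches are logged and redacted before anything leaves the device, so that data the agent captures but never shows in its outputs still appears in the audit trail. The regex patterns, logger name, and function are illustrative assumptions, not features of any particular agent product.

```python
import logging
import re

# Illustrative PI patterns only; a real deployment would need far
# broader detectors (names, addresses, financial data, etc.).
PI_PATTERNS = {
    "cn_mobile": re.compile(r"\b1[3-9]\d{9}\b"),    # PRC mobile number
    "cn_id_card": re.compile(r"\b\d{17}[\dXx]\b"),  # PRC resident ID number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

audit_log = logging.getLogger("agent.pi_audit")  # hypothetical logger name

def audit_screen_capture(capture_id: str, ocr_text: str) -> str:
    """Log and redact PI found in a screen capture before the text is
    forwarded to a model, so "silent collection" is auditable."""
    redacted = ocr_text
    for label, pattern in PI_PATTERNS.items():
        hits = pattern.findall(redacted)
        if hits:
            # Record category and count only, never the raw PI values.
            audit_log.warning("capture %s: %d %s value(s) redacted",
                              capture_id, len(hits), label)
            redacted = pattern.sub(f"[{label} redacted]", redacted)
    return redacted
```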
 

  • Cross-border data transfer

As agentic AI relies on the capabilities of large language models (LLMs), if an agent’s Model Context Protocol (MCP) connects to an overseas closed-source LLM or to an LLM deployed on overseas servers, data transferred via API connections may constitute cross-border data transfer (CBDT) under the PIPL and relevant Chinese laws and regulations. This triggers stringent compliance requirements, mainly including notification to and consent of data subjects, preparation of a PIPIA report, and execution of data processing agreements with overseas data recipients.
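
As a purely technical illustration, companies can gate agent traffic so that requests carrying PI are released only to endpoints known to be hosted onshore. The sketch below assumes a hypothetical allowlist of domestic endpoints and a caller that already knows whether a request contains PI; neither is defined by the regulations themselves.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of model/MCP endpoints confirmed by the
# compliance team to be deployed on servers inside mainland China.
DOMESTIC_ENDPOINTS = {
    "llm.internal.example.cn",
    "mcp-gateway.example.com.cn",
}

def release_request(endpoint_url: str, contains_pi: bool) -> bool:
    """Allow a model or MCP call only if it cannot amount to an
    unapproved cross-border transfer of personal information."""
    if not contains_pi:
        return True  # no PI involved, no CBDT question
    host = urlparse(endpoint_url).hostname or ""
    if host in DOMESTIC_ENDPOINTS:
        return True  # PI stays onshore
    # PI bound for an overseas (or unknown) endpoint: block until the
    # CBDT requirements (consent, PIPIA, agreements, filings) are met.
    return False
```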

In addition, if the number of data subjects whose PI is subject to CBDT exceeds certain thresholds set by Chinese regulators (e.g., involving more than 10,000 individuals’ PI or involving SPI), companies must select and implement one of the Chinese CBDT compliance mechanisms: a regulator-led security assessment, Standard Contractual Clauses (SCC) filing, or third-party certification.

  • Security incidents and data breaches

By deploying agentic AI, companies introduce risks of incidents and attacks against their information systems that may compromise data protection and cybersecurity.

In Q1 2026 alone, nearly 100 vulnerabilities were publicly disclosed across major agentic AI products. Certain vulnerabilities allow attackers to remotely and completely hijack locally deployed AI agents by inducing users to visit malicious web pages. More importantly, AI agents’ automation relies heavily on access to and permissions over internal systems and data resources, while lacking proper isolation and audit capabilities. This gives attackers the ability to take over information systems quickly by targeting AI agents.

In addition to these high-risk vulnerabilities, malicious plugins (or even SKILL.md documents) and prompt injection attacks (which can be hidden in ordinary internet sources) may further increase the risk of security incidents. Furthermore, LLM “hallucinations” may lead to erroneous decisions during automated execution – for example, accidental deletion of data – causing irreversible damage.
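
One widely discussed mitigation, sketched below with assumed tool names, is a hard approval gate in front of destructive actions: whatever the model “decides” – including under prompt injection or hallucination – deletions and similar irreversible operations cannot execute without explicit human confirmation.

```python
import os

# Hypothetical dispatch table; in practice this mirrors the tools
# actually exposed to the agent in a given deployment.
TOOL_REGISTRY = {
    "read_file": lambda path: open(path, encoding="utf-8").read(),
    "delete_file": lambda path: os.remove(path),
}

# Tools whose effects are destructive or irreversible.
DESTRUCTIVE_TOOLS = {"delete_file"}

def execute_tool(tool_name: str, args: dict, human_approved: bool = False):
    """Run an agent tool call, refusing destructive actions that lack
    explicit out-of-band human approval."""
    if tool_name in DESTRUCTIVE_TOOLS and not human_approved:
        raise PermissionError(f"{tool_name} requires human approval")
    return TOOL_REGISTRY[tool_name](**args)
```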

Since last month, the Ministry of Industry and Information Technology has issued multiple risk warnings on deploying AI agents, recommending strict control of internet exposure, minimisation of permissions, and establishment of long-term protection mechanisms, such as log audits and vulnerability patching.
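
In engineering terms, “minimisation of permissions” often means issuing each agent task a narrow, short-lived grant instead of standing system-wide access, with every issuance logged for audit. The sketch below illustrates the idea under assumed names; a real deployment would anchor it in the company’s actual identity and access management stack.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentGrant:
    """A short-lived, task-scoped permission grant for one agent run."""
    task_id: str
    allowed_actions: frozenset  # e.g. frozenset({"crm:read"})
    expires_at: datetime

    def permits(self, action: str) -> bool:
        return (action in self.allowed_actions
                and datetime.now(timezone.utc) < self.expires_at)

def issue_grant(task_id: str, actions: set[str],
                ttl_minutes: int = 30) -> AgentGrant:
    """Issue a minimal grant and log it so all grants are auditable."""
    print(f"AUDIT grant issued task={task_id} actions={sorted(actions)}")
    return AgentGrant(
        task_id=task_id,
        allowed_actions=frozenset(actions),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )
```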

Under applicable Chinese regulations, companies are required to develop emergency plans, implement them, and carry out regular drills; the plans must be activated immediately upon detection of an incident. Companies must report security incidents to regulators within four hours of detection, and incidents involving critical information infrastructure or major data security incidents are subject to even shorter reporting deadlines. In addition, any security incidents that may affect data subjects must be promptly notified to them, with notifications covering the details of the incident, the potential harm, and the mitigation measures taken.
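
Because the reporting clock starts at detection, incident tooling can activate the emergency plan and compute the regulatory deadline automatically. A minimal sketch, assuming the general four-hour window described above (shorter windows for critical information infrastructure would be configured separately):

```python
from datetime import datetime, timedelta, timezone

# General reporting window described in this alert; critical information
# infrastructure and major data security incidents have shorter windows.
GENERAL_REGULATOR_WINDOW = timedelta(hours=4)

def on_incident_detected(incident_id: str,
                         detected_at: datetime | None = None) -> datetime:
    """Activate the emergency plan and return the regulator reporting
    deadline, measured from the moment of detection."""
    detected_at = detected_at or datetime.now(timezone.utc)
    deadline = detected_at + GENERAL_REGULATOR_WINDOW
    # In a real system these prints would trigger the documented
    # emergency plan and notification workflows.
    print(f"EMERGENCY PLAN ACTIVATED: {incident_id}")
    print(f"Report to regulator by {deadline.isoformat()}")
    return deadline
```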

Companies are also required to implement ongoing security measures – including encryption, access control, intrusion detection, data de-identification, and employee training – to ensure data protection and cybersecurity. These measures should be regularly audited and updated to address evolving threats.

2.3. Liability and risk allocation

When AI agents can act on their own, their mistakes may cause financial losses (e.g., errors in investment decision-making leading to millions of dollars in losses), operational failures (e.g., wrongful and irreversible deletion of important professional correspondence with key clients), data breaches (e.g., a hijacked agent compromising critical internal information systems), or even regulatory violations (e.g., an agent with access to public accounts being affected by hallucinations and posting unlawful content online).

In such scenarios, liability is distributed across a complex web of agent developers, LLM providers, deploying companies, and individual employees using AI agents. Because AI agents’ use cases – and their failures – extend far beyond the traditional “principal-agent” structure between human beings, existing rules of agency, trust, joint liability, product liability, and fiduciary duties cannot keep pace with these emerging realities. As the law invariably lags behind technology, current regimes do not provide clear-cut answers as to who bears liability when an AI agent goes awry. In response to this uncertainty, carefully drafted contractual clauses (e.g., indemnity clauses) can be critical in potential disputes.

In reviewing contractual clauses for agentic AI, companies may wish to consider requiring disclosure of AI components and warranties against backdoors and legal non-compliance, along with indemnification clauses for losses arising from AI use. Additionally, contracts should disclose AI involvement in creating client deliverables and include liability caps (without explicit warranties) for damages caused by unreviewed agent outputs.

2.4. HR management and internal control

Given their automated nature and the risks outlined above, unauthorised use of agentic AI by employees must be strictly managed (if not prohibited) through clear policies, mandatory training, and real-time monitoring, especially in light of heightened compliance and security exposure.

To this end, and specifically for shadow agent scenarios, the National Technical Committee 260 on Cybersecurity of Standardization Administration of China (SAC/TC260) recently published security guidance on the deployment and use of “OpenClaw-like agents” for public consultation (documentation No. TC260-PG-2026NA). Companies are recommended to establish a management system that defines prohibited behaviours, approval processes, and usage boundaries for internal AI use, along with an agent asset registration form to record deployment details, responsible persons, and access scopes for each approved agent. It is also suggested that employee handbooks be updated to explicitly prohibit shadow agents and that regular security training be conducted, thereby limiting vicarious liability while preserving disciplinary rights. Additionally, companies should deploy a shadow agent detection mechanism to scan internal networks and analyse traffic logs for unauthorised AI activity, while logging all agent actions to ensure auditability and traceability within the company’s IT environment.
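
As one hedged illustration of such a detection mechanism, the sketch below scans proxy or traffic logs for connections to well-known public LLM API hosts from devices that have no entry in the agent asset register. The host list, device names, and CSV log format are assumptions to be adapted to the company’s own environment.

```python
import csv

# Hypothetical indicators: public LLM/agent API hosts, to be extended
# from threat intelligence and vendor documentation.
LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Devices with approved agents recorded in the asset register.
REGISTERED_AGENT_DEVICES = {"build-server-01", "analyst-ws-07"}

def find_shadow_agents(proxy_log_path: str) -> list[dict]:
    """Flag devices calling LLM APIs without a registered agent.
    Assumes a CSV proxy log with 'device' and 'dest_host' columns."""
    findings = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["dest_host"] in LLM_API_HOSTS
                    and row["device"] not in REGISTERED_AGENT_DEVICES):
                findings.append(row)
    return findings
```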

3. Compliance measures

Agentic AI is not only a useful technology for companies’ internal workflows, but also highly regulated algorithmic infrastructure. For MNCs and other businesses operating in the Chinese market, compliance can no longer rely solely on reactive content moderation or generic AI ethics policies. Instead, compliance strategies governing how agentic AI products and services operate, and how AI agents are deployed in daily business operations, should be reflected in management and technical measures, as well as in corporate policies, contract terms, and HR documentation.

Therefore, businesses are advised to consider the following measures to ensure compliance:

  • Conduct a security audit of all information systems that may interact with AI agents, and issue a 30-day moratorium on new agent deployments pending completion of the audit.
  • Draft shadow agent prohibition clauses for inclusion in employee handbooks and IT policies, update those documents, and complete the mandatory processes according to applicable Chinese employment rules to ensure the amendments are valid and enforceable. Regular security training should be conducted for employees.
  • Review agreements with vendors, suppliers, customers, clients, and other third parties, identifying and updating AI-agent-related clauses and adding necessary protections through indemnities, liability caps, and warranty disclaimers.
  • Implement a compliance gap analysis across relevant regulatory regimes; identify, map, and prioritise compliance risks and gaps; and assign clear ownership and timelines for remediation.
  • Establish a management system and agent asset register, ensuring auditability and traceability of all agentic AI, and regularly update and maintain a shadow agent detection mechanism.

Client Alert 2026-094
