DeepSeek and China’s AI Regulatory Landscape: Rules, Practice and Future Prospects

Dentons Dacheng

[co-author: Ken Dai]

During the 2025 Chinese New Year, DeepSeek, a Chinese artificial intelligence (“AI”) model, garnered intense global attention and sparked heated discussion. It surpassed ChatGPT, previously in the spotlight, to top the free app download rankings on the Apple App Store in both China and the United States. This achievement shows that China continues to explore new frontiers in AI development and to reach new heights. China has long regarded AI as a core element of its national strategy. In recent years, it has promoted technological development through large-scale investment and policy support and has gradually established a globally leading regulatory framework. With the explosive growth of generative artificial intelligence (“GAI”) technology, China has become one of the world's pioneers in implementing systematic AI regulation.

As early as 2017, the State Council of China issued the New Generation Artificial Intelligence Development Plan, proposing a “three-step” plan for AI development: by 2020, China aimed to bring its overall AI technology and applications on par with the world's advanced level, with the AI industry becoming a key pillar of economic growth and the application of AI technology becoming a new way to improve people's livelihood. By 2025, China targeted significant breakthroughs in basic AI theories, with some technologies and applications reaching the world-leading level; AI was envisioned as the primary impetus for China's industrial upgrading and economic transformation, with positive headway made in building an intelligent society. By 2030, China aimed for its AI theories, technologies, and applications to generally reach the world-leading level, positioning the country as a major global AI innovation hub.

This article will analyze the current situation and trends of China’s AI regulation from multiple dimensions, including the core legislative framework, compliance requirements, law enforcement practice, and future prospects.

Core Legislative Framework

China’s AI regulatory system is founded on a multi-level framework of laws and regulations, covering aspects such as data compliance, algorithm compliance, cybersecurity, and ethical review.

  1. Data Compliance

  a. Fundamental Laws

  • Personal Information Protection Law of the People’s Republic of China (“PIPL”, effective in 2021)

  • Data Security Law of the People’s Republic of China (“DSL”, effective in 2021)

  b. Administrative Regulation

  • Regulation on Network Data Security Management (“NDSM”, effective in 2025)

  2. Cybersecurity

  a. Fundamental Law

  • Cybersecurity Law of the People’s Republic of China (“CSL”, effective in 2017)

  b. Administrative Regulation

  • Regulation on Network Data Security Management (effective in 2025) - clarifies that network data handlers providing AI-generated services should take effective measures to prevent and handle network security risks.

  3. Ethical Review

  a. Fundamental Law

  • Law of the People’s Republic of China on the Progress of Science and Technology (revised in 2022) - clarifies the national strategic status of AI technological innovation and requires strengthening the governance of scientific and technological ethics.

  b. Other Regulatory Document

  • Measures for Science and Technology Ethics Review (Trial) (effective in 2023) - applies to any scientific and technological activity involving ethical risks, including the research and development of AI technology. The R&D of algorithm models, application programs, and systems capable of mobilizing public opinion and guiding social awareness is included in the list of scientific and technological activities that require expert review.

  4. Algorithm Compliance

  a. Departmental Regulations

  • Provisions on the Security Assessment of Internet-based Information Services with Attribute of Public Opinions or Capable of Social Mobilization (“Provisions”, effective in 2018) - requires internet information service providers to carry out security assessments and submit security assessment reports to regulatory authorities.

  • Provisional Measures for the Administration of Generative Artificial Intelligence Services (“GAI Measures”, effective in 2023) - the core regulatory document; it requires service providers to fulfill obligations such as algorithm filing, security assessment, and content marking, and prohibits the generation of content that endangers national security or social stability.

  • Administrative Provisions on Deep Synthesis of Internet-based Information Services (“Deep Synthesis Provisions”, effective in 2023) - aimed at “deepfake” technology, it requires clear marking of synthetic content to prevent the spread of false information.

  • Administrative Provisions on Algorithm Recommendation for Internet Information Services (“Algorithm Recommendation Provisions”, effective in 2022) - prohibits algorithm abuse (such as inducing addiction or manipulating public opinion) and guarantees users’ rights to know and to choose.

  b. Local Regulations

  • Regulations on Promoting the Development of Artificial Intelligence Industry in Shanghai (effective in 2022)

  • Regulations on the Promotion of Artificial Intelligence Industry in the Shenzhen Special Economic Zone (effective in 2022)

  • Fujian Province Artificial Intelligence Industry Development Project Management Measures (effective in 2024)

  • Guiding Opinions on Accelerating the Development of the Artificial Intelligence Industry in Zhejiang Province (effective in 2023)

  c. Other Regulatory Document

  • Regulations on the Identification of Artificial Intelligence-Generated Synthetic Content (Draft for Comment) - requires network information service providers that meet the conditions specified in the Algorithm Recommendation Provisions, Deep Synthesis Provisions, and GAI Measures to label AI-generated and synthetic content.

To further implement the above-mentioned laws and regulations, China has also successively introduced a series of national and industry standards and specifications:

For all industries:

  • Code of Ethics for the New Generation Artificial Intelligence (effective in 2021)

  • Basic Security Requirements for Generative Artificial Intelligence Services (“Basic Security Requirements”, effective in 2024)

  • Cybersecurity Technology – Labeling Method for Content Generated by Artificial Intelligence (Draft for Comment)

For the medical industry:

  • Regulatory Rules for Internet-based Diagnosis and Treatment (Trial) (effective in 2022)

  • Guiding Principles for the Classification and Definition of AI-based Medical Software Products (effective in 2021)

  • Guiding Principles for the Review of Artificial Intelligence Medical Device Registrations (effective in 2022)

For the intelligent connected vehicles industry:

  • Good Practice for the Administration of Road Tests and Demonstrative Application of Intelligent and Connected Vehicles (Trial) (effective in 2021)

In recent years, as global competition in AI has intensified and the impact of the AI development gap has deepened, the Chinese government has maintained a positive attitude towards promoting and supporting the development of AI technology. In July 2024, the Construction Guidelines for the Comprehensive Standardization System of the National Artificial Intelligence Industry were released, proposing that by 2026 more than 50 new national and industry standards would be formulated to standardize the technical requirements for the end-to-end intelligentization of manufacturing and the AI-driven upgrading of key industries.

Compliance Requirements

China's AI regulation adheres to the principles of “inclusive and prudent, classified and graded”. Under Chinese law, AI mainly refers to “generative artificial intelligence technology,” encompassing models and related technologies capable of generating content such as text, pictures, audio, and video. The GAI Measures primarily target generative AI service providers and stipulate a series of legal obligations, including algorithm compliance, content compliance, intellectual property compliance, training corpus compliance, and data annotation compliance.

  1. Algorithm Filing and Security Assessment

According to the GAI Measures and the Algorithm Recommendation Provisions, AI services provided directly to the public within China that have “public opinion attributes or social mobilization capabilities” must have their algorithm mechanisms filed with the Cyberspace Administration of China (“CAC”). Services requiring filing include, but are not limited to, AI services with text, picture, voice, or video generation functions, such as social recommendation and news generation. AI services without public opinion attributes or social mobilization capabilities do not need to be filed. AI service enterprises that fail to complete the filing may face service suspension, removal from platforms, fines, revocation of their business license, and even criminal liability.

According to the Provisions, AI services must pass a security assessment before filing. The purpose of the security assessment is to ensure that algorithms do not endanger national security or public safety, or infringe upon users’ rights and interests. Services that fail the assessment will be prohibited from launching; those already launched must be rectified or taken offline.

  2. Content Marking

Content compliance obligations include content labeling and synthetic content labeling. According to Article 12 of the GAI Measures, providers shall label generated content such as images and videos in accordance with the Deep Synthesis Provisions. Articles 16, 17, and 18 of the Deep Synthesis Provisions stipulate the requirements for content labeling:

  1. AI service providers shall take technical measures to add identifiers that do not affect user experience to information content generated or edited using their services, and shall preserve log information in accordance with laws, administrative regulations, and relevant national provisions.

  2. When AI service providers offer deep-synthesis services that may cause public confusion or misidentification, they shall prominently label the generated or edited information content at a reasonable location or area to inform the public of the deep synthesis.

  3. No organization or individual may use technical means to delete, alter, or conceal such labels.
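As a toy illustration of the labeling obligations above, the following minimal sketch attaches both a user-visible notice and a machine-readable metadata record to generated text. All names here (the function, fields, and notice string) are our own invention, not drawn from any regulation or real SDK:

```python
import json
from datetime import datetime, timezone

VISIBLE_NOTICE = "[AI-generated]"  # the prominent, user-visible label


def label_generated_text(text: str, provider: str, model: str) -> dict:
    """Attach a visible label and a machine-readable record to generated text."""
    metadata = {
        "generated": True,  # implicit, machine-readable identifier
        "provider": provider,
        "model": model,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return {
        "content": f"{VISIBLE_NOTICE} {text}",  # explicit label precedes the content
        "metadata": metadata,
    }


record = label_generated_text("Sample output.", provider="ExampleCo", model="demo-1")
print(record["content"])  # -> [AI-generated] Sample output.
print(json.dumps(record["metadata"], indent=2))
```

A real implementation would also persist the metadata record as the log information the Deep Synthesis Provisions require providers to preserve.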

  3. Science and Technology Ethics Review

In China, entities engaged in scientific and technological activities such as AI whose research involves ethically sensitive fields, including the R&D of algorithm models, applications, and systems capable of influencing public opinion and societal awareness, should establish a science and technology ethics (review) committee and carry out scientific and technological ethics risk assessment and review in accordance with the law.

  4. Data Compliance

The data compliance obligations of AI service providers and technical supporters (entities that provide technical services through application programming interfaces (“APIs”)) include compliance obligations for training data and data annotation, obligations for the protection of personal information in input data, obligations for the compliance review of output content, and other compliance obligations (covering the intellectual property review and protection of input and output content).

  a. Training data and data annotation

According to Article 5 of the Basic Security Requirements, compliance obligations for training data processing activities include corpus source security obligations, corpus content security obligations, corpus annotation security obligations, and the establishment of compliant lexicons and question banks.
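The four obligation areas can be pictured as screening passes over a training corpus. The sketch below is a simplified illustration under our own assumptions; the blocklist, field names, and acceptance rules are invented for demonstration and are not taken from the standard:

```python
# Toy corpus screening covering the four Article 5 obligation areas:
# source security, content security, annotation security, and a lexicon.
BLOCKLIST = {"prohibited-term"}  # stand-in for a compliance lexicon


def screen_corpus(entries):
    """Keep entries with a recorded source, clean content, and a reviewed label."""
    accepted, rejected = [], []
    for entry in entries:
        source_ok = bool(entry.get("source"))  # provenance must be recorded
        content_ok = not (BLOCKLIST & set(entry["text"].lower().split()))
        label_ok = entry.get("label") in {"safe", "reviewed"}  # annotation check
        (accepted if (source_ok and content_ok and label_ok) else rejected).append(entry)
    return accepted, rejected


corpus = [
    {"text": "benign training sentence", "source": "licensed-dataset", "label": "safe"},
    {"text": "contains prohibited-term here", "source": "web-crawl", "label": "safe"},
]
ok, bad = screen_corpus(corpus)  # second entry is rejected by the lexicon check
```

A production pipeline would, of course, use curated lexicons and question banks maintained per the standard rather than a hard-coded set.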

  b. Personal information protection in input data

At the fundamental legal level, the PIPL strictly restricts AI’s processing of personal information, requiring explicit user consent. To further implement these requirements in the AI field, the GAI Measures stipulate that providers shall, in accordance with the law, fulfill their duty to protect users’ input information and usage records. They may not collect unnecessary personal information, unlawfully retain input information and usage records that can identify a user, or unlawfully provide users’ input information and usage records to others. In addition, the Deep Synthesis Provisions stipulate that when deep synthesis service providers and technical supporters offer functions for editing biometric information such as faces or voices, they shall prompt users to inform the individuals whose information is being edited in accordance with the law and obtain their separate consent.

  c. Compliance review obligations for output content

Article 6 of the Algorithm Recommendation Provisions provides that algorithm recommendation service providers should adhere to mainstream values and actively promote positive values. Article 4 of the Deep Synthesis Provisions provides that deep synthesis services should adhere to the correct political direction, public opinion orientation, and value orientation, and should be uplifting and virtuous. Article 4(1) of the GAI Measures provides that those providing and using generative AI services must comply with laws and regulations, respect social public morality and ethical standards, and uphold core socialist values. They must refrain from generating content that incites subversion of state power or overthrow of the socialist system; harms national security and interests or damages the country's image; incites national division or undermines national unity and social stability; promotes terrorism, extremism, ethnic hatred, ethnic discrimination, violence, obscenity, or pornography; or constitutes false and harmful information prohibited by laws and regulations.

Law Enforcement Practice

While AI development has brought great convenience and assistance to our lives, it has also continuously raised a series of challenges that surpass existing legal boundaries, which has prompted China to accelerate legislative improvements.

  1. Judicial Practice:

  a. Infringement of personality rights

In 2021, the Beijing Internet Court ruled on a case of personality rights infringement involving the creation of virtual characters using celebrity images generated by AI software. On April 11, 2022, this case was recognized by the Supreme People’s Court as one of the “nine typical civil cases for the protection of personality rights under the Civil Code”. In this case, the defendant operated a mobile phone accounting software application. Users could create or add “AI companions” in it and customize their names, avatars, relationships with users, and how they addressed each other. The system allowed users to set interactive content with the AI companions through a function called “training”, enabling users to have dream-like conversations and interactions with these virtual idols. The plaintiff, a public figure, believed that the software had violated his portrait rights, name rights, and general personality rights. The court ultimately supported the plaintiff’s claims. Although the specific text and images were uploaded by users, the defendant’s product design and algorithm application actually encouraged user-generated content, which directly affected the core function of the software. Therefore, the defendant was no longer regarded as a neutral technology service provider but a content service provider and was held liable for the infringement.

On April 23, 2024, the Beijing Internet Court issued the first ruling in China on the infringement of personality rights through AI-generated voices, holding that the protection of natural persons’ voice rights can extend to AI-generated voices, with “recognizability” as the premise for protection. The court determined that certain defendants had infringed the plaintiff’s voice rights by using the plaintiff’s voice in AI products without permission, and ordered them to apologize to the plaintiff and pay a total of 250,000 RMB in compensation.

On June 20, 2024, the Beijing Internet Court heard and judged two cases of “AI face-swapping” software infringement. The plaintiffs in both cases were short-video models, and the defendant was the operator of a “face-swapping” APP. Without the authorization of the plaintiffs, the defendant used the plaintiffs’ video images, replaced them with the faces of others through deep-synthesis technology, and then uploaded them to the APP as face-swapping templates for users to pay for use. The court believed that the defendant’s actions did not infringe upon the plaintiffs’ portrait rights but constituted an infringement of the plaintiffs’ personal information rights. The court ruled that the defendant should apologize to the plaintiffs and compensate for mental and economic losses.

  b. Infringement of intellectual property rights

In December 2023, the Beijing Internet Court issued a first-instance judgment in a copyright infringement dispute over AI-generated paintings, the first copyright case in China concerning AI-generated pictures. The court held that the AI-generated pictures at issue possessed “originality” and embodied original human intellectual input, and should therefore be recognized as works protected by copyright law.

The Guangzhou Internet Court has ruled on a similar case. The defendant’s website provided a paid AI image-generation function. The plaintiff found that when the defendant’s website was asked to generate images related to Ultraman, the output was similar to Ultraman images for which the plaintiff held the copyright. The judgment held that, in providing its services, the AI platform operated by the defendant had infringed the plaintiff’s rights of reproduction and adaptation in the Ultraman works at issue, and ordered the defendant to compensate the plaintiff for economic losses of 10,000 RMB (including reasonable expenses such as evidence collection costs).

On May 15, 2024, the “first case of AI-generated audio-visual work infringement in China” had its pre-trial proceedings at the Beijing Internet Court. The plaintiff found that the defendant had, without permission, fully copied and used on other social media platforms the trailer script, dubbing, and music of the plaintiff’s audio-visual work “Mountain and Sea Mirror”, which had been produced using AI software such as GPT-4 and Midjourney. The plaintiff argued that these actions infringed the rights of information network dissemination, adaptation, and attribution, constituting copyright infringement, and demanded that the defendant immediately cease the infringement, apologize, eliminate the impact, and compensate for economic losses and reasonable expenses totaling 500,000 RMB. The case is still under trial.

To date, AI-related judicial cases have mainly concerned infringement of personality rights and intellectual property rights. Notably, using others’ videos containing their portraits without permission to create templates for paid face-swapping services may also constitute an infringement of their personal information rights. In practice, however, personal information rights in such cases are intricately intertwined with other personality rights, presenting a relatively complex bundle of rights, and most such cases are filed on the grounds of personality rights infringement.

  2. Administrative Supervisory Practice:

Currently, administrative penalty cases against AI-related enterprises have mainly revolved around enterprise qualifications and consumer rights protection. To date, there are no published cases in which enterprises have been penalized over issues such as personal information protection, data security, or network security.

Future Prospects

  1. Technological development and industrial application

A series of localized AI tools, such as DeepSeek, Doubao, and Kimi, have emerged in rapid succession. DeepSeek in particular has brought algorithmic innovation: by optimizing the algorithm architecture to improve the utilization efficiency of computing power, it has challenged the traditional model that prioritizes raw computing power. China's AI is therefore likely to develop even more rapidly, and AI-related technologies have broad prospects. For example, federated learning uses cryptographic techniques to train models without exchanging raw data. In the future, it is expected to be widely applied in cross-institutional cooperation in industries such as finance and healthcare, improving model performance while preserving data privacy and security. In the medical field, for instance, different hospitals could use federated learning to share the training results of disease models, accelerating medical research on specific diseases.
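The federated learning idea mentioned above can be sketched with the classic federated averaging (FedAvg) scheme: each client trains on its own private data, and only model weights, never raw data, are shared and averaged by a server. The toy example below (pure Python, our own simplification; it omits the encryption and secure aggregation a real deployment would add) fits a one-parameter linear model across two simulated "hospitals":

```python
def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a client's private data
    (1-D linear regression y = w*x with squared loss)."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad


def fed_avg(global_w, client_datasets, rounds=50):
    """Each round: clients train locally; the server averages the weights."""
    w = global_w
    for _ in range(rounds):
        client_ws = [local_update(w, data) for data in client_datasets]
        w = sum(client_ws) / len(client_ws)  # only weights cross institutions
    return w


# Two "hospitals" hold different samples from the same underlying rule y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = fed_avg(0.0, clients)
print(round(w, 2))  # converges toward the shared truth w = 2.0
```

Neither client ever sees the other's samples, yet the averaged model recovers the relationship present in the combined data, which is the core appeal for cross-institutional cooperation.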

  2. Formulation of the Artificial Intelligence Law

With the rapid development of AI technology and its extensive application across society, comprehensive and systematic legal regulation has become an inevitable trend. Although no major fundamental AI legislation was enacted in 2024, the introduction of a series of detailed regulations and China's active participation in the formulation of international rules show that the relevant legislative work is advancing steadily. In the coming period, China may introduce the Artificial Intelligence Law (a basic law): the bill was included in the legislative plan as early as 2023 and was listed as a preparatory review project in the State Council's legislative plan in 2024.

China’s AI Legislation: Balancing Efficiency and Safety

Through an approach of “legislation first, ethical guidance, and classified governance”, China has constructed one of the most systematic AI regulatory frameworks in the world. How to strike a dynamic balance between technological innovation and risk prevention and control, however, remains the core question for future legislation and practice. The rise of a new wave of domestic AI models such as DeepSeek is likely to accelerate the legislative process for China’s Artificial Intelligence Law, and China is expected to offer a “Chinese model” that balances efficiency and safety for global AI governance.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Dentons Dacheng 2025
