Chinese Actors Deny iQIYI AI Library Authorization: A Growing Industry Storm
In a significant development shaking the entertainment landscape, several prominent Chinese actors have denied ever authorizing iQIYI's AI library to use their likenesses, voices, and performance data. This controversy erupted after the streaming giant iQIYI announced its new "AI Artist Library" initiative, claiming over 100 artists had joined a platform intended to facilitate AI-assisted content creation. The swift and vocal denial from several prominent actors has ignited a crucial debate about performer rights, consent, and the rapidly evolving role of artificial intelligence within the film and television industry. This growing industry storm highlights the complex ethical and legal challenges presented by AI's integration into creative fields.
- iQIYI's Vision for AI in Entertainment
- The Immediate Backlash: Chinese Actors Deny iQIYI AI Library Authorization
- iQIYI's Clarification and Ongoing Concerns
- Protecting Actor Rights in the Age of AI
- The Regulatory Landscape in China
- The Future of AI in Entertainment: Collaboration or Conflict?
- Frequently Asked Questions
- Further Reading & Resources
iQIYI's Vision for AI in Entertainment
On Monday, April 20, 2026, Chinese streaming powerhouse iQIYI publicly unveiled its "AI Artist Library" (also referred to as the "AI Talent Database") during its 2026 World Conference. The company presented this initiative with considerable fanfare, asserting that more than 100 artists had already been onboarded to the platform. The stated goal of this library was to leverage authorized multimodal data to construct digital avatars, or "digital doubles," of performers. These AI-generated likenesses and voices would then be utilized in AI-assisted film and television productions, positioning the library as a scalable and compliant solution for AIGC (AI-Generated Content) creators. For a deeper dive into the broader implications of generative AI, see our article on The Future of Generative AI in Creative Industries.
iQIYI CEO Gong Yu further elaborated on the company's ambitious vision, proclaiming that AI would "unleash creativity" in film and television. He suggested that AI could enable actors to participate in a significantly higher number of productions annually—from potentially four to as many as fourteen—while simultaneously allowing them more personal time. Gong Yu also made a striking prediction that "live-action filming may one day become intangible cultural heritage," a remark that instantly fueled public outrage and widespread discussion on social media platforms like Weibo, where the phrase "iQIYI went nuts" quickly trended. The implication that human acting could become a relic of the past, preserved like an ancient artifact, deeply concerned both actors and audiences alike.
The company's platform, Nadou Pro, was showcased as a tool where users could input prompts to generate short films and utilize it for editing. iQIYI's conceptualization of the AI Artist Library was not merely about creating digital replicas for stunts or minor roles but a broader integration of AI across production workflows, aiming to streamline content creation and potentially revolutionize the industry's efficiency. This vision sparked a fierce debate about the balance between technological advancement and the preservation of human artistry, setting the stage for the intense backlash that followed.
The Immediate Backlash: Chinese Actors Deny iQIYI AI Library Authorization
The celebratory tone of iQIYI's announcement was short-lived. Almost immediately following the unveiling, a storm of denials erupted from the studios and representatives of several prominent Chinese actors. Key figures such as Zhang Ruoyun, Wang Churan, Li Yitong, and Yu Hewei issued unequivocal statements contradicting iQIYI's claims. These statements explicitly denied that they had signed any agreements authorizing the use of their likeness, voice, or performance data for iQIYI's AI Artist Library or any AI-related purposes. Zhang Ruoyun's studio, for instance, was among the first to declare that it had "never signed any AI-related authorization" and indicated that legal action was underway to address the matter urgently.
Similar denials followed from parties associated with Wang Churan, Li Yitong, and Yu Hewei, casting significant doubt on the accuracy and transparency of iQIYI's initial claims. While iQIYI also linked other well-known names like Chen Zheyuan, Zeng Shunxi, Cheng Lei, and Jiang Long to the database, statements from their representatives also denied any such agreements. This collective repudiation from a slate of popular actors triggered a wave of online scrutiny and public concern, highlighting a stark discrepancy between the platform's assertions and the reality experienced by the artists themselves. The swift and unified response from the actors demonstrated a growing awareness and assertiveness regarding their digital rights in the face of rapidly advancing AI capabilities.
The backlash intensified as fans and netizens rallied behind the actors, expressing concern that iQIYI's initiative could reduce work opportunities for human actors and diminish the value of their unique artistic contributions. The controversy quickly became a trending topic, with discussions focusing on the ethical implications of using celebrity likenesses without clear, explicit, and freely given consent. This public outcry underscored the deep cultural value placed on human creativity and the fear that technology might commodify or undermine it.
iQIYI's Clarification and Ongoing Concerns
Facing mounting pressure and widespread criticism, iQIYI quickly moved to clarify its position regarding the "AI Artist Library." The company framed the initiative not as a finalized roster of contracted AI performers, but rather as a "matchmaking infrastructure." In this revised explanation, inclusion in the library would merely signify an actor's potential willingness to explore AI-driven projects, rather than a definitive authorization for their data to be used. iQIYI emphasized that any actual participation, including the format, scope, and compensation for AI-generated content, would still necessitate case-by-case negotiation, much like traditional production workflows.
Liu Wenfeng, Senior Vice President of iQIYI, further stated that the company was "not currently licensing the likeness of actors." Instead, he clarified that they were "enabling AI creators and actors to more quickly establish connections through Nadou Pro," their new AI tool for filmmakers. He insisted that actors would retain control over how their image was used in AI-generated content, asserting that every detail, "what kind of drama, which shot—everything needs to be confirmed by the actor."
Despite these clarifications, the initial damage to iQIYI's reputation was significant, and concerns persist. Legal observers and industry experts have pointed out structural risks within such models. Even with explicit consent, the reuse of an actor's likeness and performance data in AI systems raises complex questions surrounding long-term ownership, control, and rights management. The episode has become a potent symbol of the broader tensions confronting the entertainment industry as generative AI tools transition from experimental stages to mainstream production. The incident highlighted the need for not just consent, but ongoing control and clear, legally binding frameworks to protect artists' interests.
Protecting Actor Rights in the Age of AI
The dispute over the actors' denial of iQIYI's AI library authorization underscores critical issues regarding legal and ethical protections for performers in an increasingly AI-driven world. The core of the controversy lies in an actor's inherent rights to their own image, voice, and artistic contributions, collectively known as personality rights.
Personality Rights and Data Control
In China, actors' portrait rights, voice rights, and rights related to their artistic image are protected by law. Legal experts emphasize that no individual or organization may collect, use, synthesize, or disseminate such materials without formal written authorization from the person concerned. The unauthorized use of a person's image, particularly if an AI-generated face leads the public to associate it with a specific individual, constitutes infringement of these personality rights. This extends to AI-generated voices that possess sufficient identifiability and virtual images. For more on how other nations are addressing these challenges, check out our piece on Global Perspectives on AI and Intellectual Property.
The creation of "digital doubles" or AI avatars from an actor's data inherently carries risks. Lawyers warn that once an artist's image data is used for training platform models, there are technical risks such as:
- Model fine-tuning: Subsequent adjustments to the AI model could alter the digital likeness in unintended ways, potentially leading to misrepresentation.
- Data leakage: The sensitive biometric and performance data could be compromised through cyberattacks or internal breaches, leading to widespread unauthorized use.
- Unauthorized secondary training: The AI model, once trained, could be used to generate content beyond the scope of initial consent, potentially creating content entirely outside the actor's control or in contexts they deem inappropriate.
This means an artist's digital assets could be reused or manipulated in ways they never agreed to, leading to a loss of control over their own professional image and potentially their personal reputation. The challenge lies in drafting contracts and implementing technologies that can genuinely ensure ongoing consent and control over these digital assets, moving beyond one-time permissions to continuous oversight.
Industry-Wide Implications of AI-Generated Content
The controversy has far-reaching implications for the entire entertainment industry. As generative AI tools become more sophisticated, platforms are eager to build the infrastructure for digital performers, viewing it as a path to greater efficiency and expanded content creation. However, without clear standards around consent, compensation, and governance, these efforts risk clashing with the very talent ecosystem they depend on. This tension is not unique to China, as similar debates are unfolding in Hollywood and other major entertainment hubs.
The Actors Committee of the China Federation of Radio and Television Associations, a national organization dedicated to protecting actors' legal rights, has been vocal in condemning the unauthorized use of AI technologies. They have specifically highlighted practices such as face-swapping, voice cloning, and the unauthorized use of actors' images and audio for AI model training. The committee stressed that AI-generated content linked to specific actors—even if labeled "non-commercial," "for public welfare," or "personal fan-made content"—could still constitute infringement. They urged short-video, livestreaming, and film distribution platforms to enhance content review, establish robust authorization verification mechanisms, and promptly remove infringing content.
Furthermore, the rise of AI actors presents a significant threat to employment within the industry. Critics worry that AI could displace human actors, particularly extras and voice actors, drive down wages, and make it harder for new talent to gain a foothold in the industry. The economic impact on the creative workforce is a crucial aspect of this debate, going beyond individual rights to encompass broader labor market dynamics and the future sustainability of acting as a profession.
The Regulatory Landscape in China
China's regulatory environment is actively attempting to catch up with the rapid advancements in AI technology, especially concerning personal rights and intellectual property. The iQIYI controversy erupted just as regulators were taking fresh action on these concerns, lending the dispute added weight in the policy debate.
Draft Regulations on AI Copyright Infringement
On April 3, 2026, the Cyberspace Administration of China released draft regulations titled "Administrative Measures for Digital Virtual Human Information Services." These draft regulations, open for public commentary until May 6, are a direct response to increasing reports from actors, social media influencers, and ordinary citizens whose likenesses have allegedly been "stolen" for use in AI-generated short dramas. These measures aim to provide a comprehensive legal framework for the ethical development and deployment of virtual human technologies.
Key provisions of these draft measures include:
- Mandatory Consent: Companies must obtain explicit and informed consent from individuals whose images they intend to use. For minors, parental or guardian consent is a strict requirement.
- Right to Withdraw Consent: If an individual withdraws consent at any point, companies are obligated to promptly delete any related personal information used for virtual human creation, ensuring ongoing data control.
- Respect for Rights: The draft explicitly requires companies to respect the legal, portrait, and reputational rights of individuals, strictly forbidding caricatures, defamation, or any form of disparagement using virtual images.
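Purely as an illustration of how a platform might operationalize the first two provisions above, the following Python sketch models a minimal consent registry. All class and method names here are hypothetical (they come from neither the draft measures nor iQIYI): consent is recorded with an explicit scope, minors require guardian approval, and withdrawal immediately blocks further use and deletes stored data.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Hypothetical record of written authorization for a digital double."""
    subject_id: str
    scopes: set          # e.g. {"likeness", "voice"}
    is_minor: bool = False
    guardian_approved: bool = False
    withdrawn: bool = False

class VirtualHumanRegistry:
    """Toy registry enforcing consent rules like those in the draft measures."""

    def __init__(self):
        self._records = {}   # subject_id -> ConsentRecord
        self._assets = {}    # subject_id -> list of stored data items

    def register(self, record: ConsentRecord) -> None:
        # Mandatory consent: minors additionally need guardian approval.
        if record.is_minor and not record.guardian_approved:
            raise PermissionError("guardian consent required for minors")
        self._records[record.subject_id] = record
        self._assets.setdefault(record.subject_id, [])

    def may_use(self, subject_id: str, scope: str) -> bool:
        # Use is allowed only within the authorized scope, and only
        # while consent has not been withdrawn.
        rec = self._records.get(subject_id)
        return rec is not None and not rec.withdrawn and scope in rec.scopes

    def withdraw(self, subject_id: str) -> None:
        # Right to withdraw: promptly delete related personal data.
        rec = self._records.get(subject_id)
        if rec:
            rec.withdrawn = True
            self._assets[subject_id] = []

registry = VirtualHumanRegistry()
registry.register(ConsentRecord("performer_001", scopes={"likeness", "voice"}))
```

A real system would of course need auditable written agreements, secure storage, and verified deletion; the point of the sketch is only that scope-limited, revocable consent is straightforward to represent as data rather than a one-time checkbox.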
This legislative effort signals a clear intent from Chinese authorities to regulate the burgeoning field of AI-generated content and protect individual rights. It also provides a legal framework that could empower actors and other individuals to challenge unauthorized uses of their digital likenesses more effectively, fostering a more secure digital environment for creators. To understand how Chinese regulators approach emerging technologies, read China's Stance on Emerging Tech Regulation.
Precedent from Legal Cases
Beyond the upcoming regulations, China has already seen legal precedents that affirm the protection of personality rights in the context of AI. Court cases have ruled in favor of individuals whose voices and virtual images were misused by AI technologies without consent, setting important benchmarks for future disputes. For instance, one notable case involved a voice actor whose voice was used to produce works widely circulated on apps without permission, leading to a ruling that affirmed personality rights extending to AI-generated voices with sufficient identifiability. This established that a voice, even if synthesized by AI, remains an extension of a person's identity under the law.
Another significant case saw an actress succeed in court against companies misusing her images via AI face-swapping in a short drama. The court recognized the infringement of her portrait rights, emphasizing that even seemingly minor or non-commercial misuse can have significant personal and professional repercussions. These rulings demonstrate the judiciary's firm stance on safeguarding personal rights against AI infringement and serve as a strong warning to entities seeking to exploit personal data without proper authorization. Such legal backing reinforces the position of actors and their representative bodies, giving them stronger grounds to pursue legal action against companies or individuals who violate their rights in the digital realm. The convergence of industry advocacy and emerging legislation suggests a tightening regulatory landscape for AI in entertainment, pushing for more responsible innovation.
The Future of AI in Entertainment: Collaboration or Conflict?
The controversy over the actors' denial of iQIYI's AI library authorization highlights a pivotal moment for the entertainment industry globally. The rapid advancement of generative AI presents both incredible opportunities for creative innovation and significant challenges regarding ethical practices, intellectual property, and human labor. This incident serves as a crucial wake-up call, emphasizing that technological progress must be balanced with robust protections for human artists.
The iQIYI incident underscores the urgent need for clear, industry-wide standards and robust legal frameworks that define how AI can ethically and legally interact with human talent. Without such guidelines, the entertainment sector risks alienating the very artists whose creativity forms its foundation and jeopardizing the trust essential for a thriving creative ecosystem. This requires a collaborative effort involving technology developers, content creators, legal experts, and governmental bodies.
Moving forward, the conversation needs to shift towards models of collaboration that genuinely respect and compensate human creativity, rather than seeking to replace it without consent. This includes:
- Transparent Consent Mechanisms: Developing clear, easily understandable, and specific agreements for the use of an actor's likeness, voice, and performance data for AI training and content generation, ensuring artists fully comprehend the scope of authorization.
- Equitable Compensation Models: Establishing new financial frameworks that ensure actors are fairly compensated for the ongoing use of their digital likenesses, potentially including royalties or residuals for AI-generated works, akin to traditional intellectual property rights.
- Actor Control and Veto Power: Implementing mechanisms that allow actors to review, approve, or reject specific uses of their AI-generated likenesses in productions, thereby maintaining artistic control over their digital representations.
- Protection Against Misuse: Instituting strong legal recourse and technological safeguards to prevent the unauthorized alteration, deepfake creation, or use of digital doubles in inappropriate or reputation-damaging contexts.
The pushback from Chinese actors, coupled with the emerging draft regulations and judicial precedents, signals a collective determination to ensure that technological progress does not come at the expense of human dignity, rights, and livelihoods. The future of AI in entertainment will likely be shaped by how effectively stakeholders—technology companies, production studios, artists, and regulators—can navigate these complex issues to forge a path that benefits all, fostering an environment where innovation and artistic integrity coexist.
Frequently Asked Questions
Q: What is iQIYI's AI Artist Library?
A: iQIYI's AI Artist Library is an initiative by the streaming giant iQIYI to create a database of digital avatars or "digital doubles" of performers. These AI-generated likenesses, voices, and performance data are intended to be used in AI-assisted film and television productions to streamline content creation and integrate AI into production workflows.
Q: Why are Chinese actors denying authorization for iQIYI's AI library?
A: Prominent Chinese actors and their studios have publicly denied signing any agreements authorizing the use of their likeness, voice, or performance data for iQIYI's AI Artist Library. Their denials stem from concerns over their personality rights, the loss of control over their digital images, and the potential for unauthorized use or manipulation of their digital likenesses.
Q: What are the legal implications of using an actor's likeness in AI without consent in China?
A: In China, actors' portrait rights, voice rights, and rights related to their artistic image are protected by law. Unauthorized collection, use, synthesis, or dissemination of such materials for AI model training or content generation without explicit written consent constitutes an infringement of these personality rights. Recent draft regulations and court precedents reinforce these protections.