Journal of Guangxi Normal University (Philosophy and Social Sciences Edition) ›› 2025, Vol. 61 ›› Issue (5): 69-78. doi: 10.16088/j.issn.1001-6597.2025.05.008

• Governance Modernization Studies •

Definition of Subjects and Implementation Pathways for Generative AI Labeling Obligations

ZHENG Zhi-feng, CHEN Jing

  1. School of Civil and Commercial Law, Southwest University of Political Science and Law, Chongqing 401120, China
  • Received: 2025-02-17 Online: 2025-09-05 Published: 2025-09-18
  • About the authors: ZHENG Zhi-feng, professor and doctoral supervisor, School of Civil and Commercial Law, Southwest University of Political Science and Law; research interests: civil law, artificial intelligence law. CHEN Jing, master's degree candidate, School of Civil and Commercial Law, Southwest University of Political Science and Law; research interest: civil and commercial law.
  • Funding: National Social Science Fund of China Youth Project "Research on Personal Information Protection under the Dual Background of Artificial Intelligence and the Civil Code" (20CFX041)


Abstract: The labeling obligations of generative artificial intelligence involve tripartite responsibilities among users, service providers, and technical supporters. Users, as the parties primarily responsible for generated content, bear basic labeling obligations; service providers, as the core actors, must ensure the accuracy and completeness of labels; and technical supporters should employ technological measures to guarantee the effective implementation of labeling mechanisms. In terms of implementation pathways, labeling obligations should be differentiated by content type and application scenario: a multi-tiered, differentiated labeling framework should be established for purely AI-generated content, human-AI collaborative content, and suspected AI-generated content; and, guided by risk stratification, stringent labeling requirements should apply to high-risk generated content, simplified mechanisms to low-risk content, and customized standards to industry-specific content. This approach facilitates the high-quality, sustainable development of generative AI technology within a compliance framework.

Key words: generative artificial intelligence, risk stratification, labeling standards, obligation fulfillment

CLC number: G203
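
Read as a decision structure, the multi-tiered scheme outlined in the abstract maps a piece of content (purely AI-generated, human-AI collaborative, or suspected AI-generated) and its risk tier to a labeling requirement. The Python sketch below is purely illustrative: the ContentType and RiskTier enums, the labeling_requirement function, and the specific label texts are hypothetical assumptions introduced here, not provisions of the article or of any regulation.

```python
from enum import Enum

class ContentType(Enum):
    """Content categories distinguished in the abstract (hypothetical names)."""
    PURE_AI = "purely AI-generated"
    COLLABORATIVE = "human-AI collaborative"
    SUSPECTED = "suspected AI-generated"

class RiskTier(Enum):
    """Risk tiers under a risk-stratification approach (hypothetical names)."""
    HIGH = "high-risk"            # e.g., news-like or medical content
    LOW = "low-risk"              # e.g., casual entertainment content
    SECTOR = "sector-specific"    # content governed by industry rules

def labeling_requirement(content: ContentType, risk: RiskTier) -> str:
    """Return an illustrative labeling requirement for one piece of content.

    Hypothetical mapping of the abstract's scheme: high-risk content gets
    strict explicit + implicit (metadata) labels; sector-specific content
    defers to customized industry norms; low-risk content gets a simplified
    label, except that suspected content is still flagged for verification.
    """
    if risk is RiskTier.HIGH:
        return f"explicit on-screen label + embedded metadata ({content.value})"
    if risk is RiskTier.SECTOR:
        return f"customized industry-specific label ({content.value})"
    if content is ContentType.SUSPECTED:
        return "provisional 'suspected AI-generated' notice pending verification"
    return f"simplified implicit label ({content.value})"

# Example: a purely AI-generated news item falls in the high-risk tier.
print(labeling_requirement(ContentType.PURE_AI, RiskTier.HIGH))
```

The point of the sketch is the shape of the rule: the risk tier dominates (high-risk content always receives both explicit and implicit labels), while the content type refines the default treatment within a tier.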
