广西师范大学学报(哲学社会科学版) [Journal of Guangxi Normal University (Philosophy and Social Sciences Edition)] ›› 2025, Vol. 61 ›› Issue (5): 69-78. DOI: 10.16088/j.issn.1001-6597.2025.05.008
郑志峰, 陈静
ZHENG Zhi-feng, CHEN Jing
Abstract: The labeling obligations attached to generative artificial intelligence involve the responsibilities of three parties: users, service providers, and technical supporters. Users, as the parties primarily responsible for content generation, must fulfill basic labeling obligations; service providers, as the core actors, should ensure that labels are accurate and complete; and technical supporters should use technical means to guarantee that the labeling mechanism is effectively implemented. As for the path of application, labeling obligations should be designed differentially according to the type of generated content and the application scenario: multi-tiered, differentiated labeling requirements should be constructed for purely AI-generated content, human-machine collaborative content, and suspected AI-generated content. At the same time, following the idea of risk classification, strict labeling requirements should apply to high-risk generated content, simplified labeling mechanisms to low-risk generated content, and customized labeling norms to industry-specific generated content, thereby helping generative AI technology achieve high-quality, sustainable development within a compliance framework.
CLC number: G203
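The tiered scheme described in the abstract can be read as a simple decision rule that maps a piece of content's type and risk level to a labeling requirement. The Python sketch below is purely illustrative and not taken from the article: the names (ContentType, RiskLevel, labeling_requirement) and the concrete label forms (visible notice, embedded watermark, metadata-only label) are hypothetical assumptions about how such a framework might be operationalized.

```python
from enum import Enum


class ContentType(Enum):
    PURE_AI = "purely AI-generated"
    HUMAN_AI = "human-AI collaborative"
    SUSPECTED = "suspected AI-generated"


class RiskLevel(Enum):
    HIGH = "high-risk"
    LOW = "low-risk"
    SECTORAL = "industry-specific"


def labeling_requirement(content: ContentType, risk: RiskLevel) -> str:
    """Illustrative (hypothetical) mapping from content type and risk
    level to a labeling requirement, mirroring the abstract's tiers."""
    # High-risk generated content: strict labeling, e.g. an explicit
    # user-facing notice plus an implicit machine-readable mark.
    if risk is RiskLevel.HIGH:
        return f"strict: visible notice + embedded watermark ({content.value})"
    # Industry-specific content: customized norms defined by the sector.
    if risk is RiskLevel.SECTORAL:
        return f"customized: sector-defined label ({content.value})"
    # Low-risk content: simplified mechanism, e.g. metadata-only labeling.
    return f"simplified: metadata-only label ({content.value})"


print(labeling_requirement(ContentType.PURE_AI, RiskLevel.HIGH))
```

In this reading, the content-type axis determines what the label must disclose (fully generated vs. collaborative vs. merely suspected), while the risk axis determines how strictly it must be applied; the two dimensions are orthogonal, which is one way to keep the framework both differentiated and predictable.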