This article summarizes the threats that the abuse of AI-generated content (AIGC) poses to society, highlights governance difficulties, namely the need to refine AIGC policy standards, strengthen collaborative governance, and deepen technological empowerment, and proposes countermeasures centered on improving institutional rules, advancing technological innovation, and optimizing regulatory methods.
Generative artificial intelligence and the AI-generated content (AIGC) it produces, represented by ChatGPT, have delivered significant technological dividends to human society and provided new momentum for the Fourth Industrial Revolution. They have also, however, gradually exposed a series of security risks. Since the second half of 2022, a wave of malicious AI models such as WormGPT, PoisonGPT, and EvilGPT has emerged on the dark web, posing severe new challenges to AIGC security governance and calling for proactive countermeasures.
1. Generative Artificial Intelligence Breeds New Threats from Malicious Large Models
Generative artificial intelligence has spawned a number of malicious AI large models: models controlled by criminal organizations or individuals that imitate legitimate models such as ChatGPT, are built from open-source components, are trained on harmful corpora, and are purpose-built for illegal activities such as cybercrime and fraud. This differs from the mere misuse of AI: the primary intent of these models is illegal activity itself. They operate mainly on the dark web, are harder to detect, and cause greater harm, creating the new governance challenge of "AI + crime" and endangering national security, industry innovation, and daily life.
(1) New Challenges to National Security
The abuse of AIGC presents new security challenges to national politics and military affairs. Firstly, regarding ideological security, AIGC can easily be steered by whoever controls the core technology during data feeding and algorithm training, producing data pollution and algorithmic bias, and it may become a new tool for Western countries to wage "cognitive warfare" against China. Secondly, regarding technological autonomy, hegemonic countries dominate the formulation of AIGC standards; technologically weaker nations that rush into large-scale product adoption and follow-on development face "choke point" risks from technology blockades and trade sanctions. Thirdly, regarding national defense and military security, AIGC raises the level of intelligent interaction among combat personnel, weapons, and command information systems, enabling rapid simulation and analysis of historical battles and current intelligence and improving strike precision and response speed. In 2023 the U.S. military began using AIGC to draft defense consultation reports and established Task Force Lima at the Pentagon to assess, integrate, and apply the technology.
(2) New Impacts on Industry Application Innovation
The abuse of AIGC brings new security shocks to industries and sectors. Firstly, in education and employment, the content quality of AIGC products remains uneven, producing a "bad money drives out good" effect in creative work. Using AIGC to complete coursework and academic research short-circuits the critical discernment and analysis that study requires, yielding more false information and academic junk. Secondly, in industrial transformation, most traditional industries are still digitizing slowly and lack the willingness and capability to use AIGC correctly for data collection and processing across R&D, production, and sales; blindly introducing AIGC at scale may backfire.
(3) New Threats to Production and Lifestyle
The abuse of AIGC brings new security threats to corporate operations and personal life. Firstly, regarding corporate operational security, enterprises face risks of data non-compliance, copyright infringement, and trade-secret leakage; the opacity of AIGC analysis and decision-making, the cost of training professional teams, and investment budgeting further complicate stable operations. Secondly, regarding personal usage safety, counterfeit GPT products keep appearing, and registrations and hijackings of related domain names have surged; these services frequently demand information authorization from users while delivering poor quality and frequent interruptions. AIGC can also slip past traditional safeguards such as email filters and antivirus software, generating low-cost, personalized phishing content and fraudulent advertisements.
2. Difficulties in AIGC Security Governance
(1) Imbalance in Inclusive Prudence, Policy Standards Need Refinement
AIGC is a nascent field whose development patterns are not yet clear, making it hard to strike the balance that "inclusive and prudent" policy standards require. On one hand, current AIGC security-governance policies still need improvement to fit the new business models and order of the digital age and to encourage independent innovation, resource sharing, and international cooperation in emerging industries. On the other hand, at the implementation level, judging market conditions, timing interventions, and allocating responsibility remain difficult, and institutionalized incentives and fault-tolerance mechanisms for market entities pursuing safe and trustworthy innovation are still lacking. Pilot projects for inclusive and prudent regulation are few, assessment of the positive and negative impacts of implemented or proposed regulations is insufficient, and a public service platform for technology-ethics governance has yet to be built.
(2) Insufficient Collaborative Governance Capacity, Need to Enhance Joint Efforts
Artificial intelligence accelerates and deepens cross-departmental data sharing, process re-engineering, and business collaboration, yet cross-departmental collaborative regulation still runs into information and responsibility silos, and AI legislation remains difficult. Competition among AIGC market players is fierce, with many conflicts of interest and weak willingness to cooperate; as a result, barriers to sharing data, technology, and talent are high, participation in standard-setting is low, and relevant open-source communities and technological innovation ecosystems are developing slowly.
(3) Regulatory Methods Lagging, Need to Strengthen Technological Empowerment
A regulatory system of "using technology to govern technology" has yet to be established. In ethical safety, deepfake techniques keep emerging while the generalization and robustness of detection algorithms still need improvement. In algorithm safety, key supports such as built-in algorithm-safety mechanisms, risk assessment, and full-lifecycle safety monitoring need strengthening. In data safety, data-security monitoring and early-warning technologies require innovation, and cross-border regulation of digital trade has room to improve. New regulatory methods such as off-site supervision, IoT-based sensing, and penetrating monitoring are not yet fully applied, and the intelligence level of regulation needs to rise.
3. Recommendations for AIGC Security Governance
(1) Improve Institutional Rules, Enhance Government-Enterprise Collaborative Governance Capacity
1. The government plays a guiding and norm-setting role.
Firstly, strengthen investigation, enforcement, and public education. Implement inclusive, prudent, and tiered regulation by category; monitor and investigate the illegal use and dissemination of malicious AIGC; and sanction violating accounts. Secondly, improve technical specifications and evaluation standards. Formulate national standards for AIGC pre-training and fine-tuning data, labeling, and data classification and protection, as well as usage specifications for high-risk AI technologies such as deepfakes and intelligent robocalling systems. Thirdly, improve the legal and institutional framework. Use mechanisms such as safe harbors to resolve content-liability issues, and strengthen antitrust and anti-unfair-competition enforcement. Explore optimizing China's data-storage system, build an active defense system against passive data outflows, and establish cross-border data-flow rules and whitelist mechanisms.
2. Enterprises strengthen their sense of responsibility and security awareness.
Firstly, ensure content is safe and trustworthy. Combine human and machine review to flag and remove inappropriate content, label AIGC conspicuously, and provide clear procedures for use and for withdrawing the service. Consider technical means such as timestamps, hash verification, and electronic signatures to make AIGC traceable and verifiable (a minimal sketch follows below), and actively align with unified AIGC data or metadata standards. Secondly, improve security systems and processes. Collect public data only under the principles of legality, legitimacy, and necessity; desensitize sensitive information; and set data permissions by role and level within the enterprise. Establish well-funded "bug bounty" programs. Thirdly, cooperate with security inspections by regulators. Strictly observe open-source license terms when copying, modifying, and distributing open-source software; proactively submit security assessments to regulators, complete algorithm filing and change procedures, and raise service transparency. In highly confidential, innovation-intensive sectors such as defense, aerospace, and chips, careful assessment and internal and external audits should precede any use of AIGC.
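As a minimal sketch of the traceability idea above, the Python snippet below attaches a timestamp, content hash, and signature to a piece of generated text. It uses an HMAC with a shared key as a stand-in for a true electronic signature (a production system would use asymmetric keys managed under a PKI), and all names here (`SIGNING_KEY`, `label_aigc_output`, `verify_label`) are illustrative assumptions, not a standard.

```python
import hashlib
import hmac
import json
import time

# Hypothetical key for illustration; real deployments would use an
# asymmetric key pair managed by a PKI, not a hard-coded secret.
SIGNING_KEY = b"replace-with-a-managed-key"

def label_aigc_output(text: str, model_id: str) -> dict:
    """Attach a timestamp, content hash, and signature to generated text
    so downstream platforms can trace and verify its provenance."""
    record = {
        "model_id": model_id,
        "timestamp": int(time.time()),
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

def verify_label(text: str, record: dict) -> bool:
    """Recompute the hash and signature to confirm the label is intact."""
    if hashlib.sha256(text.encode("utf-8")).hexdigest() != record["sha256"]:
        return False  # content was altered after labeling
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, record["signature"])

# Usage: label at generation time, verify at distribution time.
label = label_aigc_output("An AI-generated news summary...", "demo-model-v1")
assert verify_label("An AI-generated news summary...", label)
```

Binding the hash and timestamp into the signed payload is what makes the label tamper-evident: changing either the content or its metadata invalidates the signature.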
(2) Strengthen Technological Innovation, Improve Security Governance Technical System
1. Improve the reinforcement learning from human feedback mechanism.
Focus on optimizing reinforcement learning from human feedback (RLHF). Across the pipeline of training the language model, collecting comparison data, training the reward model, and fine-tuning the language model, reduce data costs, optimize algorithms, and improve fine-tuning strategies so that no policy can slip inappropriate content through, keeping models aligned with human needs (the sketch below illustrates the reward-model step). Automating the invocation of model knowledge can further reduce dependence on large-scale, high-quality manual labeling.
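As a minimal illustration of the reward-model step in that pipeline, the sketch below computes the standard pairwise preference loss used to train RLHF reward models: minimizing it pushes the score of the human-preferred response above the rejected one. The function names are ours, and a real system would optimize this loss over batches of human comparison data with a learned neural scorer.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    # Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)).
    # Minimizing it widens the reward gap in favor of the response
    # that human labelers preferred.
    return -math.log(sigmoid(r_chosen - r_rejected))

# The loss falls as the reward gap moves in the preferred direction:
for gap in (-2.0, 0.0, 2.0):
    print(f"reward gap {gap:+.1f} -> loss {reward_model_loss(gap, 0.0):.3f}")
```

The fine-tuning stage then maximizes this learned reward (typically with a penalty that keeps the model close to its original behavior), which is where the content-safety constraints described above take effect.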
2. Strengthen research and application of model security technologies.
Firstly, deploy data- and model-security defenses. Optimize robust training algorithms to resist data poisoning, apply techniques such as truncation obfuscation and differential privacy to conceal private information held by the model (see the sketch below), and use model watermarking and fingerprinting to protect proprietary model rights. Secondly, secure all interfaces by design. Use identity verification, log monitoring, and gateways to harden API and web interfaces; transmit data over VPNs and other encrypted channels; deploy DDoS protection; and use sniffers to detect security issues and trace data leaks. Thirdly, study emerging malicious LLMs. Explore AI tools that progressively automate the countering of malicious AIGC, using large models to "fight" large models.
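For the differential-privacy point above, here is a minimal sketch of the classic Laplace mechanism for releasing aggregate statistics: noise scaled to sensitivity/epsilon masks any single record's contribution. It assumes NumPy, and `dp_release` with its default parameters is an illustration, not a hardened implementation.

```python
import numpy as np

def dp_release(true_value: float, sensitivity: float = 1.0,
               epsilon: float = 0.5) -> float:
    """Release a statistic under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon; a larger epsilon means
    less noise and weaker privacy."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# e.g., publishing how many training samples mention a sensitive attribute
# without exposing whether any particular individual's record is included:
print(dp_release(1284.0))
```

A counting query like this has sensitivity 1 (one record changes the count by at most 1), so the noise scale reduces to 1/epsilon.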
(3) Optimize Regulatory Methods, Enhance Intelligent Governance and Regulatory Capacity
Firstly, improve content classification and screening mechanisms: strengthen content classification, abuse-pattern capture, human review and decision-making, and user notification, and extend support to non-English languages. Secondly, strengthen detection of generated content: improve text detection through real-time clustering and feature-library matching (a toy matcher is sketched below), and strengthen deepfake detection using physiological-signal features, GAN image artifacts, and data-driven methods. Thirdly, extend regulatory tools to more scenarios: explore intelligent regulatory methods such as digital sandboxes and privacy shields in collaboration with industry enterprises.
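As a toy illustration of the feature-library matching mentioned above, the sketch below flags text whose term-frequency profile is close (by cosine similarity) to known machine-generated samples in a library. Production detectors would use learned embeddings, real-time clustering, and far larger libraries; the names and threshold here are assumptions for illustration.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def matches_feature_library(text: str, library: list[str],
                            threshold: float = 0.8) -> bool:
    """Flag text that closely matches any known generated sample."""
    vec = Counter(text.lower().split())
    return any(
        cosine(vec, Counter(sample.lower().split())) >= threshold
        for sample in library
    )

# Usage with a tiny, made-up library of known generated text:
library = ["as an ai language model i cannot provide that information"]
print(matches_feature_library(
    "As an AI language model I cannot provide that", library, 0.6))
```

A matcher like this only catches near-duplicates of known outputs; clustering fresh content in real time, as the text suggests, is what lets new abuse patterns enter the library.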
4. Conclusion
This article has examined the security governance of AIGC abuse. It summarized the threats AIGC abuse poses to national security, industry innovation, and daily life; identified governance difficulties, namely the need to refine policy standards, strengthen collaboration, and deepen technological empowerment; and proposed countermeasures centered on institutional rules, technological innovation, and regulatory methods, with the aim of promoting the regulated application and healthy development of AIGC.
(Source: China Informatization, 2023, Issue 11. Authors: Wang Xiaodong and Li Muzi, Public Technology Service Department, National Information Center)