Safety in the AIGC Era: Using Technology to Create a More Trustworthy World

Artificial Intelligence Generated Content (AIGC) technology is expected to trigger a new productivity revolution, but it has also raised safety concerns. This article analyzes the safety issues surrounding AIGC and proposes countermeasures at three levels: technology, application, and regulation.

Keywords: Artificial Intelligence Generated Content, AI Safety

From Stable Diffusion and ChatGPT to Midjourney V5 and GPT-4, the past year has seen generative artificial intelligence sweep across the globe. Every strikingly realistic generated image and every remarkably human-like generated text heralds the arrival of the AIGC era. Even as people marvel at this revolution in productivity, they cannot help but worry: rapidly developing AIGC technology is shaking the existing fabric of social trust. How should we coexist with AIGC and break through safely into the AIGC era?

We Are Entering the AIGC Era

In the not-so-distant future, a typical day for an ordinary person might look like this (see Figure 1): awakened at 7:00 AM by an AI-composed alarm; at 8:00 AM, listening to an AI-synthesized voice reading AI-written news; at 10:00 AM, starting work from AI-generated creative sketches; at 3:00 PM, having AI distill the key points of a meeting into summaries, reports, and even slide decks; at 6:00 PM, relaxing in an immersive AI-generated scene; at 10:00 PM, chatting about life with one's dedicated AI digital companion in the quiet of the night… What we see, think, imagine, and interact with in the future may all be generated by AI.

Figure 1: A Day in the Life of an Ordinary Person in the AIGC Era

The rapid emergence of so many applications rests on fast-developing AIGC technology. As shown in Figure 2, it spans text generation, from the Transformer network in 2017 to the recent ChatGPT services, with text generation models even exhibiting "emergent" capabilities; image generation, evolving from CycleGAN and StyleGAN in 2017 to recently popular diffusion models such as DALL-E 2; and video generation, progressing from DeepFaceLab, initially designed for deepfake face synthesis, to Gen-2, the latest general-purpose video generation service. In terms of intelligence, visual generation has yet to reach the level of text generation, but the emergence of intelligence in visual modalities may not be far off.

Figure 2: Representative Works of AIGC Technology

In just a few years, generative artificial intelligence has moved to center stage, and industry research institutions have made bold predictions about the AIGC era. Gartner predicts that by 2030, synthetic data will become the primary source of new AI training data; in other words, most new data around us will be machine-generated, with only a small portion produced by humans. The "China AIGC Industry Panorama Report" projects that the AIGC market will reach 17 billion yuan in 2023 and grow into a trillion-yuan market by 2030. Some have bluntly declared that "whoever masters AIGC will rule the world," or even that "those who do not understand AIGC will be left behind by the times." Exaggerated as these claims may be, all signs indicate that we are entering the AIGC era.

AIGC Safety Determines the Boundaries of AIGC Applications

Although the positive effects of AIGC are widely recognized, its negative impacts have also drawn attention. The "Statement on AI Risk," signed by figures such as Yoshua Bengio and Elon Musk, places the risks artificial intelligence poses to humanity alongside those of nuclear war. Just as nuclear technology is subject to strict controls to prevent misuse, the emerging industries spawned by AIGC are destined to require strong regulation. It can be said that the safety of AIGC determines the boundaries of its applications.

Compared with previous technologies, AIGC presents greater safety risks. Malicious actors can use generative artificial intelligence to produce forgeries that are far harder to guard against: identities can be faked through face generation to bypass access control; public figures' faces can be swapped into pornographic videos; expressions can be manipulated to make anyone appear to say anything; and videos can be generated from scratch to fabricate so-called "truths."

In the face of such a crisis of trust, many scholars have expressed concern about the development of artificial intelligence. In March 2023, tens of thousands of people signed an open letter calling for a pause of at least six months in the training of AI systems more powerful than GPT-4; Turing Award winner Geoffrey Hinton has likewise warned of the existential threat AI poses, suggesting that AI could become smarter than humans and come to control them; and, citing risks of data leakage and research integrity, enterprises such as Samsung and SoftBank and academic institutions such as Hong Kong Baptist University have prohibited employees and students from using ChatGPT-like services in their work and studies.

AIGC safety issues can be divided into two aspects. The first is how to curb the misuse of AIGC applications and address the harm AIGC poses to society, politics, finance, and education, which is key to maintaining national and social security. The second is how to protect the outputs of AIGC industry applications, ensuring orderly competition and safeguarding the legitimate rights and interests attached to AI-generated works (such as games, literary works, multimedia content, and digital humans), which is crucial to the industry's survival. It can be said that the AIGC safety market is a large-scale, high-threshold blue ocean market.

To Address AIGC Safety Risks, We Must Break Through from Technology, Application, and Regulatory Levels

The road into the AIGC era is strewn with thorny safety issues, yet beyond them lies the future people aspire to. To break through successfully and head off this series of potential safety risks, preparation at the technical, application, and regulatory levels is all indispensable.

Technical Level: Build AIGC Detection Capabilities on Large-Model Foundations, Using Large Models to Counter Large Models

Why should detection capabilities be rebuilt on large-model foundations? We found that while existing forgery detection systems achieve over 90% accuracy on earlier forged content, their performance on images generated by large models such as Midjourney is far less satisfactory. This challenges the existing forgery detection technology system, and the problem is not confined to forgery detection: it confronts every existing content safety system. How should we respond? One feasible idea is to use large models to counter large models, with a system spanning data, algorithms, and applications. As shown in Figure 3, our team first built an automated training data generation platform capable of quickly producing TB-scale training data, establishing the data foundation; next, following a "generate-detect-counter" model self-evolution process, we constructed a 5-billion-parameter multimodal generation model foundation and a 1-billion-parameter multimodal detection model foundation; finally, for industry deployment, rapid fine-tuning yields high-precision forgery detection services across multiple industries, closing the loop on industry data. Building on this large-model system, the team has developed a series of detection models for AI-generated text, images, and videos: text detection currently generalizes across generative models and domains, with industry-specific versions for detecting generated news and scientific literature; image detection covers both established deepfake content and content produced by newer techniques such as diffusion models; and video detection has been deployed at scale in real-world environments, with precision and performance ranking among the best in the country.

Figure 3: Diagram of the Large-Model-Based AIGC Detection System
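To make the "generate-detect-counter" self-evolution loop concrete, here is a minimal, self-contained Python sketch. The Generator and Detector classes, their scoring logic, and every threshold are illustrative assumptions for exposition only, not the team's actual models or training procedure.

```python
# Toy sketch of a "generate-detect-counter" self-evolution loop.
# All classes, scores, and thresholds are hypothetical.
import random

class Generator:
    """Stand-in for the multimodal generation model foundation."""
    def __init__(self):
        self.quality = 0.5  # realism ceiling of its fakes, in [0, 1]

    def generate(self, n):
        # Each sample is a realism score; higher is harder to detect.
        return [random.uniform(0.0, self.quality) for _ in range(n)]

    def counter(self, detection_rate):
        # Counter step: the stronger the detector, the harder the generator trains.
        self.quality = min(1.0, self.quality + 0.1 * detection_rate)

class Detector:
    """Stand-in for the multimodal detection model foundation."""
    def __init__(self):
        self.threshold = 0.4  # samples below this realism score are caught

    def detect(self, samples):
        return [s < self.threshold for s in samples]

    def retrain(self, missed):
        # Detect step: push the threshold toward the hardest missed fakes.
        if missed:
            self.threshold = min(1.0, max(missed) + 0.05)

def self_evolve(rounds=5, batch=1000):
    gen, det = Generator(), Detector()
    for r in range(rounds):
        fakes = gen.generate(batch)                      # generate
        caught = det.detect(fakes)                       # detect
        rate = sum(caught) / batch
        det.retrain([s for s, c in zip(fakes, caught) if not c])
        gen.counter(rate)                                # counter
        print(f"round {r}: detection rate {rate:.2%}")

if __name__ == "__main__":
    self_evolve()
```

The point of the loop is that each side's improvement supplies the other's next round of training signal, which is how such a system can keep pace with new generation techniques without waiting for forged content to appear in the wild.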

Application Level: Make AIGC Detection Tools as Ubiquitous as AIGC Applications, Using AI to Protect AI

Developing and popularizing convenient, efficient AIGC detection tools for a range of needs is the key to making AIGC safety governance practical. We have built the RuiJian AIGC detection tool system (see Figure 4) at three levels: national security, industry security, and personal security. For national security needs, we launched the "RuiJian Safety" forgery detection product series, including audio and video forgery detection systems for public safety, fake news detection systems for media safety, and document forgery detection systems for financial security. For industry security needs, we developed a series of detection products for content generated by industry large models, covering AI-generated text, image, and video detection. For personal security needs, we built the "RuiJian AI" mini-program, which lets users check the authenticity of events, images, and videos anytime, anywhere. Popularizing AIGC detection tools will sharpen the public's ability to tell truth from falsehood, strengthen society's "safety immunity," and reduce the safety risks of AIGC applications.

Figure 4: The RuiJian AIGC Detection Tool System
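As a toy illustration of what a unified entry point behind such tools might look like, consider the short Python sketch below. The modality names, the Verdict fields, and the detector stubs are all hypothetical assumptions; the article does not disclose the actual interfaces of the RuiJian services.

```python
# Hypothetical unified dispatch for multi-modality authenticity checks.
from dataclasses import dataclass

@dataclass
class Verdict:
    is_generated: bool   # final judgment: AI-generated or not
    confidence: float    # score in [0, 1]
    detail: str          # human-readable explanation

def detect_text(content) -> Verdict:
    # Placeholder: a real service would run an AI-generated-text model here.
    return Verdict(False, 0.5, "text detector stub")

def detect_image(content) -> Verdict:
    return Verdict(False, 0.5, "image detector stub")

def detect_video(content) -> Verdict:
    return Verdict(False, 0.5, "video detector stub")

# Route each supported modality to its detector.
ROUTES = {"text": detect_text, "image": detect_image, "video": detect_video}

def check_authenticity(modality: str, content) -> Verdict:
    """Dispatch a user upload to the matching detector."""
    if modality not in ROUTES:
        raise ValueError(f"unsupported modality: {modality}")
    return ROUTES[modality](content)

if __name__ == "__main__":
    print(check_authenticity("text", "Breaking news: ..."))
```

A single entry point of this kind is what lets one mini-program serve text, image, and video checks while the heavy per-modality models evolve independently behind it.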

Regulatory Level: Utilizing the Power of Laws and Regulations to Ensure the Orderly Prosperity of the AIGC Industry

Generative artificial intelligence must operate within clear responsibilities and boundaries. In 2023, China implemented the "Regulations on the Management of Deep Synthesis of Internet Information Services" and the "Interim Measures for the Management of Generative Artificial Intelligence Services," which took effect on January 10 and August 15, respectively. These regulatory breakthroughs reflect two major trends.

The first trend is comprehensive regulatory coverage: phased, full-process regulation of AIGC-related elements, including a thorough safety assessment at the model level, filing with regulators before launch at the application level, and clear labeling of generated content at the data level.
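As a minimal sketch of the data-level labeling requirement, the snippet below embeds a machine-readable provenance tag in a generated image's metadata using the Pillow library. The tag keys and values are illustrative assumptions, not a mandated format; a real deployment would follow the applicable national standard for labeling generated content.

```python
# Sketch: attach and read an AI-generation provenance label via PNG metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_generated_image(src_path: str, dst_path: str, model_name: str) -> None:
    """Re-save an image with PNG text chunks declaring it AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")   # hypothetical tag key
    meta.add_text("generator", model_name)  # hypothetical tag key
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Return the provenance tags, or an empty dict if the image is unlabeled."""
    return dict(getattr(Image.open(path), "text", {}) or {})

# Usage (paths and model name are placeholders):
# label_generated_image("raw.png", "labeled.png", "example-model-v1")
# print(read_label("labeled.png"))
```

Metadata tags of this kind are easy to strip, which is why labeling is typically paired with visible watermarks or detection services rather than relied on alone.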

The second trend is the diversification of regulatory instruments: beyond technical safety regulation, the AIGC governance system is being further enriched. Industries are issuing their own initiatives, built on the regulatory framework, to maintain industry order. For example, Douyin released its "Platform Norms and Industry Initiatives for Artificial Intelligence Generated Content"; Adobe launched the "Content Authenticity Initiative"; and German scholars have proposed an "AI Usage Record Card" through which users proactively report their use of generated content. Going further, an international convention could be established to keep the development of AIGC safely within human control. On May 22, 2023, three co-founders of OpenAI published a signed article urging governments to consider forming an "International Atomic Energy Agency" for the AI industry to set global rules.

In the face of the rapid development of AIGC applications, AIGC safety governance must keep pace. Academia, industry, and regulators must work together to build a rapid, precise AIGC safety protection system that secures the industry, the nation, and human society at large.

Cao Juan

Senior Member of CCF. Researcher at the Institute of Computing Technology, Chinese Academy of Sciences. Founder of Zhongke Ruijian Technology Co., Ltd. Main research directions include digital content synthesis and forgery detection, and AI safety. [email protected]

Sheng Qiang

Student Member of CCF. Special Research Assistant at the Institute of Computing Technology, Chinese Academy of Sciences. Main research directions include fake news detection and large model content safety governance. [email protected]
