Generative AI Risks and Prevention in Cyber Ideology


Xiang Zheng, Associate Professor at the Institute of Marxism, Chinese Academy of Social Sciences.
In recent years, generative artificial intelligence has drawn wide attention across society. Built on a complete program architecture of algorithms, models, and rules, generative AI can write automatically and converse in real time according to human needs, producing new text, images, audio, video, code, and other content. It can also publish statements and interact automatically on social platforms, presenting a picture of humans and “AI programs” coexisting in cyberspace. Generative AI has already been found penetrating the ideological domain, shifting the risks and challenges behind the computer and phone screens that are woven into people’s daily lives toward automation and intelligence. General Secretary Xi Jinping has emphasized: “We must strengthen the assessment and prevention of potential risks in AI development, safeguard the interests of the people and national security, and ensure that AI is safe, reliable, and controllable.” This requires us, even as we embrace the technology, to stay alert to generative AI’s penetration of the ideological field and to guard against the ideological risks it may bring.
1. The Development of Generative AI and Its Penetration into the Ideological Field

The question of whether computers can “understand” is as old as modern computing itself. Alan Turing’s “Can machines think?” met philosophical rebuttals, most notably John Searle’s “Chinese Room” thought experiment, which argues that machines possess no genuine “understanding” but merely recombine symbols mechanically according to rules. Philosophical doubts, however, have not slowed humanity’s pursuit of artificial intelligence, which has advanced rapidly over the past decade. In 2014, the conversational AI program Eugene Goostman successfully passed the “Turing Test,” becoming the first human-computer interaction product to do so; in 2022, ChatGPT was released, able to converse by understanding and learning human language and to respond in context. In recent years, a series of generative AI products have emerged that recombine content from algorithms and massive data, marking AI’s transition into the generative AI (AIGC) phase.

As the technology advances, generative AI products have left their mark across cyberspace: Microsoft’s Xiaobing (Xiaoice) once commanded a large fan base; Japan saw dedicated AI dialogue platforms such as twittbot; and some studies estimate that 9% to 15% of Twitter accounts are fake accounts controlled by generative AI. Dialogue programs built on generative AI appear frequently on platforms and websites such as Weibo, WeChat, QQ, Twitter, Facebook, and Reddit, and the technology has matured to the point of convincingly mimicking human language online. More importantly, users can employ generative AI programs to run large numbers of fake accounts that automatically copy information from real accounts, stealing profile images and avatars so that these AI-controlled fakes have verifiable backgrounds, posting histories, and other details. They actively join online conversations and comment threads with such facility at imitating human language that they are often hard to distinguish from real users. As automated programs, generative AI can “mimic human behavior to gain users’ trust, and then use it for complex activities.”

Generative AI was found to have penetrated the ideological field around 2010. Even before that, Oxford University had established the “Computational Propaganda Research Project” to study the activities and impact of AI programs in the ideological domain. In the years that followed, scholars and the media uncovered and confirmed numerous incidents of generative AI programs operating in the ideological field. Some scholars argue that, given cyberspace’s openness and its real-time sharing and dissemination, generative AI products have been misused for “political astroturfing,” advertising, spam, and other illicit activities, and “may reach the level of dominating public discourse and altering public opinion, guiding public attention towards information that is contrary to fact or false.” Technically speaking, the birth and development of generative AI is an advanced and inevitable trend, holding significant value and playing a unique role across many fields. Yet both theory and experience tell us that any new technology must be embraced with the necessary vigilance. If generative AI is misused, especially in the ideological field, it may cause serious consequences and even trigger a “ripple effect” that harms the economy, culture, and people’s lives. In this new technological environment, we must recognize that the struggle for ideological discourse power in cyberspace now involves not only competition among people but also competition between people and manipulated generative AI programs.

2. The “Objectified Knowledge Power”: Understanding the Theoretical Dimension of Generative AI Activities in the Ideological Field

Understanding and exploring the activities of generative AI in the ideological field must begin from a theoretical dimension. The classic question of whether “machines can think” offers an entry point, given the rapid development of AI technology and the apparent “thinking” characteristics that generative AI now exhibits. On this basis, we must first answer: Has today’s generative AI broken through the existing framework of human cognition about AI and achieved a genuine reconstruction? Are its various activities, including those in the ideological field, “self-aware” or “spontaneous” behaviors? Does the new content it generates through computation show that it can “think,” or that it has the potential to?

Dialectical materialism tells us that matter is primary and that any characteristics matter exhibits are inseparable from its existence. The theoretical understanding of generative AI as a material existence is therefore the key to discovering its essence and the bridge to answering these questions. From the perspective of material logic, the “machine thinking” that generative AI displays in producing new content can be described, in physical terms, as a set of program code stored on storage media, large language models, and the massive data that corresponds to them. Developers design the program code to analyze massive data and recombine it, generating new content according to an input → output logic and responding automatically in digital space. When we abstractly decompose the principles of generative AI’s production and operation from a human perspective, we arrive at the following understandings:

First, generative AI is not self-generated but designed by humans. Generative AI products are designed by developers, and although they exhibit autonomy and intelligence in form, fundamentally, they are designed and developed by their creators. This also indicates that the generation rules during the operation of generative AI are not self-generated by the machine program but are set by the designer. The principles, rules, data sample ranges for data analysis, how to filter out invalid data, and how to conduct machine learning are all specified and designed by humans, adhering to the logical operations set by the designer. Humans are undoubtedly the creators of generative AI.

Secondly, generative AI relies on input; without input, there is no output. Generative AI products follow an input → output logic: whether the input is a command from a designer or manipulator or a query from a user, everything begins with input. After receiving input data, generative AI performs complex calculations based on its algorithms, drawing on key technologies such as big data, semantic analysis, and machine learning, and then produces output. The process is technically complex, but its logic can be summarized simply as “input → (calculation) → output.” Generative AI therefore possesses no “self-awareness,” and the essence of its generation differs fundamentally from that of human conscious thought.
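The “input → (calculation) → output” logic described above can be illustrated with a deliberately minimal sketch. Everything here is hypothetical: the word-overlap retrieval step merely stands in for the real algorithmic core (large language models, semantic analysis), and all names are invented for illustration. The point it demonstrates is the one in the text: without input, the program produces nothing.

```python
# Hypothetical sketch of the "input -> (calculation) -> output" logic.
# The retrieval step stands in for the real algorithmic core; all names
# here are illustrative, not a description of any actual product.

def generate_reply(user_input: str, corpus: dict) -> str:
    """Recombine stored human-produced data into a reply; no input, no output."""
    if not user_input:
        return ""  # without input there is nothing to compute on
    # "calculation": score stored entries by crude word overlap with the input
    words = set(user_input.lower().split())
    best_key = max(corpus, key=lambda k: len(words & set(k.lower().split())))
    return corpus[best_key]

# The "massive data" is itself a digital trace of human activity:
corpus = {
    "what is ai": "AI recombines existing data according to designed rules.",
    "who designs the rules": "Human developers set the generation rules.",
}
print(generate_reply("what is ai", corpus))
```

Note that every element of the sketch, the rules, the corpus, and the matching logic, is supplied by a human designer, echoing the paragraph’s point that the generation rules are not self-generated by the machine.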

Furthermore, the data of generative AI fundamentally stems from human digital behavior and the digitization of human actions. Algorithms, computing power, and data are the three core elements of generative AI. If we liken algorithms and computing power to human IQ, then data can be compared to the knowledge input into the human brain from external sources. From the small samples at its inception to the trillion-level massive samples today, the data sources of generative AI can be traced back to human digital behaviors, including the data left by various human actions, conversations, creations, and comments in cyberspace, as well as feedback from human-machine interactions. Together, they form the data source for generative AI. It is evident that the new content generated by generative AI is not genuinely “new” but is a “new” reorganization based on the “old” digital traces of human activities, which differs from content reorganization involving human consciousness and cannot be regarded as “thinking” or “thinking activities.”

In summary, the designers and manipulators of generative AI are humans; its generation and output depend on humans; and its massive data support is inseparable from humans. Fundamentally, therefore, it belongs to humans, developed and controlled by them. As Marx said, “Nature has not produced any machines, nor has it produced locomotives, railways, telegraphs, automatic spinning machines, etc. They are products of human labor, organs that transform human will to control nature, or natural materials that realize human will in nature. They are organs created by human hands and brains; they are objectified knowledge power.” Thus, our understanding of generative AI cannot be divorced from the human element or confined to a purely technical perspective. It is in human social life that generative AI plays its role and finds its roots, and it is there that the reasons for its penetration into the ideological field can be uncovered.
3. Manifestations of Generative AI Activities in the Ideological Field

Based on existing cases, the activities of generative AI products in the ideological field primarily unfold in the following ways:

First, large volumes of statements and information containing ideological content are published to manipulate and influence social ideology. This is the primary way generative AI products are used for ideological penetration. According to the controllers’ needs, the programs are set, on the basis of registered or stolen fake accounts, to publish or push ideology-related information densely within specific time windows, forming an “information storm” that competes for ideological discourse power and aims to manipulate public opinion and control ideology. In content, the information published through generative AI carries ideological elements, some overt and some hidden, but it falls broadly into two types. One is ideologically biased statements and information, such as generating and publishing large numbers of politically charged views on hot political events in varied phrasings, or analyzing online data samples and forwarding politically slanted comments and reports to create a discourse atmosphere. The other is false information: data samples are modified and distorted to generate seemingly authentic fake news, exploiting the uncertainty of an “asymmetric information field” to sow confusion and sensationalize historical nihilism. The controllers inject these ideological contents in large quantities during specific time windows, forming a “pseudo-opinion climate” intended to manipulate people’s views and sway their political judgments.

Secondly, social mobilization is carried out by setting or crowding around ideological topics. Agenda setting is one of the important means of guiding public opinion: scholars have observed that by supplying information and arranging related topics, mass media can effectively influence which facts and opinions people attend to and the order in which they discuss them. The internet’s reach has made it an excellent vehicle for social mobilization. With generative AI’s technical support, manipulators set ideological topics and mobilize large numbers of AI-driven fake accounts to hype them, steering the direction of public opinion. In method, this mobilization on websites and social platforms unfolds mainly through participation statements and interactive exchanges. Participation statements are posts in which masses of AI fake accounts declare their participation in a specific ideology-related thematic activity, creating a false sense of engagement and exaggerating participant numbers in order to draw in more real users. Interactive exchanges are generative AI programs simulating human language according to the manipulators’ needs, distinguishing user characteristics through data algorithms, and then engaging real users on social platforms, stimulating the enthusiasm of real users, especially swing groups, to join ideology-related activities and encouraging them to act in real society.

Thirdly, the online popularity of political figures is hyped to amplify the influence of their viewpoints. With the spread of the internet and social platforms, many political figures have turned to online platforms to amplify their views, hiring so-called “water armies” to flood cyberspace with supportive statements. Unlike traditional “internet water armies,” there is now a trend toward “machine water armies” built on generative AI programs, which can reach enormous scale: posting shifts from individual human operators to a single person controlling millions of automatically posting AI fake accounts, greatly magnifying discourse power. Unlike “zombie followers,” generative AI fake accounts not only remain active over the long term while following a political figure but can also interact with real users, exchange views, and even debate opposing opinions, creating a virtual illusion of that figure’s high popularity.

Fourthly, large amounts of ineffective, irrelevant, and redundant information are posted to interfere with the dissemination of mainstream ideology. Generative AI’s degree of automation makes it a genuine “information producer” in cyberspace. Besides publishing ideology-related information, manipulators can use generative AI programs to release floods of useless, irrelevant, and redundant content, drowning out effective information, diverting public attention, or cooling down focal events, a practice commonly called “spamming.” The main methods are two. First, interfering with search engines’ intelligent ranking to impede the effective dissemination of mainstream ideological information. Internet users searching for information generally read only the first few pages of results; the higher an item ranks, the more likely it is to be read, while genuinely important information can be pushed down. Exploiting search engines’ ranking methods, manipulators use generative AI programs to publish masses of irrelevant information on social platforms and news sites, making it hard for seekers to locate what they need and thereby interfering with and blocking people’s channels for finding political information. Second, publishing irrelevant content under tags to disrupt the thematic promotion or discussion of mainstream ideology. Thematic promotion and discussion online often unfold through tags, such as “#keyword#,” which categorize topics and make them easy to search and discuss. Exploiting this setup, program designers build generative AI products that post under given tags, releasing large amounts of redundant information unrelated to the keywords, so that people searching through those tags are distracted and find it difficult to focus on retrieving and discussing a specific topic.
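One countermeasure to the tag-flooding pattern just described can be sketched from the defender’s side: accounts that post heavily under a tag but with little textual variety are candidates for review. This is only an illustrative heuristic; the function name, thresholds, and data shape are all assumptions, not a description of any platform’s actual moderation system.

```python
# Illustrative heuristic for flagging accounts that flood a "#keyword#" tag
# with near-duplicate posts. Thresholds and names are assumptions.

def flag_tag_flooders(posts, min_posts=5, max_distinct_ratio=0.3):
    """posts: list of (account_id, text) pairs collected under one tag.
    Flags accounts that post heavily but with little textual variety."""
    by_account = {}
    for account, text in posts:
        by_account.setdefault(account, []).append(text.strip().lower())
    flagged = set()
    for account, texts in by_account.items():
        if len(texts) >= min_posts:
            # ratio of distinct posts to total posts: low means templated spam
            distinct_ratio = len(set(texts)) / len(texts)
            if distinct_ratio <= max_distinct_ratio:
                flagged.add(account)  # high volume, near-duplicate content
    return flagged

posts = [("bot1", "great topic!")] * 6 + [("user1", f"comment {i}") for i in range(3)]
print(flag_tag_flooders(posts))  # -> {'bot1'}
```

A real system would combine this with posting-time analysis and account-level signals, but the sketch captures the core asymmetry the text identifies: spam campaigns depend on volume and repetition, which are themselves detectable.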
4. The Behavioral Logic of Generative AI Activities in the Ideological Field

Overall, however generative AI products exert influence in the ideological field, their fundamentally human-made nature means that the influence does not originate from the programs themselves but from their designers or manipulators. The purpose is to shape people’s ideologies, interfere with their rational judgment and choices, and ultimately affect their political life and practice.

First, inducements of interest are used to influence people’s judgments of political events and their decisions to take political action. People’s behavior in cyberspace reflects their real-world outlook: online, their attention to hot events tracks the real world and is shaped by closely related factors such as interests and concerns. More precisely, people attend to information tied to their interests and concerns, which in turn correlate with age, gender, and location; together, these determine which information social members notice and how likely it is to prompt action. An important component of generative AI programs operating in the ideological field is capturing and analyzing information about the target audience: gathering the digital traces that target subjects leave in their internet lives, especially on social media, and using data algorithms to derive personalized demographic profiles, including probable gender, educational or occupational status, age group, location, and areas of interest. By extracting the audience’s behavioral patterns, the programs roughly determine their preferences, interests, and political inclinations, then carry out communication and precision information delivery through internet channels, trapping audiences in an “information cocoon” designed to serve the ideological dissemination objectives and thereby influencing internet users’ rational judgment of related events and their decisions to act.
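The profiling mechanism described above can be made concrete with a deliberately toy example: inferring a crude interest profile by counting topic keywords in a user’s public posts. The keyword lists and function are invented for illustration; real profiling systems use large-scale machine learning over far richer behavioral data.

```python
# Toy illustration of interest profiling from digital traces.
# Keyword lists are invented; this is not any real system's method.

def infer_interests(posts, topic_keywords):
    """Count keyword hits per topic to approximate a user's interest profile."""
    profile = {topic: 0 for topic in topic_keywords}
    for post in posts:
        text = post.lower()
        for topic, keywords in topic_keywords.items():
            profile[topic] += sum(1 for kw in keywords if kw in text)
    return profile

topic_keywords = {
    "politics": ["election", "policy", "vote"],
    "sports": ["match", "league", "goal"],
}
posts = ["Who will win the election?", "New policy announced today"]
print(infer_interests(posts, topic_keywords))  # -> {'politics': 2, 'sports': 0}
```

Even this trivial counter shows why digital traces matter: a handful of posts already separates one interest area from another, and at scale such profiles enable the precision delivery the text describes.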

Secondly, the herd mentality is relied upon to strengthen psychological suggestion and weaken rational judgment. A series of social-psychology studies, notably the Asch conformity experiments, shows that the herd mentality is a common psychological tendency: conforming individuals are easily swayed by the suggestions of the surrounding crowd, abandoning their original value judgments for the majority’s opinions and exhibiting herd behavior under the influence of mass action. Following this logic, manipulators use generative AI fake accounts to post large numbers of ideologically tendentious statements in the comment sections of social platforms and websites, creating a false atmosphere that exploits the herd mentality, weakening the audience’s capacity for rational judgment and amplifying the tendency to “go with the flow.” If a particular ideological statement or orientation appears to enjoy substantial support on an online platform, other users are likely to support it as well. Under this influence, most internet users may be swept up in the overall “opinion climate,” their rational judgment diminished, and ideologies within a group may homogenize, producing what is known as group polarization. Moreover, rumors may gain traction when false data generated by generative AI products circulates in cyberspace, giving people grounds to believe them and thereby exerting ideological influence. As some scholars have put it, “If you turn it into a trend, then you have turned it into a reality.”

Thirdly, dominant opinions are constructed to suppress dissenting voices. Whether in cyberspace or the real world, multiple viewpoints generally coexist in the opinion field on any given issue, held by differing numbers of people, which gives different opinions different standing within that field. The German scholar Noelle-Neumann proposed the “spiral of silence” theory: many members of society weigh the majority opinion when expressing their own views. “Even if people clearly see that something is wrong, they will remain silent if voicing their thoughts would isolate them; conversely, public opinion, those thoughts and behaviors that people can express openly without facing isolation, becomes a universal consensus representing good taste and moral correctness.” Although cyberspace differs from the real world, and a certain “anti-spiral” phenomenon exists in which some internet users are more willing to voice their opinions online, the dominance of majority opinion undeniably persists in cyberspace, and group opinion still pressures individual opinion. Manipulators use generative AI to control large numbers of fake accounts commenting on an event so that a biased viewpoint surges to prominence and becomes the mainstream opinion, constructing a public discourse dominated by that view. Occupying the leading position, they suppress the opinions of individuals and minorities by the logic of “the majority overrides the minority,” leaving dissent disadvantaged and even exposed to attack from other opinion groups. To avoid group sanction, individuals tend to fall silent or align with the dominant opinion.
5. Risk Prevention of Online Ideology in the Context of Generative AI

General Secretary Xi Jinping has repeatedly pointed out that in the new era, network security and information work must “prevent risks and ensure safety,” and has made important instructions on “ten adherences,” namely, “Adhere to the Party’s management of the internet, adhere to the internet for the people, adhere to the path of internet governance with Chinese characteristics, adhere to the overall development and security, adhere to the requirement that positive energy is paramount, that effective management is essential, and that good usage is true skill, adhere to building a national network security barrier, adhere to leveraging the driving role of information technology, adhere to governing the internet according to law, adhere to building a community of shared future in cyberspace, and adhere to constructing a loyal, clean, and responsible internet workforce.” The important directives of General Secretary Xi Jinping are the action guidelines for building a strong network country in the new era and new journey, as well as the fundamental principles for promoting the healthy development of generative AI, regulating its use, and effectively preventing the ideological risks it may lead to.

On the cognitive level, based on the behavioral logic of generative AI, we must strengthen risk prediction and assessment. Regarding the relationship between humans and technology, Mumford once noted that technology “is merely an element of human culture, and its good or bad effects depend on how social groups utilize it. Machines themselves do not propose any demands or guarantee anything. It is the spiritual task of humans to propose demands and guarantee their realization.” Thus, preventing the potential network ideological risks posed by generative AI cannot be confined to seeking solutions within the technological domain, nor can it remain solely within the research scope of social sciences. As a topic that crosses natural and social sciences, we need to combine social thinking and technological logic to understand and study it. This requires us to facilitate the formation of specialized risk prediction and assessment teams composed of technology experts and social science scholars to better integrate and apply multi-disciplinary expertise to analyze, predict, and assess the potential network ideological risks posed by generative AI, especially in the face of new situations and problems that continuously emerge in technological development, promptly identifying and dissecting them. “We must have proactive risk prevention strategies and effective tactics for responding to and mitigating risk challenges,” ensuring that risks are knowable, perceivable, traceable, and controllable.

On the institutional level, we must balance development and regulation, strengthen policy guidance and institutional norms, and promptly improve relevant legal systems based on technological development. Legal systems are a set of behavioral norms actively designed by humans, shaping the interaction relationships among people. As a universal social existence, institutions provide a reference basis for human social behavior, marking important guidelines for actions in specific historical and social environments. General Secretary Xi Jinping pointed out, “We must strengthen legislation in important areas such as national security, technological innovation, public health, biological safety, ecological civilization, and risk prevention, and accelerate the legislative pace in areas such as digital economy, internet finance, artificial intelligence, big data, and cloud computing, striving to improve the legal system necessary for national governance and meeting the people’s growing needs for a better life.” While generative AI develops rapidly, relevant departments in our country have jointly announced and implemented the “Interim Measures for the Management of Generative AI Services,” regulating the development and management of generative AI, specifically stating that “the provision and use of generative AI services must comply with laws and regulations, and respect social ethics and moral standards,” indicating the national requirements for the development of generative AI from an institutional perspective. However, institutional construction often has a lagging nature, and there is a certain time difference in the institutional design’s response to new emerging problems and situations, especially in today’s rapidly advancing information technology landscape; it is essential to promptly improve relevant systems based on technological development. 
As the preceding analysis shows, the activities of generative AI products in the ideological field follow discoverable patterns: they mostly unfold at scale, and the impact of a single, isolated generative AI application is limited. To prevent improper use of generative AI products in the ideological field, we can therefore begin by controlling their scale. Within our manageable and controllable internet space, institutional measures can strictly regulate the entry of generative AI applications: establishing access mechanisms, rigorously managing the application processes for social media and public accounts, and employing various technical means to verify account authenticity; at the same time, strengthening management of existing social media and public accounts by requiring generative AI products and accounts to clearly label their nature, safeguarding users’ right to know.

On the technical level, we must strengthen research, enhance our technical discourse power, and guide the healthy development of generative AI. Technical problems are resolved through technological progress, and greater technical mastery means greater technical discourse power. For preventing the potential ideological risks of generative AI, technological progress is an essential foundation; technological strength lets us mitigate risks better. “Our country has vast data resources, enormous application demands, and profound market potential, which are unparalleled advantages for the development of artificial intelligence. The new generation of AI is driven by both data and knowledge; the more data it has, the smarter it becomes. We must fully utilize these advantages to promote the development of the new generation of artificial intelligence.” We should convert these favorable conditions into productivity, increase support for generative AI technology, and seize new opportunities. On this basis, we should strengthen research on technologies for identifying AI-generated fake accounts, building models and conducting analyses of their working principles and linguistic and behavioral characteristics, raising the automation of identification, analysis, and response, and promptly identifying and eliminating malicious fake accounts, so as to avoid the misuse of generative AI, keep cyberspace clear, and firmly grasp ideological discourse power in cyberspace.
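The identification research described above can be sketched in miniature. Two behavioral features the literature on bot detection commonly uses are posting regularity (automated accounts often post at near-fixed intervals) and content repetition. The scoring function, feature choices, and thresholds below are all assumptions for illustration; production systems use far richer models.

```python
# Simplified, hypothetical fake-account scorer based on two behavioral
# features: posting-interval regularity and lexical variety. Thresholds
# are illustrative assumptions, not calibrated values.

import statistics

def bot_likelihood(post_timestamps, post_texts):
    """Return a crude 0..1 score: regular intervals + repetitive text -> bot-like."""
    score = 0.0
    if len(post_timestamps) >= 3:
        gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
        # near-zero variance in posting intervals suggests automation
        if statistics.pstdev(gaps) < 1.0:
            score += 0.5
    if post_texts:
        # low lexical variety across posts suggests templated generation
        if len(set(post_texts)) / len(post_texts) < 0.5:
            score += 0.5
    return score

# A perfectly periodic, repetitive account scores high:
print(bot_likelihood([0, 60, 120, 180], ["buy now"] * 4))  # -> 1.0
```

Raising the automation of identification, as the text urges, means chaining many such signals (network structure, account age, interaction patterns) into trained models rather than hand-set rules; the sketch only shows the kind of behavioral regularity those models exploit.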

Originally published in the “Social Science Front” 2024, Issue 4, annotations omitted.

Editor: Zhang Liming

Web Editor: Chen Jiawei

