Analyzing Social Concept Differentiation Under Generative AI

Author Introduction

Liang Yucheng, Professor and PhD Supervisor at the School of Sociology and Anthropology, Sun Yat-sen University, Distinguished Professor of the Changjiang Scholars Program.

Ma Yukun (Corresponding Author), PhD student at the School of Sociology and Anthropology, Sun Yat-sen University, Email: [email protected]

1. Differentiation of Social Concepts in the Digital Age: “Multicentralization”

The emergence of a digital society has changed the patterns of social connection between individuals. Since any individual can connect with any other through the Internet, the main arena of social concept differentiation has shifted from offline spaces to online spaces, giving rise to a series of new phenomena.

Unlike offline interaction, communication on social media is mediated by digital accounts. These accounts constitute the interactants' digital avatars; all traces of interaction behavior, together with the way users configure their online images, reflect their psychological personalities and jointly form a digital persona. Through online interaction, individuals reconstruct parts of their personalities in online spaces, which in turn makes algorithmic analysis of digital personas possible.

Meanwhile, the flow of information and social interaction are closely intertwined. People update their views on specific events based on new information obtained while interacting with others; the process of social concept differentiation is thus the process of information spreading through social relational networks as individuals adjust their views. Because obtaining information carries time and cognitive costs, individuals can access only a limited amount of information within a given timeframe, and they tend to accept information closer to their own views. This produces the phenomenon of "information cocoons," in which individuals grasp only partial facts about society, forming a "multicentralized" pattern of social concepts.
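The paper does not give a formal model for this selective-acceptance dynamic; a common formalization in the opinion-dynamics literature is the bounded-confidence model, in which agents only move toward views that already lie within a tolerance of their own. The sketch below, with illustrative parameters (`epsilon`, `mu`, population size), shows how this mechanism alone produces a "multicentralized" pattern of opinion clusters:

```python
import random

def bounded_confidence_step(opinions, epsilon=0.2, mu=0.5):
    """One interaction: a random pair exchanges views only when their
    opinions lie within the confidence bound epsilon of each other."""
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < epsilon:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift   # each agent moves toward the other
        opinions[j] -= shift

random.seed(0)
ops = [random.random() for _ in range(100)]   # 100 agents, opinions in [0, 1]
for _ in range(20_000):
    bounded_confidence_step(ops)

# opinions settle into a small number of clusters ("conceptual subgroups")
clusters = sorted({round(o, 1) for o in ops})
```

With a tolerance of 0.2 on a [0, 1] opinion scale, the population typically fragments into a few internally homogeneous clusters that no longer interact, mirroring the "information cocoon" phenomenon described above.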

Thus, once algorithms project different conceptual subgroups into a high-dimensional space, targeted conceptual interventions become possible. Promoters can use social bots to push their value perspectives to different "micro-targeted" subgroups, thereby influencing other users. The generation and dissemination of social concepts have thereby become a social-engineering technique, making the intervention in and governance of social concepts in online spaces possible.

If social concepts are allowed to differentiate in a multicentralized manner, different conceptual subgroups will find it difficult to communicate with one another, reinforcing themselves within "information cocoons" and driving views toward extremes. To avoid this, "de-multicentralization" governance of social concepts is necessary. Since the 18th National Congress of the Communist Party of China, the country has been accelerating the construction of a strong network nation and a digital China, "strengthening the construction of a whole-media communication system and shaping a new pattern of mainstream public opinion; improving the comprehensive governance system of the Internet and promoting the formation of a good online ecological environment." To this end, the government has taken a series of governance measures to curb the multicentralizing tendency of social concept differentiation and prevent the extremization of different conceptual subgroups.

2. New Impacts of Generative AI on Social Concept Differentiation

The emergence of Large Language Models (LLMs) may change the pattern of online social concept differentiation. After OpenAI released ChatGPT at the end of November 2022, the development and application of generative AI represented by LLMs entered a stage of rapid iterative evolution, producing increasingly broad social impacts. LLMs can generate multi-modal content from natural language instructions and have wide-ranging application prospects.

It is difficult for LLM outputs to achieve pure "value neutrality." An LLM's values can be shaped in several ways: prompting it to produce responses aligned with specified views, fine-tuning it on text data containing specific views, or aligning its intrinsic values through reinforcement learning. LLMs are trained on massive amounts of data, and their internal parameters encode the latent structures of that data as a highly compressed form of knowledge. For online groups active in digital spaces, using LLMs requires no special technical threshold; interaction is more natural and knowledge acquisition more convenient. However, LLMs may output incorrect information and produce hallucinations, which raises reliability issues.

The process of interacting with LLMs can also subtly influence users. The professional tone and technical provenance of LLMs make them more likely to be perceived as "trustworthy entities," and people may adjust their views in the course of interaction. Model-generated content may further become the users' own knowledge and be disseminated to others, and if LLM outputs contain biases, long-term use may lead individuals to internalize them. All these factors hinder the integration of social concepts and may further lead to the differentiation and polarization of views.

To address the ideological challenges posed by LLMs, the "Interim Measures for the Management of Generative AI Services," which took effect on August 15, 2023, stipulate that generative AI services must adhere to core socialist values, must not generate content that harms national security, and must prevent the production of discriminatory content throughout model development. Against the backdrop of the country's "de-multicentralization" social governance, this measure helps unify the conceptual tendencies of LLMs, mitigate the conceptual differences between different LLM products, and reduce the possibility of LLMs generating non-mainstream ideological content, thereby helping to maintain social consensus and promote the goal of social integration.

3. Exploring the Impact of LLMs on Social Concept Differentiation through ABM

(1) Research Design

Studying the impact of LLMs on social concept differentiation must take into account their popularity, the conceptual differences among different models, and the already-formed patterns of social concept differentiation. Because LLMs are still developing, it is difficult to collect empirical data with which to test theory. This paper therefore adopts Agent-Based Modeling (ABM), a method of computational sociology, for an exploratory analysis of these issues. ABM explores social processes and mechanisms by setting up, in a computer, multiple autonomous agents that act according to specified rules and interact over many rounds of iterative computation.

This paper simulates the process by which generative AI affects social concept differentiation using the cultural diffusion model proposed by Robert Axelrod. Axelrod's model reveals how local interactions among agents produce overall cultural differentiation. Agents randomly exchange conceptual elements with their "neighbors"; the closer the cultures of two agents, the more likely they are to homogenize during interaction, further converging their cultural representations, with different cultural subgroups ultimately forming at the global level. When agents reach complete consensus or complete disagreement, interaction ceases. This paper models LLMs as agents that exert unidirectional influence on other agents.
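The mechanics just described can be sketched in code. The following is a minimal version of Axelrod's model with an added LLM agent that influences others one-way but is never influenced back; the parameters (`F` features, `Q` traits per feature, lattice size `N`, `llm_prob`, and the LLM's fixed value profile) are illustrative assumptions, not the paper's actual settings:

```python
import random

F, Q = 5, 5            # F cultural features, each taking one of Q traits
N = 10                 # agents live on an N x N lattice

def similarity(a, b):
    """Fraction of features on which two cultures agree."""
    return sum(x == y for x, y in zip(a, b)) / F

def axelrod_step(grid, llm_culture=None, llm_prob=0.1):
    """One interaction: a random agent meets a lattice neighbor, or
    (with probability llm_prob) the LLM, which is never influenced back."""
    x, y = random.randrange(N), random.randrange(N)
    agent = grid[x][y]
    if llm_culture is not None and random.random() < llm_prob:
        partner = llm_culture
    else:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        partner = grid[(x + dx) % N][(y + dy) % N]
    sim = similarity(agent, partner)
    # interact with probability equal to similarity; complete consensus
    # (sim == 1) or complete disagreement (sim == 0) means no interaction
    if 0 < sim < 1 and random.random() < sim:
        f = random.choice([k for k in range(F) if agent[k] != partner[k]])
        agent[f] = partner[f]          # copy one differing conceptual element

random.seed(1)
grid = [[[random.randrange(Q) for _ in range(F)] for _ in range(N)]
        for _ in range(N)]
llm = [0] * F                          # the LLM's fixed value profile
for _ in range(100_000):
    axelrod_step(grid, llm_culture=llm)

# distinct cultures remaining = surviving conceptual subgroups
cultures = {tuple(grid[x][y]) for x in range(N) for y in range(N)}
```

The similarity-weighted interaction probability captures the rule that closer cultures homogenize more easily, while the `0 < sim < 1` condition captures the halt at complete consensus or complete disagreement; the unidirectional LLM agent is the paper's key extension to the baseline model.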

(2) Main Research Conclusions and Discussion

Overall, the entry of LLMs into the field of social concept interaction activates communication between different conceptual subgroups, and this activation effect is stronger when the output values of different LLMs diverge widely. The smaller the conceptual differences between LLMs, the more pronounced the "de-multicentralization" effect becomes where social concepts have already formed a differentiated pattern. In addition, if LLMs can supply conceptual elements that differ from those of existing conceptual subgroups, they may alleviate "multicentralization" through alignment effects.

Since the implicit values of LLMs can be shaped through value fine-tuning, multiple different LLMs with conceptual differences may weaken social integration. Against the backdrop of the country’s “de-multicentralization” social governance, the introduction of the “Interim Measures for the Management of Generative AI Services” helps to keep domestic LLMs within a controllable range of conceptual differences, thereby guiding LLM products to promote the formation of social consensus and achieve the goal of social integration to a certain extent.

The conclusions of this paper indicate that when the conceptual differences among the LLMs popular in society are small, they help people break out of existing "local cocoons." However, over-reliance on LLMs owing to their convenience may lead society to form a "global cocoon." LLMs can generate knowledge content that is published on social media, exerting a subtle influence on social concepts; if machine-generated content is not clearly labeled, people are more likely to believe it, and if such content is in turn used as training data, it may even degrade the performance of LLMs themselves.

For LLMs to perform well, efficient algorithm design alone is not enough; high-quality training data is also required. Research indicates, however, that high-quality language data may be exhausted within the foreseeable future. Seen in this light, while LLMs may help people escape local "information cocoons" in the short term, in the long run, as LLMs intervene ever more deeply in society, what replaces those cocoons may be a larger "global cocoon." This deserves continued attention from the social sciences.

[Funding Project]: Major Project of the National Social Science Fund “Research on the Heterogeneity of Urban Community Structure in China Based on Large Survey Data and Its Grassroots Governance” (Project Number: 15ZDB172).

[Citation of Original Text]: Liang Yucheng, Ma Yukun. Analyzing Social Concept Differentiation Under Generative AI: A Simulation Study Based on Agent-Based Modeling [J], Journal of Jinan University (Social Science Edition), 2024, 34(06): 120-132.

Layout: Jin Pinxia

Review: Wang Wenjuan

Final Review: Yang Min

New Media Communication Matrix of the Journal of Jinan University
