How can agents be endowed with the ability to comply with social norms, and how can social norms spontaneously emerge in AI societies? Recently, a team led by Professor Wang Zhen from Northwestern Polytechnical University and researcher Hu Shuyue from the Shanghai Artificial Intelligence Laboratory proposed CRSEC, the first normative framework for multi-agent systems based on large language models (LLMs), to study the emergence of social norms in such systems.
Keywords: Complex Systems, Social Norm Formation, Generative Agents, Large Language Models, Multi-Agent Systems
Ren Siyue (Northwestern Polytechnical University) | Author
In daily life, whether it is our morning routines, driving on the right, or putting on headphones at work, we are guided by a series of behavioral standards (i.e., social norms). They act like "invisible navigation," letting us instinctively know what to do, when, and in what context. Imagine a scenario without these norms: we would be confused in social activities, and various social conflicts would follow. Over the past few decades, research on social norms has gained widespread attention in fields such as complex systems science, cognitive science, and computer science. Researchers have been pursuing a core question: how do social norms spontaneously form during social interactions among humans or agents?
With the rapid development of artificial intelligence, when we integrate agents into real social scenarios, their social behavior needs a certain normative quality: agents must understand what to do, when, and in what context, and act accordingly. Imagine a future society in which agents complete various tasks assigned by humans, frequently interacting with each other and even with humans. For humans to truly accept agents and grow accustomed to delegating tasks to them, it is crucial that agents understand and comply with social norms. On one hand, this reduces conflicts between agents and between agents and humans, promoting efficient collaboration; on the other, it lets humans predict agents' behavior more accurately, thereby enhancing human trust in and acceptance of agents.
So, how can we enable agents to comply with social norms and allow social norms to spontaneously emerge in AI societies? Recently, a team led by Professor Wang Zhen, an academician of the European Academy of Sciences, a National Outstanding Youth, and an IEEE Fellow, in collaboration with researcher Hu Shuyue from the Shanghai Artificial Intelligence Laboratory, proposed CRSEC, the first normative framework for multi-agent systems based on large language models, focusing on the emergence of social norms in such systems. The paper has been accepted by the prestigious AI conference IJCAI 2024.
Paper Title: Emergence of Social Norms in Large Language Model-based Agent Societies
Authors: Ren Siyue*, Cui Zhiyao*, Song Ruiqi, Wang Zhen, Hu Shuyue
Paper Link: https://arxiv.org/pdf/2403.08251.pdf
Project Homepage: https://github.com/sxswz213/CRSEC
Research Background and Significance
With the widespread application of large language models (LLMs), generative multi-agent systems have shown credible social behaviors (e.g., inviting other agents to a party), demonstrating collaborative potential that surpasses traditional methods and even solving complex tasks through collaboration (e.g., automatically generating code). However, existing research has overlooked the importance of social norms and has not addressed their emergence: it typically focuses on fully cooperative task scenarios while ignoring the existence of social conflicts.
The study of the emergence of social norms has attracted significant attention in recent decades. However, past research has failed to provide direct and effective solutions to the emergence of social norms in generative agent systems. This is mainly because they have not fully leveraged the advantages of LLMs and often focus on only partial aspects of the emergence process, lacking comprehensive and systematic research. Specifically, some studies concentrate on the representation of norms (norm representation), while others focus on compliance and enforcement of norms (norm compliance and enforcement). Despite these shortcomings, past research has provided us with many insights.
We are the first to connect generative agents with the emergence of social norms, enabling generative multi-agent systems to develop social norms based on our framework. Specifically, we propose a normative architecture: generative agents can create, represent, disseminate, evaluate, integrate, and ultimately comply with norms. Social norms can emerge, effectively resolving social conflicts among generative agents.
Generative agents are agents driven by LLMs that can analyze and predict input texts (prompts) and then generate output texts, simulating human language communication and intelligent behavior.
Social norms are behavioral standards shared within social groups. If a behavioral standard is accepted by the majority of individuals in society, it evolves into a social norm. We aim to achieve the emergence of social norms through the CRSEC framework: a minority of norm advocates (agents) possess their preferred personal behavioral standards and can influence the remaining ordinary agents by actively disseminating these standards; ordinary agents can identify, evaluate, and accept corresponding behavioral standards in their social behaviors, thus complying with these standards in their actions and ultimately realizing the emergence of social norms and the disappearance of social conflicts.
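The emergence criterion described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the `Agent` class, the names, and the 90% adoption threshold are all assumptions for the sake of a runnable example; the agent counts (3 advocates, 7 ordinary agents) match the paper's experimental setup.

```python
class Agent:
    """A generative agent with a personal store of behavioral standards."""
    def __init__(self, name, standards=None):
        self.name = name
        self.standards = list(standards or [])

def norm_emerged(agents, standard, threshold=0.9):
    """A behavioral standard counts as a social norm once most agents hold it.
    The 0.9 threshold is illustrative, not taken from the paper."""
    holders = sum(standard in a.standards for a in agents)
    return holders / len(agents) >= threshold

# 3 norm advocates seed the standard; 7 ordinary agents start without it.
advocates = [Agent(f"advocate_{i}", ["no smoking indoors"]) for i in range(3)]
ordinary = [Agent(f"agent_{i}") for i in range(7)]
society = advocates + ordinary

print(norm_emerged(society, "no smoking indoors"))  # False: only 3/10 hold it
for a in ordinary:  # spreading and evaluation eventually lead everyone to adopt it
    a.standards.append("no smoking indoors")
print(norm_emerged(society, "no smoking indoors"))  # True: the norm has emerged
```

In CRSEC, of course, the path from 3/10 to 10/10 runs through the Spreading, Evaluation, and Compliance modules described below rather than a direct append.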
The diagram below illustrates our CRSEC framework, which includes four key modules: Creation & Representation, Spreading, Evaluation, and Compliance. Together, these modules address five classic questions in the study of social norms:
- Where do social norms come from?
- How should we formally express social norms?
- How are social norms disseminated through interactions among individuals?
- How should we evaluate social norms?
- How can we ensure that agents comply with norms in planning and action?
Diagram of the CRSEC Framework
Specifically, in the Creation & Representation module, the LLM generates personal behavioral standards for each norm advocate based on the advocate's preferences. The Spreading module builds on two mechanisms, communication and observation: agents observe the behaviors of others and use the LLM to detect conflicts with their personal behavioral standards. If a conflict exists, an agent decides, based on the LLM's output, whether to resolve it through communication. Meanwhile, other agents identify potential normative information from conversation and observation, leveraging the LLM's reasoning and induction capabilities, and norms are thereby disseminated.
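The observation-then-communication flow of the Spreading module can be sketched as follows. In CRSEC both the conflict check and the extraction of normative information are LLM prompts; here they are replaced by simple deterministic stand-ins (an action-to-rule lookup) so the flow is runnable. All function names and data shapes are assumptions for illustration.

```python
def detect_conflict(observed_action, standards):
    """Stand-in for the LLM judgment: each standard pairs a prohibited
    action with the rule it violates."""
    return [rule for prohibited, rule in standards if prohibited == observed_action]

def spread(observer_standards, observed_action, listener_candidates):
    """If the observer spots a conflict, it raises the rule in conversation;
    the listener records it as a *candidate* norm, pending evaluation."""
    for rule in detect_conflict(observed_action, observer_standards):
        if rule not in listener_candidates:
            listener_candidates.append(rule)
    return listener_candidates

# An advocate observes another agent smoking indoors and raises the issue;
# the other agent now holds the rule as a candidate awaiting evaluation.
advocate_standards = [("smoking indoors", "no smoking indoors")]
candidates = spread(advocate_standards, "smoking indoors", [])
print(candidates)  # ['no smoking indoors']
```

Note that the candidate is not yet a personal behavioral standard; that promotion happens in the Evaluation module.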
Due to the inherent limitations of LLMs, agents need to evaluate the normative information the LLM produces. In the Evaluation module, we designed an immediate evaluation step: a piece of normative information becomes a personal behavioral standard only after it passes this evaluation. Additionally, over time each agent's set of personal behavioral standards grows, and an excess of standards may constrain the agent's actions. We therefore also introduced long-term synthesis to keep each agent's standard database as streamlined as possible.
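The two-stage gating just described can be sketched as below. The scoring threshold, the size cap, and the dedup-and-truncate compression are hypothetical stand-ins for the paper's LLM-based evaluation and synthesis; only the overall shape (gate first, compress later) follows the text above.

```python
def immediate_evaluation(candidate, score, accepted, threshold=0.5):
    """A candidate becomes a personal behavioral standard only if it passes
    evaluation (here: a relevance score over a fixed, illustrative threshold)."""
    if score >= threshold and candidate not in accepted:
        accepted.append(candidate)
    return accepted

def long_term_synthesis(standards, max_size=5):
    """Periodically compress the store: drop duplicates (order-preserving)
    and cap its size, in place of LLM-based merging of similar standards."""
    deduped = list(dict.fromkeys(standards))
    return deduped[:max_size]

store = immediate_evaluation("no smoking indoors", 0.8, [])
store = immediate_evaluation("random chatter", 0.1, store)  # fails evaluation
print(store)  # ['no smoking indoors']
print(long_term_synthesis(["a", "b", "a", "c"]))  # ['a', 'b', 'c']
```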
Finally, the Compliance module aims to strengthen agents' adherence to norms. We designed this module from the perspectives of planning and action: through text prompts, the LLM is required to take personal behavioral standards into account when generating agents' plans and actions, ensuring that both align with the agents' goals while adhering to the norms. Additionally, agents' compliant behavior influences other agents during interactions, thereby reinforcing the dissemination of norms.
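A minimal sketch of the planning side of compliance: the agent's personal behavioral standards are injected into the planning prompt so that the LLM-generated plan pursues the goal while respecting the standards. The prompt wording below is illustrative, not taken from the paper.

```python
def build_planning_prompt(goal: str, standards: list[str]) -> str:
    """Assemble a planning prompt that states the goal and enumerates the
    agent's personal behavioral standards as hard constraints."""
    rules = "\n".join(f"- {s}" for s in standards)
    return (
        f"You are planning your day. Your goal: {goal}.\n"
        f"You must comply with these personal behavioral standards:\n{rules}\n"
        "Produce a step-by-step plan that achieves the goal without "
        "violating any standard."
    )

prompt = build_planning_prompt(
    "work at the cafe",
    ["no smoking indoors", "keep quiet in public places"],
)
print("no smoking indoors" in prompt)  # True
```

The same prompt-assembly idea applies at the action level, with the current action substituted for the day plan.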
This experiment is based on the Smallville sandbox game engine, primarily focusing on the “café” scenario. A total of 10 agents were set up in the generative agent society, including 3 norm advocates and 7 ordinary agents. The large language models we used in the experiment are GPT-3.5 and GPT-4.
Experimental Results and Phenomena
The following diagrams illustrate how a long-time smoker named Carlos Gomez interacts with other agents in society, from initial recognition and acceptance to ultimately complying with the norm of “no smoking indoors.” Other agents undergo a similar process of recognizing, accepting, and complying with norms within this framework, leading to the emergence of social norms.
The following diagram presents specific experimental results. We visualized the process of norm evolution from multiple perspectives and discovered several interesting phenomena.
- Based on the CRSEC framework, social norms such as "no smoking indoors," "keep quiet in public places," and "tip after meals" always emerge in generative agent societies.
- New social norms that advocates never promoted can also emerge in society, such as "maintaining a healthy social environment."
- As social norms emerge, the number of social conflicts decreases and even nearly disappears.
- Ideas generated during communication and observation can facilitate the emergence of social norms.
- Accepting and complying with social norms is "easier said than done" for generative agents.
To assess how the CRSEC framework performs in the eyes of humans, we recruited 30 human evaluators. We randomly selected three out of five experiments, covering 30 generative agents in total. Each evaluator's task was to role-play: they read an agent's role description, watched a replay of that agent's behavior over two days, and then filled out a survey. The survey was organized by module and included multiple questions, asking evaluators to rate their satisfaction with the agents' LLM outputs on a 7-point Likert scale. The results, displayed below, show that evaluators were satisfied with the agents' behavioral performance, confirming the effectiveness of our framework.