
In November 2022, OpenAI launched ChatGPT, which quickly became a global sensation. According to Similarweb, ChatGPT set the record for the fastest internet application to reach 100 million users, drawing an average of about 13 million unique visitors per day[1]. As an application of artificial intelligence algorithms, ChatGPT not only carries the inherent attributes of traditional algorithms, such as complexity and opacity, but may also introduce new risks to security and privacy protection.
01
What is ChatGPT?
Defining ChatGPT
ChatGPT is a large language model optimized for dialogue; it belongs to the family of deep-learning-based artificial intelligence algorithms.
According to its developer OpenAI: “ChatGPT is a conversational AI (Artificial Intelligence) that can engage in continuous dialogue using natural language, answer questions, and challenge incorrect premises.”

In “ChatGPT,” GPT stands for Generative Pre-trained Transformer, a family of models developed by OpenAI for natural language processing (NLP) tasks, with the goal of enabling humans and computers to communicate effectively in natural language.
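To make the term concrete, the short sketch below uses GPT-2, an earlier open model in the same family, to generate a text continuation. The Hugging Face transformers library and the model choice are this article’s illustration, not anything the original text mentions; ChatGPT itself is available only as a hosted service.

```python
# A minimal sketch of what a generative pre-trained transformer does:
# given a prompt, it predicts a plausible continuation token by token.
# Uses the open GPT-2 model via Hugging Face transformers
# (assumed installed: pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Natural language processing lets computers", max_new_tokens=20)
print(result[0]["generated_text"])
```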
ChatGPT is a transitional version between GPT-3 and GPT-4. Its capabilities surpass those of GPT-3 and earlier versions largely because it adopts a training method called Reinforcement Learning from Human Feedback (RLHF). Specifically, OpenAI started from GPT-3 as the foundation and prepared a large number of questions for the system to answer automatically. For each question, GPT-3 generated multiple candidate answers, which human annotators then ranked against criteria such as readability, correctness, and neutrality. The ranking results were used to fine-tune GPT-3, which was iterated and upgraded into ChatGPT.
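The core of the RLHF step described above is a reward model trained on those human rankings. The PyTorch sketch below is a minimal, simplified illustration of that idea, not OpenAI’s implementation: the RewardModel class, the fixed-size embedding inputs, and all hyperparameters are placeholders. In the real pipeline, the reward model scores full token sequences using the language model’s own backbone, and the learned reward is then used to fine-tune the policy (for example, with the PPO algorithm).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model: maps an answer embedding to a scalar score.
    A stand-in for the LM-backed reward model used in real RLHF."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one scalar score per example

def ranking_loss(score_preferred: torch.Tensor,
                 score_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise (Bradley-Terry) loss: push the score of the answer humans
    # ranked higher above the score of the answer they ranked lower.
    return -F.logsigmoid(score_preferred - score_rejected).mean()

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: random embeddings standing in for encoded answers.
emb_preferred = torch.randn(32, 64)  # answers annotators ranked higher
emb_rejected = torch.randn(32, 64)   # answers annotators ranked lower

loss = ranking_loss(model(emb_preferred), model(emb_rejected))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```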
On one hand, ChatGPT forms a two-way interaction with users built chiefly on the delivery of massive amounts of data and information. ChatGPT uses GPT-3’s existing data as its training set and continuously fine-tunes and improves itself through RLHF. At the same time, the information ChatGPT selectively transmits continually shapes user behavior, producing high-frequency interaction and mutual shaping between individuals and the algorithmic technology. On the other hand, ChatGPT’s shaping of individual behavior also propagates its micro-level effects through different social network structures to the more macro scale of groups and society.
02
What Risks Might ChatGPT Bring?
Potential risks

1
ChatGPT and Data Security
OpenAI’s terms of use grant it broad rights over users’ inputs and outputs, including the right to add them to its training data to improve ChatGPT. OpenAI states: “As part of ongoing improvements, when you use OpenAI models through our API, we may use the data you provide to us to improve the model,” and “We will remove all personally identifiable information from the data intended to improve model performance.” However, OpenAI has not detailed how this mechanism would operate in practice.
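As a point of reference for what such a mechanism might involve, the sketch below shows a simple rule-based scrubber for personally identifiable information (PII). It is purely illustrative: the patterns, labels, and scrub_pii function are this article’s assumptions, not OpenAI’s published method, and production systems typically combine pattern matching with trained named-entity recognizers to catch PII (such as names) that regular expressions miss.

```python
import re

# Hypothetical rule-based PII scrubbing; OpenAI has not disclosed
# its actual mechanism, so these patterns are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub_pii("Contact Jane at jane.doe@example.com or +1 415-555-0123."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
# Note: the name "Jane" survives; regex rules alone cannot catch it.
```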
ChatGPT may significantly increase the risk of leaking data such as personal information. If users include personal information, business data, or trade secrets in their requests, the risk that such data leaks increases accordingly. It is on this basis that, since January 2023, Microsoft and Amazon have successively barred their employees from sending confidential company information to ChatGPT.
2
ChatGPT and Criminal Exploitation
ChatGPT may be maliciously exploited by criminals, terrorists, and other actors, introducing new criminal risks. As a large language model trained by OpenAI, ChatGPT can generate text that is hard to distinguish from human-written text on request, giving cybercriminals ready technical support. Tools like ChatGPT and GPT-3 can also help cybercriminals convincingly simulate a range of social contexts and psychologically manipulate victims into disclosing sensitive personal information. Furthermore, cybercriminals may trade on the ChatGPT name itself to run online frauds, much as earlier scams traded on Bitcoin and blockchain.
3
ChatGPT and Information Security
NewsGuard, an organization that assesses and researches the credibility of news, tested ChatGPT and found that it can distort information and generate large volumes of superficially convincing but unsourced content within seconds[2]. The way ChatGPT works can produce responses that appear highly credible on the surface yet lack any reliable basis, which can seriously mislead users who lack the judgment to evaluate them. Moreover, there are precedents in which, for political or other interested motives, internet information services combined with user profiling have subtly steered users’ thinking or behavioral preferences in directions that favor the influencing party. Given that ChatGPT has built a large user base in a short time, its capacity to shape public opinion and mobilize society has grown just as rapidly, so the potential risks it poses to information security, and even national security, cannot be ignored.
03
How Should Governance for ChatGPT and Similar Technologies Be Conducted?
Governance methods
Although ChatGPT has not yet been officially opened to users in China, it has still sparked heated discussion in the country, and the possibility of its introduction cannot be entirely ruled out. Meanwhile, competitors such as Baidu’s Wenxin Yiyan (ERNIE Bot) and Google’s Bard are developing rapidly. In light of the known and unknown cybersecurity risks that ChatGPT poses, governance pathways should be mapped out in advance within China’s existing legal framework and policy orientation.
1. Promoting the Application of Laws in the Fields of Cybersecurity, Data Security, Personal Information Protection, and Anti-Fraud
Regulatory departments such as the public security authorities should fully perform their roles in safeguarding cybersecurity, data security, and personal information protection, extending the “Cybersecurity Law,” the “Data Security Law,” the “Personal Information Protection Law,” and their supporting systems to AI algorithms like ChatGPT, and implementing the management systems for network products and services, data security, and personal information protection. In particular, for AI companies providing ChatGPT-like conversation services within China, supervision of data security and of the misuse of personal information should be strengthened, to prevent the leakage and misuse of personal information that could result from training models on real personal data.
In response to the proliferation of imitation apps in app stores and the brisk trade in ChatGPT accounts, the supporting systems of the “Anti-Telecom and Online Fraud Law” should be advanced continuously, and safety assessments of the fraud risks arising from AI algorithms like ChatGPT should be strengthened. The boundaries of each entity’s anti-fraud obligations should be delineated according to the different service and application scenarios, and the anti-fraud responsibility lists and task lists for the AI industry should be improved dynamically. Targeted efforts should focus on the key fraud problems associated with AI algorithms like ChatGPT: starting from the black- and gray-market industry chains behind AI-enabled fraud, punishment of the personal information crimes upstream of telecom fraud should be intensified, crackdowns should concentrate on those who supply personal information in bulk, and vigilance should be maintained against new scams that use ChatGPT as a hook.
2. Exploring the Implementation Path for Algorithm Recommendation and Deep Synthesis Technology Management Systems
AI algorithms like ChatGPT fall under the