Understanding and Utilizing ChatGPT Effectively

Since the launch of ChatGPT, it has gained worldwide popularity, being discussed across various fields including software engineering, data analysis, finance, insurance, consulting, marketing, media, law, healthcare, and research. The diverse and powerful functions of ChatGPT showcase the speed and level of artificial intelligence development; however, it is essential to recognize that, like other cutting-edge technological tools, ChatGPT has limitations and potential risks.

Negative Effects of Improper Use

Yu-Che Chen, a professor at the University of Nebraska Omaha’s School of Public Administration, and Michael J. Ahn, an associate professor at the University of Massachusetts Boston’s Department of Public Policy and Public Affairs, state that ChatGPT’s strength lies in summarization and that it is not adept at providing insights or suggestions on new phenomena that lack data.

Although ChatGPT has immense potential, a heavy reliance on it may weaken human memory and critical thinking abilities regarding specific facts. People can use ChatGPT to understand complex policies and access personalized public services more efficiently; however, excessive dependence on ChatGPT could lead to the neglect of important policy information outside of its database.

ChatGPT can easily summarize the plots and famous quotes of the ten most influential works in English literature and analyze their meanings, but reading such summaries is not equivalent to reading the original works. If this simplified, standardized, and popularized “abridged version” becomes the sole option for readers, it could have various societal impacts, including changes in the way information and knowledge are exchanged and understood. In education and research, ChatGPT may exacerbate plagiarism and stifle originality, raising the questions of whether to allow or prohibit ChatGPT in exams and paper writing, and how to integrate ChatGPT into educational and research policies—issues that educators and scholars must discuss.

According to Alex C. Engler, a researcher at the Brookings Institution’s Center for Technology Innovation, many companies are eager to put generative artificial intelligence technologies like ChatGPT to commercial use, for purposes such as programming, video game environment design, and voice recognition and analysis. A key issue in the commercialization of generative AI is that neither the research institutions that build the original models nor the developers who build products on top of them may sufficiently understand or control how the final products behave. Upstream developers may not know how the original model will be used once it is adapted and integrated into larger systems, while downstream developers often overestimate the capabilities of the original model; the probability of errors and unforeseen issues arising from such collaboration may therefore increase.

When the consequences of errors are not severe, such as in product recommendation algorithms or when there is human review, the risks may be acceptable. However, when AI commercial applications involving multiple institutions extend to far-reaching socio-economic decisions (such as educational opportunities, hiring, financial services, healthcare, etc.), policymakers must examine the risks and weigh the pros and cons. At the same time, if generative AI research institutions cannot determine the risks, they should clearly state and limit dubious applications in their terms of service. If cooperation receives approval from regulatory agencies, upstream and downstream developers should proactively share information such as operational and testing results to ensure the proper use of the original model.

Another category of risks posed by generative artificial intelligence is malicious use, such as inappropriate speech and the spread of false information, as well as cyberattacks. Such risks are not new phenomena in the digital ecosystem, but the popularity of generative artificial intelligence may exacerbate the problem of malicious AI usage.

Avoiding the Continuation of Inequality by ChatGPT

Collin Bjork, an associate professor of science communication at Massey University in New Zealand, stated that content generated by AI tools like ChatGPT will change the way people write, but it will not produce breakthroughs in language or content. At present, tool-assisted writing yields homogenized, uninteresting text and may also exacerbate biases and inequalities. For example, a high school teacher in New York City reported that students in an experiment disliked the learning materials generated by ChatGPT, describing them as “biased and very dull.”

For a long time, white males using standard English have dominated fields such as journalism, law, politics, medicine, computer science, and academic research, producing far more text than other groups. Although OpenAI has not disclosed the sources of its training data, the works of white males, who represent the “standard” in English, may be the primary training corpus for large language models like ChatGPT. Of course, ChatGPT can handle multiple languages, but the issue is not what it can do, but rather its defaults. ChatGPT ships with a default writing paradigm; generating non-standard text requires explicit instructions. This issue is also evident in DALL·E 2, ChatGPT’s “sister product,” an AI image generation tool also developed by OpenAI. When asked to depict a “close-up of hands typing on a keyboard,” DALL·E 2 generated several images of white male hands; only after receiving more specific prompts did it produce images of hands of different skin tones.

Some believe that automatic text generation tools like ChatGPT can help people avoid missing academic and career opportunities because their writing does not conform to the dominant standard. Bjork argues that people should not simply accommodate existing injustices; indeed, writing itself can exacerbate them. Alice Te Punga Somerville, a professor of English language and literature at the University of British Columbia, once observed that writing faces a dilemma: it cannot escape historical and ongoing violence. Her advocacy, however, is not to abandon writing but to use it critically and creatively to resist oppression. Bjork suggests that people embrace linguistic diversity and the rich rhetorical possibilities it brings, using new tools like ChatGPT to write a more equitable future.

Debora Nozza, an assistant professor of computer science at Bocconi University in Italy, stated, “Our past research has found that when natural language processing models are asked to complete a neutral sentence with a female subject, the models often use harmful language; when the subject is from a sexual minority, the use of harmful language can be as high as 87%. In this regard, ChatGPT has improved compared to previous generations, but it still generates discriminatory content if people ask the ‘right’ questions. We must find ways to fundamentally address this issue.”

ChatGPT Is Not an Authority on Knowledge

Blayne Haggart, an associate professor of political science at Brock University in Canada, pointed out that ChatGPT, as a means of information retrieval, not only makes plagiarism and cheating on homework and exams more convenient; another crucial issue is the authenticity and reliability of the information it generates. We can ponder why certain information sources or types of knowledge are considered more credible. The credibility of journalists, scholars, and industry experts comes from their investigation of facts, provision of evidence, and possession of expertise; even when these individuals make mistakes, their professions still hold authority. While opinion pieces may not require as many citations as scientific papers, responsible authors still indicate the sources of their information and viewpoints, and readers can verify those sources.

Content produced by ChatGPT can sometimes be very similar to that produced by human authors, making it difficult for people to discern the source of the content. Thus, it is understandable that they might consider it a reliable source of information, but in reality, the working principles of the two are different. ChatGPT and similar language models learn contextual relationships from vast amounts of training data, fundamentally modeling the probabilistic correlations of word sequences, predicting the probability distribution of different statements following the input statements. For example, ChatGPT follows “cows eat” with “grass” and “humans eat” with “rice,” not because it has observed these phenomena, but because “cows eat grass” and “humans eat rice” are the most probable combinations.
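The next-word mechanism described above can be illustrated with a toy counting model. The sketch below is purely illustrative (the corpus is invented, and real systems like ChatGPT use large neural networks rather than raw counts): it tallies which word follows each two-word context and converts the tallies into a probability distribution over possible next words.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for training data (illustration only;
# real models train on vast text collections).
corpus = [
    "cows eat grass",
    "cows eat grass",
    "cows eat hay",
    "humans eat rice",
    "humans eat rice",
    "humans eat bread",
]

# Count how often each word follows each two-word context.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        follows[(words[i], words[i + 1])][words[i + 2]] += 1

def next_word_distribution(w1, w2):
    """Probability distribution over the next word, given two context words."""
    counts = follows[(w1, w2)]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_distribution("cows", "eat"))    # 'grass' ~0.67, 'hay' ~0.33
print(next_word_distribution("humans", "eat"))  # 'rice' ~0.67, 'bread' ~0.33
```

The model "knows" that grass follows "cows eat" only because that sequence is frequent in its corpus, not because it has observed cows; this is the sense in which a language model predicts probable continuations rather than reporting facts.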

“For language models, words are merely words. The content produced by newer models like ChatGPT is of such high quality that humans are led to believe the models understand what they are writing, but in truth they are merely generating the most probable sentences learned from the training data,” said Dirk Hovy, an associate professor of computer science at Bocconi University. Haggart emphasized that for tools like ChatGPT, “reality” is a reality of statistical correlation; people cannot truly verify its sources, because its output rests on statistical regularities rather than cited evidence.

Heather Yang, an assistant professor of management and technology at Bocconi University, noted that people sometimes treat ChatGPT as a peer, and in a sense, this mindset is natural. Humans are social animals, which is one reason for human prosperity; interacting socially is an instinct, even if the other party is a machine. Psychological studies show that people judge the trustworthiness of a speaker based on whether they sound confident and whether their reasoning is smooth. Because ChatGPT exhibits a confident demeanor and fluent expression, people may mistakenly believe that its generated content does not require verification.

Haggart stated that, from the perspective of the political economy of knowledge production, over-reliance on ChatGPT poses a threat to the “edifice of science” and to society’s entire information ecosystem. However coherent, fluent, and seemingly logical the content produced by ChatGPT may be, it should not be equated with knowledge that has been verified by human scientific methods. Scholars and journalists should not uncritically incorporate ChatGPT-generated text into their works without disclosure, as readers may be misled into confusing coherence with genuine “understanding.”

Improving Regulation Over Worrying Blindly

Chen and Ahn also mentioned that ChatGPT may cause disruptions in the labor market, as less human labor is required to produce the same amount of information. Not only are human workers engaged in highly repetitive and predictable tasks (such as administration and customer service) at risk of being replaced, but in the long term, professions that require higher education and human intelligence, such as writing, editing, journalism, translation, legal services, and research, may also be affected. Even in the computer industry, ChatGPT is now capable of writing code in common programming languages like Python, C++, and JavaScript, and of identifying errors in the code it has written, raising questions about the future of software developers and programmers. While ChatGPT is unlikely to completely replace humans in these roles, far fewer people may be needed to review, modify, and edit code written by ChatGPT or similar AI tools, leading to a noticeable decline in demand for labor.

According to Steven Pinker, a professor of psychology at Harvard University and a popular science writer, the question of “Will humans be replaced by artificial intelligence?” is not a properly framed question because there is no single measure of intelligence that encompasses all intellectual activities. People will see various AIs applicable to specific goals and contexts, rather than a single omniscient “magical algorithm.”

Gary Marcus, a retired professor of psychology and neuroscience at New York University, and Ernest Davis, a professor of computer science, observed in a series of experiments that content generated by ChatGPT may contain biases and discrimination, and may also be “fabricated” or misleading; ChatGPT cannot reason about the real physical world, cannot connect human thought processes with character traits, and cannot determine the sequence of events in a story. “ChatGPT is a probabilistic program; if this series of experiments were repeated, it might yield the same wrong answer, a different wrong answer, or the correct answer,” the two scholars said.
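The repeatability point made by Marcus and Davis can be sketched in a few lines. The numbers below are invented for illustration only; the point is simply that sampling from a fixed probability distribution, as generative models do, can return the correct answer on one run and a wrong one on the next.

```python
import random

# Hypothetical probabilities a model might assign to candidate answers
# for one factual question (illustrative numbers, not real model output).
answer_probs = {
    "correct answer": 0.5,
    "wrong answer A": 0.3,
    "wrong answer B": 0.2,
}

def sample_answer(rng):
    """Sample one answer according to the assigned probabilities."""
    answers = list(answer_probs)
    weights = [answer_probs[a] for a in answers]
    return rng.choices(answers, weights=weights, k=1)[0]

rng = random.Random(0)
# Repeating the "experiment" can yield a different answer each time.
print([sample_answer(rng) for _ in range(5)])
```

Even with the correct answer as the most likely outcome, a wrong answer is sampled a substantial fraction of the time, which is why rerunning the same prompt may reproduce an error, produce a new error, or succeed.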

Pinker remarked that people have a rich imagination about superintelligence, but the existing artificial intelligence uses algorithms designed to solve specific types of problems in specific contexts. This means they may be stronger than humans in certain aspects but weaker in others, and this is likely to remain the case in the future. Additionally, humans have a stronger demand for the authenticity of intellectual products (such as literary works and news commentary), and the connection between the audience and real human authors grants these works acceptability and status.

“The fear of new technologies is always driven by the worst-case scenarios, without considering the countermeasures that might arise in the real world,” said Pinker. For large language models like ChatGPT, people may develop a stronger critical awareness, formulate relevant ethical and professional codes, and develop new technologies to identify automatically generated content. Artificial intelligence simulates human intelligence, but its operational methods, strengths, and weaknesses are not entirely the same as those of human intelligence; this comparison may deepen our understanding of the essence of human intelligence.

Engler stated that generative artificial intelligence like ChatGPT presents new challenges, and it is unclear what the best policies are to address them. If research institutions disclose more detailed information about their development processes and explain how they manage risks, it could contribute to policy discussions; strengthening regulation of developers of large AI models, such as requiring them to bear information-sharing responsibilities and establish risk management systems, could also help prevent and mitigate harm. Furthermore, the development of generative artificial intelligence itself creates opportunities for more effective interventions, although related research is still in its infancy. Engler noted that no single intervention is a panacea, but it is reasonable to demand that AI research and commercialization institutions bring more positive impacts to society.

Overall, Chen and Ahn stated that ChatGPT is a powerful tool that may transform how people handle information, communicate, work, and live. Providing contextualized information, understanding the intent behind users’ questions, and specifically meeting user needs are key advantages of ChatGPT compared to traditional search engines, as well as significant breakthroughs in artificial intelligence technology. OpenAI is continuing to improve ChatGPT and upgrade its underlying technology, while other AI technology research institutions are developing similar tools; meanwhile, people must pay attention to the social impacts of this new technology and guard against risks.

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, commented on the tool: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It is a mistake to rely on it for anything important right now. It is a preview of progress; we have much work to do on robustness and truthfulness.” This assessment may serve as a guide for how people should view and use ChatGPT at this stage.


Source: China Social Science News

