Nowadays, AI self-media accounts particularly love clickbait titles. Yesterday I saw someone in the “GPT Arrival Faction” group jokingly refer to “Machine Heart,” “New Intelligence Source,” and “Quantum Bit” as the “three major top conferences of AI in China.” The articles they publish often carry titles like “Physics No Longer Exists,” “Disrupting Physics/Chemistry,” or “Scientific Research No Longer Exists.”
I haven’t used AI much in the past six months. I really want to, but I just don’t know how it can help my current work, or, to put it more professionally, how it can “empower” my work, meaning “empowering writing” or “empowering the Abhidharma.” I told my cousin who works in design that I envy people in that line of work for being able to be disrupted by AI; I’m still waiting for it to disrupt my job.
Six months ago, I mainly used AI to translate English and Sanskrit while proofreading an English translation of the “Abhidharma.” After that, I stopped using it. Recently I heard that Claude3 is particularly powerful, mainly from the self-media account of one of the “three major top conferences of AI in China,” where more and more PhDs are reportedly discovering that their results had already been cracked by AI before they were published. I quickly tried Claude3, and here is a brief report on the level that AI, as represented by Claude3, has reached in Buddhist philosophy.
I first looked for a passage from the “Treatise on the Perception of Consciousness”: “At times, two arise, referred to as desire and superior understanding in the realm of what is desired… This totals twelve; at times, three arise… This totals thirteen.”
I didn’t quote it in full (to keep this article readable), but what I sent to Claude3 was complete. I asked it what “totals twelve” specifically means. The original text says that among the five types of distinct mental factors, sometimes two arise together; how many scenarios are there? Choosing two from five is a very basic combinatorial problem: ten scenarios. Choosing three from five likewise gives ten. In the treatise’s shorthand, “twelve” means “ten [cases of] two” and “thirteen” means “ten [cases of] three”; that is what “totals twelve” and “totals thirteen” refer to. Claude3 and the other AIs, however, all read “twelve” as the number 12 rather than as “ten kinds of two.”
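To make the arithmetic explicit, here is a minimal sketch of the combinatorics. The full list of five factors is an assumption: only “desire” and “superior understanding” appear in the quoted passage, and the other three names are filled in from the standard Abhidharma enumeration for illustration.

```python
from itertools import combinations
from math import comb

# The five "distinct" mental factors. Only the first two are named in the
# quoted passage; the remaining three follow the standard Abhidharma list
# and are assumptions added here for illustration.
factors = ["desire", "superior understanding", "mindfulness",
           "concentration", "wisdom"]

pairs = list(combinations(factors, 2))    # every way two factors arise together
triples = list(combinations(factors, 3))  # every way three factors arise together

print(len(pairs), comb(5, 2))    # 10 10 -> "ten kinds of two"
print(len(triples), comb(5, 3))  # 10 10 -> "ten kinds of three"
```

Spelled out in this form, “totals twelve” is unambiguous as “ten combinations of two,” which is exactly the translation the AI never makes on its own.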
I also asked it about the meanings of “cause and condition cessation of consciousness” and “latent tendencies increase with conditions” from the “Great Commentary on the Abhidharma,” and it answered incorrectly. Compared with the various AIs I asked six months ago, I found no significant improvement. So I basically don’t believe the stories of chemistry PhDs or quantum-physics PhDs discovering that Claude3, or any current AI, can handle the core of their work; and if it can, doesn’t that just mean their work is too superficial?
That Claude3 doesn’t understand concepts like “cause and condition cessation of consciousness” doesn’t surprise me. Understanding concepts of this kind requires two conditions: 1. systematic reading of the Abhidharma texts; 2. not being disturbed by garbage information. By garbage information I mean the explanations of “cause,” “condition,” “cessation,” and “consciousness” that are easy to find online; the more of those one reads, the further one strays from a correct understanding of “cause and condition cessation of consciousness.”
It’s just like how you can’t use the “twelve” from other contexts to understand the “twelve” in the “Treatise on the Perception of Consciousness.” If you translate “totals twelve” into a mathematical language everyone can understand, as in the sketch above, the AI understands it immediately. The problem is that it doesn’t know that, while reading this passage from the “Treatise on the Perception of Consciousness,” it should set aside the knowledge it learned from other data.
After finding that Claude3 and the other AIs could not understand or solve these problems, I wanted to test whether they have any insight. I asked the AI whether studying for a humanities PhD today, such as researching the Xixia script, is meaningful. They all gave very neutral answers rather than incisive ones, for example: “enhancing academic literacy and research capabilities,” “gaining personal achievement and social recognition,” “promoting the study of Xixia culture,” and so on. The AI seems completely unaware of the dilemmas facing humanities PhDs today: some research directions are so niche that there are no corresponding journals, not even reviewers, and it is very hard to find employment or application prospects. The greatest “application prospect” of some knowledge is to find someone willing to learn it and teach them what you know.
I asked the AI whether studying the history of Buddhism tends, in general, to weaken or strengthen Buddhist faith, and told it to answer for the typical case. It again gave a fence-sitting answer: it varies from person to person. There is nothing wrong with a fence-sitting answer in itself, but the reasoning the AI provided was clichéd and lacked insight. In reality, a few people do deepen their faith through understanding, as I did; but more people’s faith weakens the more they learn. The more they understand, the more they find that the facts are not what they had imagined. Some can accept, or even happily accept, that the facts are not as they imagined, but more people refuse to. Many believers have a strong bias, rejecting science and rationality.
Then I asked the AI: “Do you think there are more charlatans or more elites among researchers? Of the academic papers published in journals each year, what proportion are superficial and what proportion contain true insights? Give a rough estimate; don’t be vague and say it’s uncertain.”
AI defined “charlatan,” “elite,” “superficial papers,” and “papers with true insights,” summarizing a few key points for each, but it avoided my core question—are there more charlatans or elites? It wrote over a thousand words and concluded, “In summary, researchers are a diverse group, with both charlatans and elites. Among the academic papers published in journals each year, there are both superficial papers and those with true insights. We should view research work with an objective attitude, encouraging researchers to conduct rigorous, practical, and innovative research. I hope this information helps you.”
I asked again, “Among those who tell fortunes by the eight characters, are there more frauds or masters?”
AI answered, “This is a very difficult question to answer because there is no reliable data or research that can accurately tell us whether there are more frauds or masters among those who tell fortunes by the eight characters. … (omitting over a thousand words) In summary, whether there are more frauds or masters among those who tell fortunes by the eight characters is a complex issue that cannot be generalized. We should respect everyone’s attitude toward fortune-telling and suggest that people view fortune-telling with a rational and objective attitude.”
If AI continues like this, I don’t believe it can take away my job; it can’t even take away the jobs at the “three major top conferences of AI in China.”