From Knowledge Graphs to Cognitive Graphs: History, Development, and Prospects

Research on knowledge graphs has gradually come to emphasize scale over structure, a tendency closely tied to the prevalence of deep learning and connectionist ideas. Cognitive graphs, grounded in the dual-process theory of human cognition, dynamically construct knowledge graphs whose nodes carry contextual information and perform reasoning over them. This article reviews the historical development of knowledge graphs, explains the motivation behind cognitive graphs, and looks ahead to their future development.

Keywords: Cognitive Graphs, Knowledge Graphs, Reasoning, Neural Symbolic Computing

Knowledge graphs were introduced as a concept by Google in 2012; essentially, they are knowledge bases for the semantic web. A knowledge graph consists of nodes and edges, where nodes represent entities and edges represent relationships between entities. This is the most intuitive and easily understood framework for knowledge representation and reasoning, and it laid the foundation for modern question-answering systems. From the knowledge bases and reasoning engines of the 1980s to the semantic web and ontologies of the early 21st century, early versions of knowledge graphs focused either on knowledge representation or on knowledge reasoning, but they developed slowly because of their small scale and unclear application scenarios. In 2012, Google's release of a large-scale knowledge graph with 570 million entities completely changed this situation1; at the same time, advances in deep learning spurred a new wave of research on knowledge graphs, represented in particular by knowledge graph embeddings such as TransE[1] and by the use of large knowledge graphs to enhance other applications such as recommender systems and sentiment analysis.
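As a toy illustration (hypothetical entities, not any production system), a knowledge graph can be stored as a list of (head entity, relation, tail entity) triples and queried by simple pattern matching:

```python
# Toy knowledge graph as (head, relation, tail) triples; entities are illustrative.
triples = [
    ("Albert Einstein", "born_in", "Ulm"),
    ("Ulm", "located_in", "Germany"),
    ("Albert Einstein", "field", "Physics"),
]

def query(head=None, relation=None, tail=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (h, r, t) for (h, r, t) in triples
        if (head is None or h == head)
        and (relation is None or r == relation)
        and (tail is None or t == tail)
    ]

# "Where was Albert Einstein born?"
print(query(head="Albert Einstein", relation="born_in"))
# → [('Albert Einstein', 'born_in', 'Ulm')]
```

Real systems answer such patterns over billions of triples with dedicated graph stores and query languages, but the underlying data model is exactly this.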

However, while knowledge graphs have achieved success in many applications, their methodology has always been overshadowed by several “clouds of uncertainty”, such as ambiguity issues, linking difficulties, redundancy in relationships, and combinatorial explosion. Although some remedial work addressing these issues has achieved good results, truly solving these problems may require a reconsideration of the framework and methodology of knowledge representation in the era of deep learning, thus cognitive graphs[2] have emerged.

Cognitive graphs can be interpreted as “knowledge graphs constructed dynamically, based on raw text data, for specific problem contexts, using powerful machine learning models, with nodes containing contextual semantic information”. The application framework of cognitive graphs follows the dual process theory in cognitive psychology[3], where System 1 is responsible for intuitive judgments based on experience, extracting important information and dynamically constructing cognitive graphs; System 2 performs relational reasoning on the graph. Since cognitive graphs retain the implicit representation of semantic information on entity nodes, models such as graph neural networks can also play a significant role beyond symbolic logic.

Knowledge Graphs

Early Knowledge Graphs and Logical Reasoning

The theory of knowledge graphs originated in the AI boom of the second half of the 20th century, when multiple research groups independently proposed similar theories. It evolved from the semantic networks[4] proposed by a number of renowned cognitive psychologists, the most famous descendant being the conceptual graphs proposed by Sowa in 1984[5]. In the spirit of symbolism, many early knowledge graphs limited relationships to a few special basic relations, such as "has property", "causes", and "belongs to", and defined a set of rules for reasoning on the graph, hoping to achieve intelligence through logical reasoning.

Figure 1 shows an example from early knowledge graphs[6], where "ALI" stands for "alikeness" and "CAU" for "cause"; □ represents nodes, whose content is labeled by "EQU" (equal) or described by "ALI".

However, the ideas of early knowledge graphs encountered many practical difficulties. For example, imperfect text data led to difficulties in structural parsing, the process of reducing to basic relationships required significant manual involvement, subtle differences in semantics were lost, and perfect reasoning rules could not be exhaustively enumerated. In fact, these problems did not stem from knowledge graphs themselves, but from the characteristics of symbolic thought. In subsequent practice, the balance of victory gradually tilted towards connectionist methods.

Semantic Web and Ontology

The Semantic Web[7] is a vision proposed in the early 21st century by Tim Berners-Lee, the inventor of the World Wide Web and a recipient of the ACM Turing Award, at a time when natural language understanding was far less advanced than it is today and fine-grained automatic querying of the web through natural language was a luxury. Even today, advanced voice assistants (e.g., Siri) can handle only a limited set of operations, although the rapid progress of natural language understanding has brought new hope. Berners-Lee took a different approach: if web pages recorded their own information as triple tags that machines could readily understand, semantic search would become easy to achieve.

To achieve this goal, resource description framework (RDF) and resource description framework schema/ontology web language (RDFS/OWL) were designed, but due to a lack of enthusiasm among internet creators, the Semantic Web has yet to be well realized, and research on related topics has gradually been replaced by knowledge graphs. However, the structured information of triples and the technologies accumulated in ontology research have become valuable knowledge assets.

Large-Scale Knowledge Graphs

The release of Google’s knowledge graph in 2012 brought this field back into the spotlight. However, constructing a large and high-quality knowledge graph is not easy. Freebase[8] is the predecessor of Google’s knowledge graph, integrating a large number of online resources, including many private wikis. Another commonly used knowledge graph is DBpedia[9], which extracts structured knowledge from Wikipedia to construct an ontology. Through structuring, users can query using SPARQL language. YAGO[10] is also an open-source knowledge graph applied in IBM Watson’s question-answering system.

NELL[11] is a project led by Professor Tom Mitchell at Carnegie Mellon University focusing on automatic knowledge learning. The NELL project initiated a wave of machine learning for knowledge graph construction, aiming to continuously acquire resources from the web for fact discovery, rule summarization, etc., involving key technologies such as named entity recognition, disambiguation, and rule induction. Figure 2 shows the automatic continuous construction framework of the NELL system.

ArnetMiner[12] is a knowledge graph focused on the scientific field constructed by Tsinghua University’s Knowledge Engineering Laboratory, achieving key technologies such as high-precision scholar profiling, disambiguation, intelligent recommendation, and trend analysis. This work received the Test-of-Time Award from ACM SIGKDD.

Knowledge Graphs in the Era of Deep Learning

Knowledge graphs in the deep learning era contain huge numbers of entities and relationships, making it difficult to define logical rules across so many different relations, so "reasoning" on knowledge graphs shifted to a paradigm of black-box model prediction. Bordes et al.'s structured embeddings of knowledge bases[13] and Socher et al.'s Neural Tensor Network (NTN)[14] were the first to introduce neural networks into the study of knowledge graphs. NTN in particular represents each entity node by the average of the word embeddings of its name and trains a neural network to judge whether a triple (head entity, relation, tail entity) holds, achieving good results on knowledge graph completion (reasoning) tasks. Figure 3 shows the NTN knowledge base completion network. However, simply using word vectors to represent entities ignores their unique symbolic character: for example, the word vectors of the American internet celebrity "James Charles" and the famous 20th-century fashion designer "Charles James" average to exactly the same result, yet their attributes in the knowledge graph differ significantly.
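The NTN scoring function, g(h, r, t) = uᵣᵀ tanh(hᵀWᵣt + Vᵣ[h; t] + bᵣ), can be sketched as follows (a minimal, untrained version with random parameters; the dimensions are illustrative assumptions, not the paper's settings):

```python
import numpy as np

# Minimal sketch of the NTN plausibility score with random (untrained) parameters.
rng = np.random.default_rng(0)
d, k = 4, 3  # embedding dimension and number of tensor slices (illustrative)

def ntn_score(h, t, W, V, b, u):
    bilinear = np.array([h @ W[i] @ t for i in range(k)])  # h^T W_r^[i] t per slice
    linear = V @ np.concatenate([h, t]) + b                # standard feed-forward term
    return u @ np.tanh(bilinear + linear)                  # scalar plausibility score

h, t = rng.normal(size=d), rng.normal(size=d)  # head/tail entity vectors
W = rng.normal(size=(k, d, d))                 # relation-specific tensor slices
V = rng.normal(size=(k, 2 * d))
b = rng.normal(size=k)
u = rng.normal(size=k)
print(ntn_score(h, t, W, V, b, u))  # higher score = triple judged more plausible
```

Training fits one parameter set (W, V, b, u) per relation so that true triples score higher than corrupted ones.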

Therefore, researchers have shifted their focus to training embeddings of large knowledge graphs themselves, with the most elegant and effective pioneering work being Bordes et al.[1]’s TransE. The goal of this algorithm is to learn a d-dimensional vector representation for each relationship or entity in the knowledge graph. For any triple fact (h, r, t) in the knowledge graph, the algorithm requires their vector representations to satisfy h+r ≈ t, a concept derived from similar properties of word vectors. To train such embeddings, the algorithm defines the objective function:

$$\mathcal{L} = \sum_{(h,r,t)\in S}\ \sum_{(h',r,t')\in S'_{(h,r,t)}} \big[\gamma + d(h+r,\,t) - d(h'+r,\,t')\big]_+$$

where d(h + r, t) is the distance between the vectors h + r and t, typically the Euclidean distance; (h′, r, t′) are corrupted triples obtained via negative sampling (replacing the head or tail entity of a true triple); γ is the margin; and the margin-based loss [·]₊ is ultimately used for contrastive learning.
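A single evaluation of this margin loss can be sketched in a few lines (toy random embeddings and hypothetical dimensions, not the authors' implementation): for a true triple and a corrupted one, the loss is zero only when d(h + r, t) is smaller than d(h′ + r, t′) by at least the margin γ.

```python
import numpy as np

# Toy TransE margin-loss evaluation; entity/relation counts and dim are illustrative.
rng = np.random.default_rng(0)
n_entities, n_relations, dim, margin = 5, 2, 8, 1.0

E = rng.normal(scale=0.1, size=(n_entities, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_relations, dim))  # relation embeddings

def distance(h, r, t):
    return np.linalg.norm(E[h] + R[r] - E[t])  # Euclidean d(h + r, t)

# Positive triple (h, r, t) and a negative sample with the tail corrupted.
h, r, t, t_neg = 0, 1, 2, 3
loss = max(0.0, margin + distance(h, r, t) - distance(h, r, t_neg))
print(loss)  # gradient descent on this drives h + r toward t and away from t'
```

In training, this quantity is summed over all true triples and their negative samples, and the embeddings are updated by stochastic gradient descent.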

A series of subsequent works, in both naming and framework, continued to improve on TransE. For example, to handle one-to-many and many-to-many relations, TransH[15] models each relation as a hyperplane rather than a vector, requiring the projections of the head and tail embeddings onto that hyperplane to satisfy $h_\perp + r \approx t_\perp$, where $h_\perp = h - w_r^\top h\, w_r$ and $w_r$ is the hyperplane's unit normal. Figure 4 illustrates the vector space of the TransH model. The numerous improvements in this direction, such as TransR[16] and TransG[17], are well summarized in the survey by Wang et al.[18].
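The hyperplane projection at the heart of TransH can be sketched as follows (random illustrative vectors; w plays the role of the relation's unit normal w_r):

```python
import numpy as np

# Sketch of the TransH projection: entities are projected onto the
# relation-specific hyperplane before the translation constraint is applied.
def project(v, w):
    return v - (w @ v) * w  # component of v lying in the hyperplane with normal w

rng = np.random.default_rng(1)
dim = 6  # illustrative embedding dimension
h, t, r = rng.normal(size=dim), rng.normal(size=dim), rng.normal(size=dim)
w = rng.normal(size=dim)
w /= np.linalg.norm(w)  # TransH constrains the normal vector to unit length

h_p, t_p = project(h, w), project(t, w)
score = np.linalg.norm(h_p + r - t_p)  # TransH asks h_perp + r ≈ t_perp
print(score)
```

Because many distinct entities can share a projection onto a given hyperplane, one relation can map a single head to many tails without forcing their full embeddings to collapse together.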

Among the applications of knowledge graphs, the most important is knowledge base question answering (KBQA), shown in Figure 5. Since knowledge graphs are inherently machine-friendly structures, once the corresponding SPARQL statement is available it is easy to query the final answer from the graph. The difficulty therefore lies mainly in parsing a natural language question into a valid query over the relations and entities present in the knowledge graph. To address this, Dai et al.[19] proposed the CFO model and Huang et al.[20] proposed KEQA; the latter predicts the embedding of the answer entity and searches for nearby results among the knowledge graph embeddings, combining predictive NLP models with knowledge graph embeddings.

Another important direction is complex-question reasoning. Knowledge graphs are often incomplete, and a complex question may have no obvious answer in the graph, or even no corresponding relation; through multi-step reasoning, however, it is still possible to derive an answer from the graph. Notable work in this direction includes Xiong et al.'s DeepPath[21] (see Figure 6), which explores reasoning paths in knowledge graphs using reinforcement learning. This idea inspired subsequent work such as MINERVA, which uses policy gradients[22], and the reward shaping of Lin et al.[23].

Defects of Knowledge Graphs

The defects of knowledge graphs essentially stem from the limitations of using “binary first-order predicate logic” as the knowledge representation itself. For a long time, knowledge representation has been a topic of relentless pursuit by researchers. While relying solely on (head entity, relationship, tail entity) propositions can represent most simple events or entity attributes, it struggles with complex knowledge. In today’s increasing demand for fine-grained understanding of knowledge, this flaw has become the Achilles’ heel of knowledge graphs.

For instance, the fact that “Li Zhengdao and Yang Zhenning jointly proposed the theory of parity nonconservation” would typically be recorded in a knowledge graph as (Yang Zhenning and Li Zhengdao, proposed, parity nonconservation theory), where the head entity cannot be linked to other information about the two scientists; if the head entity is split, it would fail to reflect “jointly proposed”, which is not accurate. Ultimately, “…and… jointly proposed…” is a ternary relationship that cannot be directly recorded in the knowledge graph.
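The dilemma can be made concrete in code (entity names are from the example above; the event-node encoding is a common workaround in knowledge base practice, not a feature of standard binary-relation knowledge graphs):

```python
# Two lossy binary-triple encodings of the ternary fact "A and B jointly proposed C".
merged = ("Yang Zhenning and Li Zhengdao", "proposed", "parity nonconservation theory")
# The merged head entity cannot be linked to either scientist's other facts.
split = [
    ("Yang Zhenning", "proposed", "parity nonconservation theory"),
    ("Li Zhengdao", "proposed", "parity nonconservation theory"),  # "jointly" is lost
]
# Reifying the fact as an intermediate event node restores all the links,
# at the cost of extra nodes and edges (the redundancy the text warns about):
event = [
    ("event_1", "type", "joint_proposal"),
    ("event_1", "proposer", "Yang Zhenning"),
    ("event_1", "proposer", "Li Zhengdao"),
    ("event_1", "result", "parity nonconservation theory"),
]
proposers = sorted(t for (_, r, t) in event if r == "proposer")
print(proposers)  # → ['Li Zhengdao', 'Yang Zhenning']
```

Even this workaround only pushes the problem back: every n-ary fact spawns a synthetic node, and higher-order statements still have no natural encoding.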

If the previous example could still be handled by introducing hyperedges into the knowledge graph, the following second-order predicate logic example completely exceeds the existing framework. "A cloned sheep has the same attributes as the original animal" cannot be recorded in a knowledge graph, because the various attributes are expressed by different relations; in broader knowledge base theory this is usually classified as a "rule", and inferring each relational attribute of the cloned sheep requires extensive enumeration. Understanding and formalizing such rules usually involves heavy human effort, and higher-order logic can involve a massive number of entities, making the graph extremely redundant.

Finally, we focus on the difficulties in entity linking caused by information reduction during the construction process of knowledge graphs. The ultimate source of knowledge is the world we inhabit, with information being continuously reduced from raw speech, images, and text to knowledge graphs, retaining only the core content. However, neglecting specific experiences of individuals in the original text can lead to difficulties in disambiguating entities with the same name.

The issues above indicate that reforming knowledge graphs is imperative. With the major advances in natural language processing, models such as BERT[24] have greatly enhanced text understanding and retrieval, allowing us to understand and reason directly over raw text. For example, Chen et al.'s DrQA[25] uses neural networks to extract answers directly from text, initiating a new wave of open-domain question answering. On the other hand, we must retain the interpretability and the precise, stable reasoning that the graph structure of knowledge graphs provides. Hence, cognitive graphs have emerged.

Cognitive Graphs

Cognitive graphs mainly feature three innovations corresponding to three aspects of human cognitive intelligence:

1. (Long-term memory) Directly storing indexed text data, using information retrieval algorithms to access relevant knowledge instead of explicit edges in knowledge graphs.

2. (System 1 reasoning) The graph is dynamically and multi-step constructed based on queries, with entity nodes generated through relevant entity recognition models.

3. (System 2 reasoning) The nodes in the graph simultaneously possess implicit representations with contextual information, allowing for explainable relational reasoning through models like graph neural networks.

Essentially, the improvement idea of cognitive graphs is to reduce information loss during graph construction, shifting the information processing burden to retrieval and natural language understanding algorithms while retaining the graph structure for explainable relational reasoning.

In fact, cognitive graphs are inspired by human cognitive processes, where “quickly directing attention to relevant entities” and “analyzing sentence semantics for inference” are two different thinking processes. In cognitive science, the well-known “dual process theory” posits that human cognition is divided into two systems: System 1 is an intuitive, non-conscious thinking system reliant on experience and associations; while System 2 is the unique logical reasoning ability of humans, utilizing knowledge in working memory[26] for slow yet reliable logical reasoning, being explicit and requiring conscious control, reflecting higher human intelligence.

The Challenge of Multi-Hop Reading Comprehension

Cognitive graphs were first proposed by Ding et al.[2] and applied to multi-hop open-domain reading comprehension question answering (see Figure 7). In traditional methods, open-domain question answering often relies on large-scale knowledge graphs, while reading comprehension question answering typically focuses on single passages, where natural language processing (NLP) models (like BERT) can be directly applied. The groundbreaking DrQA system applied a simple method to use reading comprehension in open-domain question answering, namely first retrieving a small number of relevant paragraphs from the corpus (Wikipedia) based on the question, and then processing these paragraphs with NLP.

However, the article on cognitive graph question answering (CogQA) argues that such methods face the “short-sighted retrieval” problem in multi-hop question answering, where the relevance of later text to the question is low, making direct retrieval difficult and leading to poor performance.

Moreover, since BERT appeared, models on SQuAD, the benchmark dataset for single-passage reading comprehension, have quickly surpassed human-level performance; at the same time, this has prompted reflection on whether these models truly achieve reading "comprehension". Jia and Liang[27] present an interesting example:

The original text states: “In January 1880, two of Tesla’s uncles put together enough money to help him leave Gospić for Prague where he was to study.” The question is: “What city did Tesla move to in 1880?” Most models can easily answer “Prague” correctly. However, if we add a sentence after the original text: “Tadakatsu moved to the city of Chicago in 1881”, these models will confidently answer “Chicago”—this is merely due to the sentence’s form resembling the question, even though the key information does not match.

This suggests that deep learning-based NLP models primarily resemble System 1 in cognition; if we are to further achieve robust reasoning, we must consider System 2.

Additionally, lack of interpretability has been a significant criticism of previous multi-layer black-box models. These models often only require inputting a question and text, with outputs indicating the answer’s position in the text; in multi-hop reading comprehension, each hop has a reason, and if a reasonable answer explanation cannot be provided, it cannot be proven that the text has been truly “understood”.

Cognitive Graph Question Answering

Cognitive graph question answering (see Figure 8) proposes a novel iterative framework: the algorithm uses two systems to maintain a cognitive graph, where System 1 extracts entity names related to the question from the text and expands nodes and aggregates semantic vectors, while System 2 utilizes graph neural networks for reasoning calculations on the cognitive graph.

As mentioned above, the human System 1 is "non-conscious", and System 1 in CogQA is likewise a popular black-box NLP model such as BERT. In the paper's implementation, the input to System 1 has three parts: the question itself, the "clues" found in earlier paragraphs, and the Wikipedia document about a certain entity x (e.g., x = the movie "Old School").

The goal of System 1 is to extract the “next-hop entity names” and “answer candidates” from the document. For example, in the case of Figure 7, it extracts the movies “Old School” and “Gone in 60 Seconds” as the next-hop entity names from the paragraph about “Quality Café”, and extracts its director “Todd Phillips” as one of the answer candidates from the paragraph about “Old School”. These extracted entities and answer candidates will be added as nodes to the cognitive graph. Additionally, System 1 will calculate the semantic vector of the current entity x, which will serve as the initial value for relational reasoning in System 2.

Subsequently, each extracted “next-hop entity name” or “answer candidate” will establish a new node in the cognitive graph and proceed to the next iteration.

Meanwhile, System 2 performs reasoning calculations on the cognitive graph, utilizing graph neural networks (GNN) for implicit reasoning calculations—each iteration passes transformed information from previous nodes to the next-hop nodes and updates the current implicit representation. Ultimately, the implicit representations of all “answer candidate” nodes will be assessed through a fully connected network with a softmax function to determine the final answer.

During the expansion of the cognitive graph, if an already-visited node acquires new parent nodes (forming a cyclic or convergent structure), it has gained new clue information and must be expanded again. The overall algorithm is implemented with a frontier queue.
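The iterative construction described above can be sketched as a frontier-queue loop. This is a highly simplified, hypothetical stand-in (the real CogQA uses BERT as System 1 and a graph neural network as System 2; `retrieve`, `system1_extract`, and `system2_update` here are placeholder callables):

```python
from collections import deque

def cognitive_qa(question, retrieve, system1_extract, system2_update, start_entities):
    """Sketch of the CogQA control loop: System 1 proposes next-hop entities,
    System 2 updates hidden representations over the growing graph."""
    graph = {e: {"parents": set(), "hidden": None} for e in start_entities}
    frontier = deque(start_entities)
    while frontier:
        x = frontier.popleft()
        paragraph = retrieve(x)                       # indexed raw text, not explicit edges
        next_hops, hidden = system1_extract(question, paragraph)
        graph[x]["hidden"] = hidden                   # contextual semantic vector of x
        for y in next_hops:
            if y not in graph:
                graph[y] = {"parents": set(), "hidden": None}
                frontier.append(y)
            elif x not in graph[y]["parents"]:
                frontier.append(y)                    # new parent => new clues => re-expand
            graph[y]["parents"].add(x)
        system2_update(graph)                         # GNN-style reasoning pass
    return graph

# Toy demo: a two-hop chain A -> B with trivial stand-in components.
links = {"A": ["B"], "B": []}
g = cognitive_qa(
    question="toy",
    retrieve=lambda x: x,                             # "paragraph" is the entity name here
    system1_extract=lambda q, p: (links[p], 0.0),     # "extraction" via a lookup table
    system2_update=lambda graph: None,                # no-op reasoning step
    start_entities=["A"],
)
print(sorted(g))  # → ['A', 'B']
```

In the full system, the final answer is then read off by scoring the hidden representations of the answer-candidate nodes.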

CogQA not only held the top position on the HotpotQA dataset[28] leaderboard for three months; the cognitive graphs in the case studies of Figure 9 also showcase its superior interpretability:

In the tree diagram of Figure 9(1), the key to answering the question is determining whether Ken Pruitt's organization is the "Senate" or the "House of Representatives"; the model answers by assessing the semantic similarity between "upper house" (a term for the Senate in the U.S.) and "Senate". Figure 9(2) is a directed acyclic graph, in which multiple reasoning paths reinforce the judgment. Figure 9(3) is an example where the model gave an "incorrect" answer: the dataset's reference answer is "Ten Walls", while CogQA answered "Marijus Adomaitis". Examining the cognitive graph, however, reveals that Ten Walls is merely the stage name of Marijus Adomaitis, so CogQA's answer is in fact the more precise one, an interpretability benefit that traditional black-box models lack.

Prospects for the Development of Cognitive Graphs

With natural language understanding tools developing rapidly, cognitive graphs are still in their infancy, and many valuable directions await exploration:

1. How can reasoning in System 2 be realized? Current methods (like graph neural networks) utilize relational edges as inductive biases but still cannot perform controlled, explainable, and robust symbolic computations. How can System 1 provide feasible preliminary work for existing neural-symbolic computation methods?

2. How should text libraries be preprocessed or pre-trained to facilitate the retrieval of relevant knowledge?

3. Are there alternative approaches? The cognitive graphs introduced in this article are based on the dual-process theory from cognitive science; are there other supporting theories? Or should a new learning architecture combining symbolic reasoning and deep learning be constructed directly?

4. How can cognitive graphs be integrated with human memory mechanisms? Human memory mechanisms include long-term and short-term memory, but their working modes and mechanisms are unclear. Long-term memory may store a memory model, which is no longer a network of concepts but a network of computational models.

5. How cognitive graphs can be combined with external feedback is a brand new problem. Of course, feedback reinforcement learning could be considered to achieve this, but specific methods and implementation patterns need to be explored in depth.

Conclusion

In recent years, with the rise of deep learning, related technologies for knowledge graphs have also flourished, while some shortcomings have been exposed. This article reviews the historical development of knowledge graphs, summarizes their current state, and illustrates the defects of existing knowledge graphs. Meanwhile, we delve into the promising cognitive graphs, discussing their applications in question-answering systems. Finally, we look forward to potential future directions for cognitive graphs, exploring ways to facilitate the leap from perception to cognition in artificial intelligence through cognitive graphs.

Author Information

Ding Ming

CCF student member. PhD student at Tsinghua University. Main research directions include natural language processing, graph learning, and cognitive intelligence. [email protected]

Tang Jie

CCF distinguished member, former dynamic column editor of CCCF, chair of the CCF academic committee. Professor in the Department of Computer Science at Tsinghua University. Main research directions include artificial intelligence, knowledge graphs, data mining, social networks, and machine learning.

[email protected]

Footnotes

1 See https://googleblog.blogspot.com/2012/05/introducing-knowledge-graph-things-not.html.

References

[1] BORDES A, USUNIER N, GARCIA-DURAN A, et al. Translating embeddings for modeling multi-relational data [C]// Advances in Neural Information Processing Systems. 2013: 2787-2795.
[2] DING M, ZHOU C, CHEN Q, et al. Cognitive graph for multi-hop reading comprehension at scale [C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019: 2694-2703.
[3] EVANS J S B. Heuristic and analytic processes in reasoning [J]. British Journal of Psychology, 1984, 75(4): 451-468.
[4] SOWA J F. Semantic networks [J]. Encyclopedia of Cognitive Science, 2006.
[5] SOWA J F. Conceptual Structures: Information Processing in Mind and Machine [M]. Addison-Wesley, 1984.
[6] ZHANG L. Knowledge Graph Theory and Structural Parsing [M]. Twente University Press, 2002.
[7] BERNERS-LEE T, HENDLER J, LASSILA O. The semantic web [J]. Scientific American, 2001, 284(5): 34-43.
[8] BOLLACKER K, EVANS C, PARITOSH P, et al. Freebase: a collaboratively created graph database for structuring human knowledge [C]// Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. 2008: 1247-1250.
[9] LEHMANN J, ISELE R, JAKOB M, et al. DBpedia: a large-scale, multilingual knowledge base extracted from Wikipedia [J]. Semantic Web, 2015, 6(2): 167-195.
[10] SUCHANEK F M, KASNECI G, WEIKUM G. YAGO: a core of semantic knowledge [C]//Proceedings of the 16th International Conference on World Wide Web. 2007: 697-706.
[11] CARLSON A, BETTERIDGE J, KISIEL B, et al. Toward an architecture for never-ending language learning [C]//Twenty-Fourth AAAI conference on artificial intelligence. 2010.
[12] TANG J, ZHANG J, YAO L, et al. ArnetMiner: extraction and mining of academic social networks [C]//Proceedings of the Fourteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'08). 2008: 990-998.
[13] BORDES A, WESTON J, COLLOBERT R, et al. Learning structured embeddings of knowledge bases [C]//Twenty-Fifth AAAI Conference on Artificial Intelligence. 2011.
[14] SOCHER R, CHEN D, MANNING C D, et al. Reasoning with neural tensor networks for knowledge base completion [C]//Advances in neural information processing systems. 2013: 926-934.
[15] WANG Z, ZHANG J, FENG J, et al. Knowledge graph embedding by translating on hyperplanes[C]// AAAI’14: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence. AAAI Press, 2014: 1112-1119.
[16] LIN Y, LIU Z, SUN M, et al. Learning entity and relation embeddings for knowledge graph completion [C]//Twenty-ninth AAAI conference on artificial intelligence. 2015.
[17] XIAO H, HUANG M, ZHU X. TransG: a generative model for knowledge graph embedding [C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2016: 2316-2325.
[18] WANG Q, MAO Z, WANG B, et al. Knowledge graph embedding: A survey of approaches and applications [J]. IEEE Transactions on Knowledge and Data Engineering, 2017, 29(12): 2724-2743.
[19] DAI Z, LI L, XU W. CFO: Conditional focused neural question answering with large-scale knowledge bases [C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2016: 800-810.
[20] HUANG X, ZHANG J, LI D, et al. Knowledge graph embedding based question answering [C]//Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. 2019: 105-113.
[21] XIONG W, HOANG T, WANG W Y. DeepPath: a reinforcement learning method for knowledge graph reasoning [C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017: 564-573.
[22] DAS R, DHULIAWALA S, ZAHEER M, et al. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning [C]//International Conference on Learning Representations. 2018.
[23] LIN X V, SOCHER R, XIONG C. Multi-hop knowledge graph reasoning with reward shaping [C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 3243-3253.
[24] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding [C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 2019: 4171-4186.
[25] CHEN D, FISCH A, WESTON J, et al. Reading wikipedia to answer open-domain questions [C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2017: 1870-1879.
[26] BADDELEY A. Working memory [J]. Science, 1992, 255(5044): 556- 559.
[27] JIA R, LIANG P. Adversarial examples for evaluating reading comprehension systems [C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017: 2021-2031.
[28] YANG Z, QI P, ZHANG S, et al. HotpotQA: a dataset for diverse, explainable multi-hop question answering [C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 2369-2380.
