Paper Title:
Knowledge Graph Enhanced Large Language Model Editing, ACL Findings, 2024
Paper Link:
https://arxiv.org/abs/2402.13593
Background and Motivation
Large language models excel at a wide range of tasks thanks to their powerful generative capabilities and the rich knowledge they encode. However, this knowledge can be outdated or factually wrong, which limits the models' reliability in practical applications. In critical fields like medical diagnosis or legal consulting, outdated or incorrect knowledge can lead to severe consequences, so accurately and effectively updating the knowledge within large models has become an important problem. Traditional fine-tuning can update a model but faces issues such as parameter corruption and catastrophic forgetting. To tackle these problems, the knowledge editing task has emerged, aiming to precisely update specific knowledge within a large model without negatively impacting unrelated knowledge or overall model performance.
Despite progress, existing knowledge editing research still struggles to capture the knowledge changes induced by editing the target knowledge. Specifically, current work focuses on editing individual facts, i.e., modifying a triple (s, r, o) to (s, r, o*). However, changing a single fact often triggers changes in related facts. For example, modifying "LeBron James plays for the Miami Heat" to "LeBron James plays for the Los Angeles Lakers" also requires updating "LeBron James works in Miami" to "LeBron James works in Los Angeles". Existing editing methods do not account for the related knowledge changes caused by modifying the target knowledge, which limits the generalization ability of the edited model. Moreover, the black-box nature of large models makes it extremely hard to uncover the relationships among their internal knowledge, further complicating the detection of these related changes during editing.
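The ripple effect described here can be illustrated with a toy knowledge graph. The triples, the `apply_edit` helper, and the hand-written dependency rule below are all illustrative assumptions, not part of GLAME itself:

```python
# Toy illustration of how one edit (s, r, o) -> (s, r, o*) induces
# changes in related knowledge. The triples and the "plays_for ->
# works_in" propagation rule are invented for illustration only.

KG = {
    # (subject, relation): object
    ("LeBron James", "plays_for"): "Miami Heat",
    ("Miami Heat", "located_in"): "Miami",
    ("Los Angeles Lakers", "located_in"): "Los Angeles",
    ("LeBron James", "works_in"): "Miami",
}

def apply_edit(kg, subject, relation, new_object):
    """Apply a single edit and propagate one hop of related knowledge."""
    kg = dict(kg)
    kg[(subject, relation)] = new_object
    # Ripple: if the subject now plays for a team, their workplace
    # follows the team's location (a hand-written rule for this toy KG).
    if relation == "plays_for":
        city = kg.get((new_object, "located_in"))
        if city is not None:
            kg[(subject, "works_in")] = city
    return kg

edited = apply_edit(KG, "LeBron James", "plays_for", "Los Angeles Lakers")
print(edited[("LeBron James", "plays_for")])   # Los Angeles Lakers
print(edited[("LeBron James", "works_in")])    # Los Angeles
```

A plain edit would only change the `plays_for` entry; the point of the sketch is that the `works_in` fact must change with it.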
Figure 1 An example of knowledge editing in large models. A single edit may trigger changes in related knowledge.
To address this problem, we propose a knowledge graph enhanced model editing method, GLAME (Graphs for LArge language Model Editing). GLAME introduces an external knowledge graph to capture the related knowledge changes brought about by the target update, alleviating the difficulty of explicitly modeling knowledge dependencies inside a black-box model. In addition, we design an editing module for graph-structured data that writes the changed related knowledge into specific parameters of the large model. This achieves collaborative editing of the target knowledge and its related knowledge, overcoming the limitation of existing methods, which edit isolated facts and struggle to generalize.
Method
Figure 2 Schematic diagram of the GLAME model architecture.
- Knowledge Graph Enhancement Module (KGA)
a. Target Knowledge Matching and Sampling: In the knowledge editing task, each editing sample contains a subject s, a relation r, the original object o, and a new object o*. To capture the affected knowledge, we match the entity in an external knowledge graph (e.g., Wikidata) that corresponds to o*. Then, centered on this entity, we sample its neighboring entities and their relations to obtain a subgraph containing the newly introduced related knowledge.
b. Knowledge Representation Extraction: We extract the hidden vectors corresponding to the subgraph's entities and relations from the shallow layers of the large model being edited, and use them as the initial representations of the subgraph's nodes and edges. This allows dependencies between knowledge representations to be modeled explicitly.
- Graph Data Editing Module (GKE)
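The sampling step of the KGA module (step a above) can be sketched on a toy graph. The `sample_subgraph` function, the triples, and the hop limit are our own illustrative assumptions; the paper's external graph is Wikidata:

```python
from collections import deque

# Hypothetical sketch of KGA sampling: starting from the entity matched
# to the new object o*, collect a bounded-hop subgraph of neighboring
# entities and relations. The toy triples stand in for an external KG
# such as Wikidata; names here are our own.

TRIPLES = [
    ("Los Angeles Lakers", "located_in", "Los Angeles"),
    ("Los Angeles Lakers", "league", "NBA"),
    ("Los Angeles", "country", "United States"),
]

def sample_subgraph(triples, center, max_hops=2):
    """Breadth-first sample of triples within max_hops of `center`."""
    adj = {}
    for s, r, o in triples:
        adj.setdefault(s, []).append((r, o))
    subgraph, seen = [], {center}
    frontier = deque([(center, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for r, o in adj.get(node, []):
            subgraph.append((node, r, o))
            if o not in seen:
                seen.add(o)
                frontier.append((o, depth + 1))
    return subgraph

sub = sample_subgraph(TRIPLES, "Los Angeles Lakers", max_hops=2)
print(sub)
```

In GLAME, each node and edge of the sampled subgraph would then be initialized with shallow-layer hidden vectors from the model being edited (step b), which this toy sketch does not model.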
Experimental Results
The experimental results of each model on the CounterFact, CounterFactPlus, and MQuAKE datasets are shown in Tables 1 and 2:
Table 1 Performance metrics of each model on the CounterFact and CounterFactPlus datasets
Table 2 Performance metrics of each model on the MQuAKE dataset
Contributions of This Paper
- Explored the importance of capturing the changes in related knowledge triggered by a single edit. By collaboratively editing the target knowledge and its related knowledge, the generalization ability of the edited model is enhanced.
- Introduced external knowledge graphs into the knowledge editing task for large language models, using their structured nature to explicitly relate changes in the target knowledge to changes in its related knowledge. Proposed a new knowledge editing method, GLAME, which achieves collaborative editing of target and related knowledge through two key modules.
- Demonstrated through experiments on multiple standard datasets that GLAME improves both editing effectiveness and the generalization ability of the edited model.