Research Progress on Explainable Artificial Intelligence (XAI)

Originally published by Algorithm Advancement. This article surveys how data collection, processing, and analysis contribute to Explainable Artificial Intelligence (XAI) from a data-centric perspective. Existing work is categorized into three types: explaining deep models, revealing insights into training data, and providing insights from domain knowledge. It distills data mining operations and DNN behavior descriptors for … Read more

Exploring RNN Interpretability Methods Proposed by Zhou Zhihua et al.

Selected from arXiv. Authors: Bo-Jian Hou, Zhi-Hua Zhou. Contributors: Si Yuan, Xiao Kun. Reproduced with authorization from Almost Human (almosthuman2014); further reproduction is prohibited. Apart from numerical calculations, do you really know what neural networks are doing internally? We have always understood deep models in terms of their computational flow, but we are still … Read more

Can Attention Mechanism Be Interpreted?

Author: Gu Yuxuan, Harbin Institute of Technology (SCIR). References: NAACL 2019 “Attention is Not Explanation”; ACL 2019 “Is Attention Interpretable?”; EMNLP 2019 “Attention is Not Not Explanation”. This article explores the interpretability of the attention mechanism. Introduction: Since Bahdanau introduced … Read more

Is the Attention Mechanism Interpretable?

Author: Gu Yuxuan, Harbin Institute of Technology (SCIR). References: NAACL 2019 “Attention is Not Explanation”; ACL 2019 “Is Attention Interpretable?”; EMNLP 2019 “Attention is Not Not Explanation”. This article explores the interpretability of the attention mechanism. Introduction: Since Bahdanau introduced Attention as soft alignment in neural machine translation in 2014, a large amount of … Read more

Can Attention Mechanism Be Interpreted?

Source: Harbin Institute of Technology SCIR. This article is approximately 9,300 words long; estimated reading time 10+ minutes. It discusses the interpretability of the attention mechanism. Introduction: Since Bahdanau introduced Attention as soft alignment in neural machine translation in 2014, a large number of natural language processing works … Read more
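
The attention entries above all start from Bahdanau's "soft alignment" view of attention. As a rough illustration of what that term refers to (a minimal sketch, not code from any of the listed articles; the function name, shapes, and random parameters are all invented for the example), the snippet below computes additive attention weights over a toy set of encoder states:

import numpy as np

def additive_attention(decoder_state, encoder_states, W_d, W_e, v):
    # Additive (Bahdanau-style) attention: score each encoder state against the
    # current decoder state, then softmax the scores into a distribution.
    # decoder_state:  (d,)    current decoder hidden state
    # encoder_states: (T, d)  encoder hidden states for T source tokens
    # W_d, W_e:       (d, d)  projection matrices (random here, learned in practice)
    # v:              (d,)    scoring vector (random here, learned in practice)
    scores = np.tanh(decoder_state @ W_d + encoder_states @ W_e) @ v  # (T,)
    weights = np.exp(scores - scores.max())                            # stable softmax
    return weights / weights.sum()                                     # sums to 1

# Toy usage: 5 source tokens, hidden size 8
rng = np.random.default_rng(0)
d, T = 8, 5
alpha = additive_attention(rng.normal(size=d), rng.normal(size=(T, d)),
                           rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                           rng.normal(size=d))
print(alpha, alpha.sum())  # nonnegative weights over source positions, summing to 1

The resulting weight vector is the "soft alignment": every source token receives some attention mass, and the interpretability debate in the referenced papers is about whether these weights can be read as explanations of the model's predictions.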