Nine Research Hotspots in Natural Language Processing

On the morning of May 23, 2020, at the "ACL-IJCAI-SIGIR Top Conference Paper Reporting Meeting (AIS 2020)", organized by the Youth Working Committee of the Chinese Information Processing Society of China and hosted by the Beijing Zhiyuan Artificial Intelligence Research Institute and Meituan-Dianping, Jia Jia, a Zhiyuan young scientist, doctoral supervisor, and associate professor in the Department of Computer Science and Technology at Tsinghua University, gave a report titled "NLP in IJCAI 2020".
Jia Jia, a Zhiyuan young scientist, is a doctoral supervisor and associate professor in the Department of Computer Science and Technology at Tsinghua University. She is secretary-general of the Speech Dialogue and Hearing Special Committee of the China Computer Federation and secretary-general of the Speech Professional Committee of the Chinese Information Processing Society of China, and is mainly responsible for the student work of the society's Youth Working Committee. Her primary research direction is affective computing.
IJCAI is a top international academic conference in the field of artificial intelligence. In her talk, Jia Jia surveyed the main achievements and research trends in natural language processing based on the accepted papers of IJCAI 2020, covering nine topics along the dimensions of algorithms and tasks: unsupervised pre-training, cross-language learning, meta-learning and few-shot learning, transfer learning, bias, knowledge fusion, question answering, natural language generation, and multi-modality.
Below are the highlights of Jia Jia’s speech.

Compiled by: Luo Li, Zhiyuan Community

NLP Hotspots in IJCAI 2020 Word Cloud

There are over 80 papers related to natural language processing in IJCAI 2020. By performing word cloud analysis on the keywords, we can see that deep learning still dominates natural language processing.
Figure 1: Number of NLP papers in IJCAI over the years and keyword "word cloud" analysis
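
For readers who want to reproduce this kind of analysis, below is a minimal sketch using Python's wordcloud package; the keyword string is a stand-in, since the actual keywords from the 80+ accepted papers are not reproduced here.

```python
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Stand-in keyword text; in the talk this came from the keywords of
# the 80+ accepted IJCAI 2020 NLP papers, which are not included here.
keywords = (
    "deep learning attention network dialogue generation "
    "entity relation extraction knowledge graph pre-training"
)

# Build and display the word cloud.
wc = WordCloud(width=800, height=400, background_color="white").generate(keywords)
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```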
In addition to deep learning, the word cloud also contains other research hotspots from 2020, mainly summarized in the following four aspects:
(1) Generative tasks, such as dialogue generation and paragraph generation.
(2) Network structure design, where researchers favor attention mechanisms when designing network structures.
(3) Entity relation extraction and entity recognition, which were widely researched in this year’s IJCAI.
(4) Combining knowledge with neural networks, with more and more researchers designing model frameworks that integrate the two.
Next, Jia Jia summarized the NLP-related research in IJCAI 2020 from two dimensions (algorithmic level and task level) and nine aspects.
Figure 2: Nine highlights in IJCAI NLP research

Algorithmic Level Research Summary in NLP

1. Unsupervised Pre-training

Pre-training language models have always been a research hotspot in the NLP field, significantly enhancing the performance of various NLP tasks.
Figure 3: General Language Models Related to BERT
Figure 3 shows the series of general language models that followed BERT. In IJCAI 2020, related work also focused on pre-training language models, including both general pre-trained models, such as EViLBERT[1] and AdaBERT[2], and task-specific pre-trained models, such as BERT-INT[3], BERT-PLI[4], and FinBERT[5].
EViLBERT is a language model obtained through multi-modal pre-training, learning image-grounded sense embeddings that obviate the need for image captioning; AdaBERT tackles BERT's long inference time and large parameter count by using differentiable neural architecture search for compression; BERT-INT addresses knowledge graph alignment; BERT-PLI addresses legal case retrieval; FinBERT addresses financial text mining.
The emergence of BERT has greatly promoted the development of the NLP field. Jia Jia speculates that research related to BERT in NLP will mainly focus on two aspects in the coming years:
(1) How to accelerate the training process of unsupervised language models;
(2) How to find better network structures while reducing time overhead.
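
To make the pre-training objective concrete, here is a minimal sketch of masked language modeling with a BERT checkpoint via the Hugging Face transformers library; it illustrates the general recipe behind these models, not any specific IJCAI 2020 system.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load a BERT checkpoint and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# The pre-training task: predict the token hidden behind [MASK].
text = "Pre-trained language models improve many [MASK] tasks."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and read off the model's top guesses.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top5 = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))
```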

2. Cross-Language Learning

In recent years, cross-language learning has drawn increasing attention in the NLP field, driven by significant practical demand. IJCAI 2020 also addressed cross-language problems, covering word embeddings, unsupervised models, machine translation, and more. This line of work matters because it promotes cultural exchange and, more importantly, greatly facilitates deploying NLP technologies in many non-English scenarios.
Figure 4: Example of Cross-Language Learning
Figure 4 shows an example of cross-language learning: by learning cross-lingual word embeddings, words with similar meanings in different languages are mapped to similar vectors.
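
A classic supervised way to obtain such aligned embeddings is to learn an orthogonal mapping between two monolingual embedding spaces over a seed dictionary, the so-called Procrustes solution. Below is a minimal sketch, with random matrices standing in for real embedding pairs.

```python
import numpy as np

# Hypothetical seed dictionary: row i of X (source language) should
# align with row i of Y (target language). Random stand-ins here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))
Y = rng.normal(size=(1000, 300))

# Orthogonal Procrustes: W = argmin ||XW - Y||_F  s.t.  W^T W = I,
# solved in closed form from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

aligned = X @ W  # source vectors mapped into the target space
```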
In unsupervised cross-language research, pre-training of cross-lingual models has been a topic of keen interest. In IJCAI 2020, UniTrans[6] studied unsupervised cross-lingual named entity recognition, and other researchers explored unsupervised domain adaptation of pretrained cross-lingual language models[7].
Compared with unsupervised methods, supervised methods perform better in cross-language research, and parallel corpora remain crucial for machine translation and related problems. In IJCAI 2020, supervised cross-language research included work on generating paraphrases from bilingual parallel corpora[8], also known as bilingual generation, and work using cross-lingual annotations for word sense disambiguation[9].
Additionally, machine translation is also an important direction in cross-language research, with seven papers related to machine translation in IJCAI 2020.

3. Meta-Learning and Few-Shot Learning

In recent years, meta-learning and few-shot learning have gradually become research hotspots in academia, and IJCAI 2020 mainly explored their application in the NLP field, with few-shot learning widely applied to various classification tasks. Through few-shot learning, neural networks can generalize to new categories from very few samples, while meta-learning is an important means of achieving few-shot learning, with MAML (Model-Agnostic Meta-Learning) as a representative algorithm.
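
For intuition, here is a minimal second-order MAML sketch in PyTorch on toy regression tasks; the task sampler is hypothetical and serves only to illustrate the inner-/outer-loop structure.

```python
import torch

def model(params, x):
    """Tiny linear model, kept functional so gradients can flow
    through the inner-loop update (the core trick in MAML)."""
    w, b = params
    return x @ w + b

def loss_fn(params, x, y):
    return ((model(params, x) - y) ** 2).mean()

def sample_tasks(n_tasks=4, n_support=10, n_query=10):
    """Hypothetical task sampler: random linear-regression tasks."""
    for _ in range(n_tasks):
        true_w = torch.randn(4, 1)
        xs = torch.randn(n_support + n_query, 4)
        ys = xs @ true_w
        yield xs[:n_support], ys[:n_support], xs[n_support:], ys[n_support:]

meta_params = [torch.randn(4, 1, requires_grad=True),
               torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.SGD(meta_params, lr=1e-2)
inner_lr = 0.1

for step in range(100):
    meta_opt.zero_grad()
    for x_s, y_s, x_q, y_q in sample_tasks():
        # Inner loop: one adaptation step on the support set, keeping
        # the graph so the outer loss can backprop through the update.
        grads = torch.autograd.grad(loss_fn(meta_params, x_s, y_s),
                                    meta_params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(meta_params, grads)]
        # Outer loop: loss of the adapted parameters on the query set.
        loss_fn(adapted, x_q, y_q).backward()
    meta_opt.step()
```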
In IJCAI 2020, several papers explored these two methods in the NLP field. For example, in QA via meta-learning[10], the authors built a question-answering model over complex knowledge bases using meta-learning; in few-shot learning research, researchers applied few-shot learning to the medical NLP setting[11], classifying diseases from clinical case text.

4. Transfer Learning

Transfer learning has long been a research hotspot in machine learning, and it was very active at IJCAI 2020. In the era of deep learning, how to transfer learned knowledge to new domains, and especially how to transfer the knowledge contained in large-scale unlabeled corpora to various downstream tasks, has received widespread attention from researchers.
In transfer learning, the most typical pattern is pre-training + fine-tuning, which has increasingly attracted the attention of NLP researchers with the popularity of BERT.
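
As a concrete illustration of the pre-training + fine-tuning pattern, the sketch below fine-tunes a BERT checkpoint for sentence classification with Hugging Face transformers; it runs a single toy gradient step, not a full training recipe, and the labels are hypothetical.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # new task head, randomly initialized

# A toy two-example sentiment batch (hypothetical labels).
batch = tokenizer(["great movie", "terrible plot"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy from the task head
loss.backward()
optimizer.step()
```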
On the other hand, unlike the simple pre-training + fine-tuning pattern, many researchers are committed to exploring more advanced transfer learning frameworks. In IJCAI 2020, some studies discussed knowledge transfer under reading comprehension[12], while others researched text style transfer[13].
Transfer learning also includes domain adaptation at the dataset level. In IJCAI 2020, the article "Domain Adaptation for Semantic Parsing"[14] introduced domain adaptation for semantic parsing. These studies explore more advanced frameworks that deserve continued tracking and attention.

5. Bias

In the NLP field, imbalanced datasets and inherent prejudices in the data can introduce various biases, such as gender bias and racial bias. If these biases are not addressed, they can lead to discrimination against certain groups.
Figure 5: Example of Bias in NLP
For example, when we visualize word embeddings, we find that many words' embeddings correlate with gender: words like "brilliant" and "genius" tend to be more associated with males, while words like "dance" and "beautiful" tend to be more associated with females. Eliminating such biases is crucial for NLP algorithms.
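
One simple way to quantify and remove such correlations is to project embeddings onto a gender direction and subtract that component, in the spirit of Bolukbasi et al.'s hard debiasing; the sketch below uses random vectors as placeholders for real pre-trained embeddings.

```python
import numpy as np

# Placeholder embeddings; in practice these come from GloVe/word2vec.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "brilliant", "dance"]}

def unit(v):
    return v / np.linalg.norm(v)

# A common heuristic: take the gender direction as he - she.
g = unit(emb["he"] - emb["she"])

def gender_score(word):
    """Cosine of the word vector with the gender direction."""
    return float(unit(emb[word]) @ g)

def neutralize(word):
    """Hard debiasing: remove the component along the gender direction."""
    v = emb[word]
    return v - (v @ g) * g

print(gender_score("brilliant"), gender_score("dance"))
emb["brilliant"] = neutralize("brilliant")
print(gender_score("brilliant"))  # ~0 after neutralizing
```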
In IJCAI 2020, several papers related to bias in NLP were presented. In the paper WEFE[15], the authors proposed a framework to test whether word embeddings are fair, and another paper presented new testing methods and platforms and conducted rigorous testing on the fairness of NLP models[16].

6. Knowledge Fusion

Although NLP models make widespread use of large-scale corpora, current NLP research still lacks a structured understanding of them, especially when it comes to complex language. Therefore, in recent years many researchers have begun integrating structured knowledge, such as knowledge graphs, into natural language processing frameworks, as in the ERNIE framework from ACL 2019[17].
Figure 6: Example of Knowledge Fusion
Figure 6 shows an example from the ERNIE paper. Solid lines represent existing knowledge, while the red and green dashed lines represent facts extracted from the sentences. By incorporating structured knowledge, entity-relation extraction from sentences achieves better results.
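
As a rough illustration of this idea, the sketch below fuses token hidden states with linked-entity embeddings through a small aggregation layer; it is loosely inspired by ERNIE-style fusion and is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class KnowledgeFusion(nn.Module):
    """Fuse token hidden states with linked-entity embeddings via a
    small aggregation layer (loosely ERNIE-inspired, not the exact one)."""

    def __init__(self, hidden, ent_dim):
        super().__init__()
        self.ent_proj = nn.Linear(ent_dim, hidden)
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, token_states, entity_emb, alignment):
        # alignment: (batch, seq) indices into entity_emb; index 0 can
        # be reserved as a "no linked entity" embedding.
        ent = self.ent_proj(entity_emb[alignment])  # (batch, seq, hidden)
        return torch.tanh(self.fuse(torch.cat([token_states, ent], dim=-1)))

fusion = KnowledgeFusion(hidden=768, ent_dim=100)
token_states = torch.randn(2, 16, 768)      # e.g. BERT encoder outputs
entity_emb = torch.randn(50, 100)           # pre-trained KG embeddings
alignment = torch.randint(0, 50, (2, 16))   # token -> entity links
fused = fusion(token_states, entity_emb, alignment)  # (2, 16, 768)
```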
Many researchers are focusing on how to integrate knowledge into NLP models. In IJCAI 2020, there were ten related papers, divided into two categories:
(1) Using knowledge graphs to enhance the performance of existing NLP tasks, including: using knowledge to improve reading comprehension[18]; using knowledge to enhance QA performance[19]; knowledge-enhanced event causality detection[20]; knowledge-graph-enhanced neural machine translation[21]; and knowledge-aware dialogue generation[22].
(2) Constructing and completing knowledge graphs and generating knowledge. In this line of work, Mucko[23] explored cross-modal knowledge reasoning; BERT-INT[24] studied knowledge graph alignment; and TransOMCS[25] investigated how to generate commonsense knowledge. These represent significant work on knowledge graph construction and completion at IJCAI 2020.

Task Level Research Summary in NLP

1. Question Answering

In recent years, research on QA has gradually evolved from simple QA to complex QA. Simple QA can be understood as simple pattern matching, while complex QA usually requires reasoning, even multi-hop reasoning. In IJCAI 2020, three papers explored combining knowledge graphs with QA to achieve more complex QA: Mucko[26]; Retrieve, Program, Repeat[27]; and "Formal Query Building with Query Structure Prediction for Complex Question Answering over Knowledge Base"[28]. Work such as LogiQA[29] and "Two-Phase Hypergraph Based Reasoning with Dynamic Relations for Multi-Hop KBQA"[30] discussed reasoning and multi-hop reasoning in QA. QA is also often combined with other tasks in multi-task frameworks to enhance the performance of all the tasks involved.
In IJCAI 2020, researchers combined QA with reading comprehension and entity-relation extraction[31], and combined QA with text generation tasks[32]; these are good examples of building multi-task frameworks around QA.

2. Natural Language Generation

Natural language generation has broad application prospects and has been a research hotspot in recent years. Before deep learning became popular, traditional NLG required multiple steps, including content planning, information aggregation, and syntactic analysis. With the emergence of generative models like GAN and VAE, as well as sequence models such as Sequence2Sequence and Transformer, deep learning-based natural language generation has made great strides.
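
As a concrete illustration of how such sequence models produce text token by token, here is a generic greedy decoding loop; `model` stands for any encoder-decoder that returns vocabulary logits, an assumption rather than any particular system from the conference.

```python
import torch

def greedy_decode(model, src_ids, bos_id, eos_id, max_len=50):
    """Generic greedy decoding: feed the growing target prefix back in
    and pick the argmax token until EOS or the length limit.
    `model(src, tgt)` is assumed to return (1, tgt_len, vocab) logits."""
    tgt = torch.tensor([[bos_id]])
    for _ in range(max_len):
        logits = model(src_ids, tgt)
        next_id = int(logits[0, -1].argmax())
        tgt = torch.cat([tgt, torch.tensor([[next_id]])], dim=1)
        if next_id == eos_id:
            break
    return tgt[0].tolist()
```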
In IJCAI 2020, a significant amount of work focused on NLG, with twelve papers studying generation problems. These papers were spread across different tasks and generation targets, such as dialogue generation[33], paraphrase generation[34], response generation[35], legal text generation[36], and comment generation[37]. Many studies also discussed general NLG frameworks that can be applied across tasks. With the rapid development of pre-trained models, research combining pre-trained models with NLG has emerged, such as ERNIE-GEN[38], along with studies on generating text from structured data[39] and on evaluating NLG via unbalanced optimal transport[40]. Natural language generation thus received comprehensive coverage, reflecting its popularity at IJCAI.

3. Multi-Modality

Multi-modality, especially the combination of text with other modalities such as audio, video, and images, has always been a hot research topic and was a very important part of IJCAI 2020, with a total of seven studies related to multi-modality this year.
Visual question answering (VQA) is one of the research hotspots, with four papers in IJCAI 2020 exploring knowledge reasoning[41], self-supervision[42], and network design[43] to enhance QA performance with visual information. Other work studied video semantic reasoning for better video-text retrieval[44], as well as vision-and-language navigation[45], where a model must understand both language and images, locating the positions and key points described in the language in order to execute the corresponding actions; the cited work diagnoses environment-induced bias to increase navigation robustness. With the rapid development of BERT, many studies in IJCAI 2020 combined visual modalities with pre-trained models, achieving good results on various cross-modal tasks.

References

[1] Agostina Calabrese, Michele Bevilacqua, Roberto Navigli. EViLBERT: Learning Task-Agnostic Multimodal Sense Embeddings. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 481-487.

[2] Daoyuan Chen, Yaliang Li, Minghui Qiu, et al. AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 2463-2469.

[3] Xiaobin Tang, Jing Zhang, Bo Chen, et al. BERT-INT: A BERT-based Interaction Model For Knowledge Graph Alignment. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3174-3180.

[4] Yunqiu Shao, Jiaxin Mao, Yiqun Liu, et al. BERT-PLI: Modeling Paragraph-Level Interactions for Legal Case Retrieval. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3501-3507.

[5] Zhuang Liu, Degen Huang, Kaiyu Huang, et al. FinBERT: A Pre-trained Financial Language Representation Model for Financial Text Mining. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Special Track on AI in FinTech. 2021. pp. 4513-4519.

[6] Qianhui Wu, Zijia Lin, Börje F. Karlsson, et al. UniTrans: Unifying Model Transfer and Data Transfer for Cross-Lingual Named Entity Recognition with Unlabeled Data. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3926-3932.

[7] Juntao Li, Ruidan He, Hai Ye, et al. Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3672-3678.

[8] Mingtong Liu, Erguang Yang, Deyi Xiong, et al. Exploring Bilingual Parallel Corpora for Syntactically Controllable Paraphrase Generation. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3955-3961.

[9] Edoardo Barba, Luigi Procopio, Niccolò Campolungo, et al. MuLaN: Multilingual Label propagatioN for Word Sense Disambiguation. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3837-3844.

[10] Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, et al. Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3679-3686.

[11] Congzheng Song, Shanghang Zhang, Najmeh Sadoughi, et al. Generalized Zero-Shot Text Classification for ICD Coding. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 4018-4024.

[12] Xin Liu, Kai Liu, Xiang Li, et al. An Iterative Multi-Source Mutual Knowledge Transfer Framework for Machine Reading Comprehension. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3794-3800.

[13] Xiaoyuan Yi, Zhenghao Liu, Wenhao Li, et al. Text Style Transfer via Learning Style Instance Supported Latent Space. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3801-3807.

[14] Zechang Li, Yuxuan Lai, Yansong Feng, et al. Domain Adaptation for Semantic Parsing. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3723-3729.

[15] Pablo Badilla, Felipe Bravo-Marquez, Jorge Pérez. WEFE: The Word Embeddings Fairness Evaluation Framework. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 430-436.

[16] Pingchuan Ma, Shuai Wang, Jin Liu. Metamorphic Testing and Certified Mitigation of Fairness Violations in NLP Models. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 458-465.

[17] Dongling Xiao, Han Zhang, Yukun Li, et al. ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3997-4003.

[18] Xin Liu, Kai Liu, Xiang Li, et al. An Iterative Multi-Source Mutual Knowledge Transfer Framework for Machine Reading Comprehension. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3794-3800.

[19] Yongrui Chen, Huiying Li, Yuncheng Hua, et al. Formal Query Building with Query Structure Prediction for Complex Question Answering over Knowledge Base. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3751-3758.

[20] Jian Liu, Yubo Chen, Jun Zhao. Knowledge Enhanced Event Causality Identification with Mention Masking Generalizations. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3608-3614.

[21] Yang Zhao, Jiajun Zhang, Yu Zhou, et al. Knowledge Graphs Enhanced Neural Machine Translation. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 4039-4045.

[22] Sixing Wu, Ying Li, Dawei Zhang, et al. TopicKA: Generating Commonsense Knowledge-Aware Dialogue Responses Towards the Recommended Topic Fact. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3766-3772.

[23] Zihao Zhu, Jing Yu, Yujing Wang, et al. Mucko: Multi-Layer Cross-Modal Knowledge Reasoning for Fact-based Visual Question Answering. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 1097-1103.

[24] Xiaobin Tang, Jing Zhang, Bo Chen, et al. BERT-INT: A BERT-based Interaction Model For Knowledge Graph Alignment. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3174-3180.

[25] Hongming Zhang, Daniel Khashabi, Yangqiu Song, et al. TransOMCS: From Linguistic Graphs to Commonsense Knowledge. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 4004-4010.

[26] Zihao Zhu, Jing Yu, Yujing Wang, et al. Mucko: Multi-Layer Cross-Modal Knowledge Reasoning for Fact-based Visual Question Answering. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 1097-1103.

[27] Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, et al. Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3679-3686.

[28] Yongrui Chen, Huiying Li, Yuncheng Hua, et al. Formal Query Building with Query Structure Prediction for Complex Question Answering over Knowledge Base. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3751-3758.

[29] Jian Liu, Leyang Cui, Hanmeng Liu, et al. LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3622-3628.

[30] Jiale Han, Bo Cheng, Xu Wang. Two-Phase Hypergraph Based Reasoning with Dynamic Relations for Multi-Hop KBQA. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3615-3621.

[31] Tianyang Zhao, Zhao Yan, Yunbo Cao, et al. Asking Effective and Diverse Questions: A Machine Reading Comprehension based Framework for Joint Entity-Relation Extraction. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3948-3954.

[32] Weijing Huang, Xianfeng Liao, Zhiqiang Xie, et al. Generating Reasonable Legal Text through the Combination of Language Modeling and Question Answering. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3687-3693.

[33] Hengyi Cai, Hongshen Chen, Yonghao Song, et al. Exemplar Guided Neural Dialogue Generation. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3601-3607.

[34] Mingtong Liu, Erguang Yang, Deyi Xiong, et al. Exploring Bilingual Parallel Corpora for Syntactically Controllable Paraphrase Generation. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3955-3961.

[35] Shifeng Li, Shi Feng, Daling Wang, et al. EmoElicitor: An Open Domain Response Generation Model with User Emotional Reaction Awareness. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3637-3643.

[36] Weijing Huang, Xianfeng Liao, Zhiqiang Xie, et al. Generating Reasonable Legal Text through the Combination of Language Modeling and Question Answering. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3687-3693.

[37] Shijie Yang, Liang Li, Shuhui Wang, et al. A Structured Latent Variable Recurrent Network With Stochastic Attention For Generating Weibo Comments. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3962-3968.

[38] Dongling Xiao, Han Zhang, Yukun Li, et al. ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3997-4003.

[39] Yang Bai, Ziran Li, Ning Ding, et al. Infobox-to-text Generation with Tree-like Planning based Attention Network. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3773-3779.

[40] Yimeng Chen, Yanyan Lan, Ruibin Xiong, et al. Evaluating Natural Language Generation via Unbalanced Optimal Transport. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 3730-3736.

[41] Zihao Zhu, Jing Yu, Yujing Wang, et al. Mucko: Multi-Layer Cross-Modal Knowledge Reasoning for Fact-based Visual Question Answering. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 1097-1103.

[42] Xi Zhu, Zhendong Mao, Chunxiao Liu, et al. Overcoming Language Priors with Self-supervised Learning for Visual Question Answering. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 1083-1089.

[43] Ganchao Tan, Daqing Liu, Meng Wang, et al. Learning to Discretely Compose Reasoning Module Networks for Video Captioning. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 745-752.

[44] Zerun Feng, Zhimin Zeng, Caili Guo, et al. Exploiting Visual Semantic Reasoning for Video-Text Retrieval. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 1005-1011.

[45] Yubo Zhang, Hao Tan, Mohit Bansal. Diagnosing the Environment Bias in Vision-and-Language Navigation. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Main track. 2021. pp. 890-897.

Copyright Notice

This article is authorized to be reproduced from the WeChat public account “Beijing Zhiyuan Artificial Intelligence Research Institute”.
