Understanding BERT: Principles, Code, Models, and Fine-tuning Techniques

In October 2018, the BERT model released by Google made a stunning debut, sweeping the leaderboards and even surpassing human baseline scores, a milestone breakthrough for the field of NLP. Today, BERT has become an essential tool for NLP algorithm engineers. “What if there’s too little data?” “Just fine-tune BERT!” “What if RNN … Read more
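The “just fine-tune BERT” recipe the excerpt jokes about usually amounts to a few lines of code. Below is a minimal, hedged sketch using the Hugging Face `transformers` library; the choice of library, model name, and toy batch are my assumptions, since the article itself does not prescribe a specific toolkit.

```python
# A minimal fine-tuning sketch, assuming the Hugging Face `transformers`
# library; model name and the toy batch below are illustrative only.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # binary classification head on top of BERT
)

# Hypothetical toy batch: two sentences with binary sentiment labels.
batch = tokenizer(["great movie", "terrible movie"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # forward pass; returns the loss
outputs.loss.backward()                  # backprop through BERT + classifier head
optimizer.step()
```

In practice this single step would be wrapped in a training loop over a real dataset, but even a small labeled set is often enough because the pre-trained weights carry most of the linguistic knowledge.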

Is BERT Perfect? Do Language Models Truly Understand Language?

Machine Heart Release. Author: Tony, Researcher at Zhuiyi Technology AI Lab. Everyone knows that language models like BERT are widely used in natural language processing. However, a question sometimes arises: do these language models truly understand language? Experts and scholars hold differing opinions. The author of this article elaborates on this topic … Read more

How Well Can BERT Solve Elementary Math Problems?

©PaperWeekly Original. Author: Su Jianlin. Affiliation: Zhuiyi Technology. Research Direction: NLP, Neural Networks. [Figure: the years of “chickens and rabbits in the same cage”] “Profit and loss” problems, “age” problems, “tree planting” problems, “cows eating grass” problems, “profit” problems… Have you ever been tormented by these kinds of math word problems in elementary school? No worries, machine … Read more

Innovations in the Era of BERT: Comparison of BERT Application Models and More

Author: Dr. Zhang Junlin, Senior Algorithm Expert at Sina Weibo. Zhihu Column: Notes on the Frontiers of Deep Learning. This article is republished with authorization; click “Read the original” at the end of the article to go directly to https://zhuanlan.zhihu.com/p/65470719. Over the past two months, I have been paying close attention to the current application status … Read more

How BERT Understands Language: Google’s LIT Interactive Platform

New Intelligence Report. Editor: QJP. [New Intelligence Guide] As NLP models become increasingly powerful and are deployed in real-world scenarios, understanding their predictions becomes ever more crucial. Recently, Google released the Language Interpretability Tool (LIT), a new approach to explaining and analyzing NLP models, making their results less of … Read more

Summary of BERT Related Papers, Articles, and Code Resources

BERT has been very popular recently, so let’s gather some related resources, including papers, code, and article interpretations. 1. Official Google resources: 1) “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. Everything started with this paper, released by Google in October 2018, which instantly ignited the entire AI community, social media included: https://arxiv.org/abs/1810.04805. 2) GitHub: … Read more

When Bert Meets Keras: The Simplest Way to Use Bert

Author: Su Jianlin. Research Direction: NLP, Neural Networks. Personal Homepage: kexue.fm. Bert probably needs little introduction at this point. Although I’m not a big fan of Bert, I must admit it has caused quite a stir in the NLP community. Nowadays, whether in Chinese or English, there is a plethora of popular science … Read more
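For reference, the loading pattern the article’s title alludes to typically looks like the sketch below. It assumes the third-party `keras-bert` package (not necessarily the exact setup the article uses) and locally downloaded Google checkpoint files; all file paths are placeholders.

```python
# Minimal sketch of using pre-trained BERT from Keras, assuming the
# third-party `keras-bert` package; checkpoint paths are placeholders.
import codecs
import numpy as np
from keras_bert import load_trained_model_from_checkpoint, Tokenizer

config_path = "chinese_L-12_H-768_A-12/bert_config.json"     # placeholder
checkpoint_path = "chinese_L-12_H-768_A-12/bert_model.ckpt"  # placeholder
vocab_path = "chinese_L-12_H-768_A-12/vocab.txt"             # placeholder

# Build the vocabulary dict and a WordPiece tokenizer from it.
token_dict = {}
with codecs.open(vocab_path, "r", "utf8") as reader:
    for line in reader:
        token_dict[line.strip()] = len(token_dict)
tokenizer = Tokenizer(token_dict)

# Load the Google TF checkpoint as a ready-to-use Keras model.
model = load_trained_model_from_checkpoint(config_path, checkpoint_path)

# Encode one sentence and extract its contextual embeddings.
token_ids, segment_ids = tokenizer.encode("语言模型")
vectors = model.predict([np.array([token_ids]), np.array([segment_ids])])
print(vectors.shape)  # (1, sequence_length, hidden_size)
```

The appeal of this route is that the checkpoint arrives as an ordinary Keras model, so downstream layers can be stacked on top of it just like any other Keras component.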

K-BERT Model: Knowledge Empowerment with Knowledge Graphs

Author: Zhou Peng. Affiliation: Tencent. Research Direction: Natural Language Processing, Knowledge Graphs. Background: in the past two years, unsupervised pre-trained language representation models such as Google’s BERT have achieved remarkable results on various NLP tasks. These models are pre-trained on large-scale open-domain corpora to obtain general language representations, and are then fine-tuned on specific downstream tasks to absorb domain-specific … Read more

NLP Pre-training Models in the Post-BERT Era

This article introduces several papers that improve BERT’s pre-training process, including “Pre-Training with Whole Word Masking for Chinese BERT”, “ERNIE: Enhanced Representation through Knowledge Integration”, and “ERNIE 2.0: A Continual Pre-training Framework for Language Understanding”. Note: these papers all make different improvements to the masking in BERT’s pre-training phase, but do not modify … Read more
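To make the whole-word-masking idea concrete, the sketch below contrasts it with BERT’s original per-subword masking. This is an illustrative toy, not code from any of the cited papers; the tokenization and helper names are my own.

```python
# Illustrative sketch (not from the cited papers): whole word masking
# masks every WordPiece of a chosen word, whereas original BERT may mask
# a single piece and leave the rest of the word visible as a hint.
import random

# A WordPiece-style tokenization; "##" marks continuation pieces.
tokens = ["the", "phil", "##am", "##mon", "sings"]

def whole_word_spans(tokens):
    """Group subword indices so each span covers one whole word."""
    spans = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and spans:
            spans[-1].append(i)  # continuation piece joins the previous word
        else:
            spans.append([i])    # start of a new word
    return spans

def mask_whole_word(tokens, seed=0):
    random.seed(seed)
    span = random.choice(whole_word_spans(tokens))
    return ["[MASK]" if i in span else t for i, t in enumerate(tokens)]

print(mask_whole_word(tokens))
# Original BERT could instead mask only "##am", leaving "phil" and
# "##mon" intact, which makes the cloze task much easier to cheat on.
```

Masking at the word level removes those partial-word hints, which is exactly why it matters for Chinese, where a single word typically spans several characters or WordPieces.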