Understanding Google’s Powerful NLP Model BERT

Written by | AI Technology Review

Report from Leiphone (leiphone-sz)

Leiphone AI Technology Review notes: This article is an interpretation provided by Pan Shengfeng from Zhuiyi Technology based on Google’s paper for AI Technology Review.

Recently, Google researchers achieved state-of-the-art results on 11 NLP tasks with the new BERT model, sparking considerable discussion in both natural language processing academia and industry. The authors pre-trained the language model on a corpus of 3.3 billion words and then fine-tuned it on various downstream tasks, obtaining the best performance to date on each of them, in some cases by a wide margin over the previous best scores. This work continues a direction in deep learning for natural language processing that has become a hot topic since earlier this year.

BERT’s
