NLP Pre-training Models in the Post-BERT Era


This article introduces several papers that improve BERT's pretraining process, including Pre-Training with Whole Word Masking for Chinese BERT, ERNIE: Enhanced Representation through Knowledge Integration, and ERNIE 2.0: A Continual Pre-training Framework for Language Understanding. Note: these papers each refine the masking strategy used in BERT's pretraining phase, but do not modify …

Comparative Evaluation of BERT and ERNIE in NLP


Yunzhong from Aofeisi, Quantum Bit Report | WeChat Official Account QbitAI. How do BERT and ERNIE, the two most closely watched models in the NLP field, actually perform? Recently, someone ran a head-to-head comparison, and in a Chinese-language environment the results were a pleasant surprise. What are the specific details? Let's take a look at this technical …