Prompt-Based Contrastive Learning for Sentence Representation

This article is approximately 1,100 words long; estimated reading time 5 minutes. It proposes using prompts to obtain sentence representations. Although language models like BERT have achieved significant results, they still perform poorly at producing sentence embeddings due to embedding bias and anisotropy. We found that using …

In-Depth Guide to Prompt Learning and Tuning

The MLNLP community is a well-known machine learning and natural language processing community in China and abroad, serving NLP graduate students, university teachers, and corporate researchers. Its vision is to promote communication and progress between academia and industry in natural language processing and machine learning, especially for beginners. Reprinted …

Application of Prompt in NER Scenarios

Precise Induction of Language Model Knowledge Through Prompt Construction

NLP Paradigm Evolution. Fully supervised learning (non-neural): trains a task-specific model only on input-output samples for the target task, relying heavily on feature engineering. Fully supervised learning (neural): combines feature learning with model training, shifting the research focus to architecture engineering, i.e., designing a network architecture (such as CNN, RNN, or Transformer) …

From CLIP to CoOp: A New Paradigm for Visual-Language Models

Reprinted from Smarter. Recently, a new Prompt paradigm has been proposed in the NLP field, aiming to supersede the original fine-tuning method. In the CV field, Prompt can be understood as the design of image labels. From this perspective, …

From Text Matching to Semantic Relevance

Introduction: Text similarity is a fundamental task in industrial NLP. Many applications require computing the degree of similarity between two texts, including deduplicating similar documents in text retrieval, matching user queries against standard template questions in question-answering systems, and judging the semantic relation of sentence pairs. The task can be categorized by different criteria: …
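As a rough illustration of the matching task described above (my own baseline sketch, not taken from the article): a bag-of-words cosine similarity scores token overlap between two texts. The semantic models the article surveys go well beyond this, but the interface, two texts in, one score out, is the same.

```python
# Minimal sketch: bag-of-words cosine similarity for text matching.
# A simple lexical baseline, not a semantic model.
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    # Count word occurrences in each text (lowercased, whitespace-split).
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine_sim("how do I reset my password",
                 "reset password steps"))      # nonzero: shared tokens
print(cosine_sim("how do I reset my password",
                 "today's weather forecast"))  # 0.0: no shared tokens
```

This baseline misses paraphrases with no word overlap, which is exactly the gap embedding-based semantic relevance models are meant to close.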

5-Minute NLP Series: Word2Vec and Doc2Vec

Source: Deephub Imba. This article is approximately 800 words long; estimated reading time 5 minutes. It mainly introduces Word2Vec and Doc2Vec. Doc2Vec is an unsupervised algorithm that learns embeddings for variable-length text segments (such as sentences, paragraphs, and documents). It first appeared in the paper Distributed Representations of Sentences and …

Overview of Human Language Understanding and Reasoning

Christopher Manning, currently a professor at Stanford University and head of the Stanford Natural Language Processing Group, is a renowned scholar in the field of natural language processing. He is a fellow of prestigious international academic organizations such as the ACM, AAAI, and ACL, and has won best paper awards at top international …

Performance Comparison of Text Embedding Techniques: GPT-3, BERT, GloVe, and Word2Vec

Source: DeepHub IMBA. This article is about 3,000 words long; estimated reading time 6 minutes. With the latest advancements in natural language processing, OpenAI's GPT-3 has become one of the most powerful language models on the market. On January 25, 2022, OpenAI announced an embeddings endpoint (Neelakantan et al., 2022). This …

Understanding Word2Vec: A Deep Dive into Word Vectors

A summary of Word2vec references. First, let me briefly describe my deep dive into Word2vec: as usual, I started by reading Mikolov's two original papers on Word2vec, but I found myself still confused after finishing them, mainly because the papers omit much of the theoretical background and derivation detail. I then dug …