DeepSeek-V2: A Powerful MoE Language Model
Abstract

We propose DeepSeek-V2, a powerful Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236 billion total parameters, of which 21 billion are activated per token, and supports a context length of 128K tokens. DeepSeek-V2 adopts innovative architectures such as Multi-head Latent Attention (MLA) and DeepSeekMoE. MLA ensures …
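To make the MLA idea concrete, here is a minimal PyTorch sketch of latent KV compression: each token's hidden state is down-projected to a small latent vector, only that latent is cached, and full keys and values are re-expanded from it at attention time. Everything here is an illustrative assumption, not the paper's implementation: the class and dimension names (LatentKVAttention, d_latent, kv_down, k_up, v_up) are made up, and details such as DeepSeek-V2's decoupled RoPE and causal masking are omitted.

```python
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    """Toy MLA-style attention: the per-token KV cache stores only a
    small latent vector instead of full keys and values.
    (Hypothetical sketch; names and dimensions are illustrative.)"""

    def __init__(self, d_model: int = 512, n_heads: int = 8, d_latent: int = 64):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        # Down-projection: hidden state -> compact latent (this is all that is cached)
        self.kv_down = nn.Linear(d_model, d_latent)
        # Up-projections: latent -> full keys/values, recomputed at attention time
        self.k_up = nn.Linear(d_latent, d_model)
        self.v_up = nn.Linear(d_latent, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, latent_cache: torch.Tensor | None = None):
        b, t, _ = x.shape
        latent = self.kv_down(x)  # (b, t, d_latent): the only KV state we keep
        if latent_cache is not None:
            latent = torch.cat([latent_cache, latent], dim=1)  # append to cache
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        # Standard scaled dot-product attention (masking omitted for brevity)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out_proj(out), latent  # return latent as the updated cache

# Usage: cache grows by d_latent floats per token, not 2 * d_model.
layer = LatentKVAttention()
y, cache = layer(torch.randn(1, 16, 512))          # prefill 16 tokens
y2, cache = layer(torch.randn(1, 1, 512), cache)   # decode one more token
```

Under these assumptions, the per-token cache shrinks from 2 x d_model values (full keys and values) to d_latent values, which is the intuition behind MLA's KV-cache compression and the efficient inference the abstract claims.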