Strategies for Saving GPU Memory in PyTorch

Author: OpenMMLab | Editor: Jishi Platform | Original: https://zhuanlan.zhihu.com/p/430123077 Introduction: With the rapid development of deep learning, the explosive growth in model parameter counts has placed ever higher demands on GPU memory capacity. How to train models on GPUs with limited memory … Read more
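One family of strategies the linked article covers is trading compute for memory, as in gradient checkpointing (`torch.utils.checkpoint` in PyTorch). As a minimal sketch of the idea only, in plain Python rather than the article's own code: keep just every k-th intermediate activation during the forward pass and recompute the dropped ones from the nearest checkpoint when they are needed again.

```python
# Illustrative sketch of the checkpointing idea: instead of caching every
# intermediate activation for the backward pass, store only every k-th one
# and recompute the rest on demand. (Not from the article; toy code.)

def run_layers(layers, x, checkpoint_every=None):
    """Apply layers to x; return the output and the activations kept."""
    saved = {0: x}  # activation entering layer 0
    for i, f in enumerate(layers):
        x = f(x)
        if checkpoint_every is None or (i + 1) % checkpoint_every == 0:
            saved[i + 1] = x
    return x, saved

def recompute(layers, saved, i, checkpoint_every):
    """Recover the activation entering layer i from the nearest checkpoint."""
    start = (i // checkpoint_every) * checkpoint_every
    x = saved[start]
    for j in range(start, i):
        x = layers[j](x)
    return x

layers = [lambda x, a=a: x + a for a in range(1, 9)]  # 8 toy "layers"
out_full, full = run_layers(layers, 0)
out_ck, ck = run_layers(layers, 0, checkpoint_every=4)
assert out_full == out_ck                 # same result either way
assert len(full) == 9 and len(ck) == 3    # far fewer cached activations
assert recompute(layers, ck, 6, 4) == full[6]  # dropped one is recoverable
```

The memory saved is the activations no longer cached; the cost is rerunning the segment between checkpoints, which is exactly the trade `torch.utils.checkpoint` makes on real networks.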

Reducing RNN Memory Usage by 90%: University of Toronto’s Reversible Neural Networks

From arXiv | Authors: Matthew MacKay et al. | Translated by Machine Heart (contributors: Gao Xuan, Zhang Qian) Recurrent neural networks (RNNs) achieve state-of-the-art performance on sequential data, but training them requires a large amount of memory. Reversible recurrent neural networks offer a way to reduce the memory requirements of training, as … Read more
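The core trick behind reversible networks (a hedged sketch of the general additive-coupling idea, not the paper's exact architecture) is an update rule that can be inverted exactly: split the hidden state into two halves and update each using only the other, so earlier states can be reconstructed during the backward pass instead of being stored.

```python
# Sketch of a reversible additive-coupling update, in the spirit of
# reversible RNNs/ResNets. The previous state is recovered exactly from
# the next one, so intermediates need not be cached. Toy code only.

def f(h, x):  # stand-in for an arbitrary sub-network
    return 2 * h + x

def g(h, x):
    return h - 3 * x

def forward(h1, h2, x):
    y1 = h1 + f(h2, x)   # first half updated from the second
    y2 = h2 + g(y1, x)   # second half updated from the new first half
    return y1, y2

def inverse(y1, y2, x):
    h2 = y2 - g(y1, x)   # undo the updates in reverse order
    h1 = y1 - f(h2, x)
    return h1, h2

inputs = [2, -1, 4]
state = (1, 5)
for x in inputs:                 # forward pass: store nothing but the end
    state = forward(*state, x)
for x in reversed(inputs):       # backward pass: rebuild earlier states
    state = inverse(*state, x)
assert state == (1, 5)           # exact recovery of the initial state
```

Because only the final state (plus the inputs) is needed to rebuild the whole trajectory, activation memory no longer grows with sequence length, which is the source of the large savings the paper reports.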