RNN Learns Suitable Hidden Dimensions with White Noise

Abstract: Neural networks need the right representations of input data to learn. Recently published in Nature Machine Intelligence, a new study examines how gradient learning shapes a fundamental property of representations in recurrent neural networks (RNNs): their dimensionality. Through simulations and mathematical analysis, the study demonstrates how gradient descent guides RNNs to compress the dimensionality of …
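The excerpt does not say how dimensionality is measured, but a common choice in this literature is the participation ratio of the hidden-state covariance spectrum. The NumPy sketch below is a minimal illustration under that assumption, not the study's actual setup: it drives a randomly connected tanh RNN with white-noise input and reports the participation ratio of the collected hidden states. The network size, weight scale, and the `participation_ratio` helper are all illustrative choices.

```python
import numpy as np

def participation_ratio(H):
    """Effective dimensionality of hidden states H (T x N):
    PR = (sum lambda_i)^2 / sum lambda_i^2 over the eigenvalues
    of the hidden-state covariance matrix."""
    lam = np.linalg.eigvalsh(np.cov(H.T))
    lam = np.clip(lam, 0.0, None)       # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
N, T = 100, 2000                                 # hidden units, time steps (illustrative)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # random recurrent weights
w_in = rng.normal(0.0, 1.0, N)                   # input weights

h = np.zeros(N)
H = np.empty((T, N))
for t in range(T):
    u = rng.normal()                    # scalar white-noise input at each step
    h = np.tanh(W @ h + w_in * u)
    H[t] = h

print(f"participation ratio: {participation_ratio(H):.1f} of {N} units")
```

An untrained random network like this typically spreads activity across many dimensions; the compression the study attributes to gradient descent would show up as a much smaller participation ratio after training.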

Discussing Low-Rank RNNs

RNNs, or recurrent neural networks, are an important theoretical tool in both machine learning and computational neuroscience. In today's transformer-dominated landscape they are easy to overlook, yet RNNs remain a fundamental class of neural network and will surely still play a role in the era of large models. First, let's look at the …
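To make "low-rank" concrete before the excerpt cuts off: in the simplest rank-one case, the recurrent matrix is an outer product J = m nᵀ / N, so the N-dimensional activity collapses onto the direction m and a single scalar latent κ describes the whole network. The sketch below is one illustrative construction, not the article's code; the vectors m and n, the overlap of 2.4, and the Euler step size are all assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 1000, 0.1, 400

# Rank-one connectivity J = m n^T / N; the overlap between n and m
# (here 2.4, an illustrative choice) controls the latent dynamics.
m = rng.normal(0.0, 1.0, N)
n = 2.4 * m + rng.normal(0.0, 1.0, N)
J = np.outer(m, n) / N

x = rng.normal(0.0, 1.0, N)            # random initial state
kappa = []
for _ in range(steps):
    x += dt * (-x + J @ np.tanh(x))    # leaky firing-rate dynamics
    kappa.append(m @ x / (m @ m))      # projection of the state onto m

print(f"latent kappa settles near {kappa[-1]:.2f}")
```

Because the overlap between n and m exceeds 1 here, the zero state is unstable and κ converges to a nonzero fixed point: the full N-dimensional simulation is effectively one-dimensional, which is exactly the appeal of low-rank RNNs as an analysis tool.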