RNN Learns Suitable Hidden Dimensions with White Noise
Abstract
Neural networks need the right representations of input data to learn. A new study, recently published in Nature Machine Intelligence, examines how gradient learning shapes a fundamental property of representations in recurrent neural networks (RNNs): their dimensionality. Through simulations and mathematical analysis, the study demonstrates how gradient descent guides RNNs to compress the dimensionality of their hidden representations.
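The dimensionality being compressed here refers to the effective number of directions in which the hidden-state trajectory actually varies. One standard way to quantify this (not necessarily the measure used in the paper) is the participation ratio of the eigenvalue spectrum of the hidden-state covariance matrix. Below is a minimal sketch, assuming a vanilla tanh RNN driven by white noise; the network size, noise scale, and weight initialization are illustrative choices, not values from the study.

```python
import numpy as np

def participation_ratio(states):
    """Effective dimensionality of a cloud of state vectors:
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are
    eigenvalues of the state covariance. PR ranges from 1 (activity
    confined to one direction) up to N (fully isotropic activity)."""
    centered = states - states.mean(axis=0)
    cov = centered.T @ centered / len(states)
    eigvals = np.linalg.eigvalsh(cov)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(seed=0)
N, T, sigma = 100, 2000, 0.1                 # hidden units, time steps, noise scale (assumed)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random recurrent weights, standard 1/sqrt(N) scaling

h = np.zeros(N)
trajectory = np.empty((T, N))
for t in range(T):
    noise = sigma * rng.normal(size=N)       # white-noise input injected at every step
    h = np.tanh(W @ h + noise)               # vanilla RNN update
    trajectory[t] = h

print(f"participation ratio: {participation_ratio(trajectory):.1f} of {N} units")
```

Tracking this quantity over the course of gradient-descent training is one way to observe the kind of dimensionality compression the study describes: a trained network typically concentrates its hidden-state variance in far fewer directions than an untrained one.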