A type of neural network designed to approach time-dependent and/or sequence-dependent problems

What are Recurrent Neural Networks (RNNs)?

Recurrent Neural Networks (RNNs) are a type of neural network designed to approach time-dependent and/or sequence-dependent problems. RNNs are “recurrent” in the sense that they can revisit or reuse past states as inputs to predict the next or future states. To put it plainly, they have memory. Indeed, memory is what allows us to incorporate our past thoughts and behaviors into our future thoughts and behaviors.
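The "memory" described above can be sketched with a minimal recurrent update: at each time step, the new hidden state depends on both the current input and the previous hidden state. The sketch below assumes a single tanh recurrent layer with illustrative dimensions and randomly initialized weights; the names (`W_x`, `W_h`, `rnn_step`) are hypothetical, chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the text).
n_inputs, n_hidden = 3, 4

# Randomly initialized parameters: input weights, recurrent weights, bias.
W_x = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    """One recurrent update: the new hidden state mixes the current
    input x_t with the previous hidden state h_prev (the 'memory')."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Process a short sequence, carrying the hidden state forward in time.
sequence = rng.normal(size=(5, n_inputs))  # 5 time steps of input
h = np.zeros(n_hidden)                     # initial state: no memory yet
for x_t in sequence:
    h = rnn_step(x_t, h)
```

Because `h` is fed back into every step, the final state reflects the whole input history, not just the last input; this reuse of past states is exactly what makes the network "recurrent".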

The first successful example of an RNN trained with backpropagation was introduced by Jeffrey Elman, the so-called Elman network (Elman, 1990). Since the Elman network, outstanding progress has been made with RNNs in both basic research and practical applications. RNNs today are used for tasks like machine translation, robotics, speech recognition, speech production, time series prediction, sequential decision making, modeling of brain activity, and many more.

Further Learning

Video

  • Recurrent Neural Networks - MIT 6.S191

Applied papers

Online tutorials

Theory papers and book chapters

Contributors