We’ve included a selection of readings for your enjoyment. These papers span historical and recent work in neural network modeling at the intersection of cognitive science and generative modeling.
For an intuitive explanation of Variational Autoencoders, check out this fairly quick tutorial.
If you liked the above, check out the original VAE paper by Kingma & Welling (2013).
For more recent work, see VAE models that incorporate multiple modalities, as in Wu & Goodman (2018).
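For orientation before diving into the readings above, the objective a VAE optimizes (the ELBO) can be sketched in a few lines. This is a minimal illustration, not code from any of the cited papers: the function names are ours, and the squared-error reconstruction term is just one common choice (it corresponds to a Gaussian likelihood).

```python
# Minimal sketch of the VAE objective, assuming a Gaussian encoder
# q(z|x) = N(mu, sigma^2) and a standard-normal prior p(z) = N(0, I).
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps, eps ~ N(0, I) -- the 'reparameterization
    trick' that keeps the sample differentiable in mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def negative_elbo(x, x_recon, mu, log_var):
    """Reconstruction error plus KL term; minimizing this maximizes the ELBO."""
    recon = np.sum((x - x_recon) ** 2)  # Gaussian likelihood up to a constant
    return recon + kl_to_standard_normal(mu, log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(2), np.zeros(2)  # encoder already matches the prior
z = reparameterize(mu, log_var, rng)
print(kl_to_standard_normal(mu, log_var))  # -> 0.0, since q(z|x) = p(z)
```

In a real model, `mu` and `log_var` would be produced by an encoder network and `x_recon` by a decoder network; the tutorial and the Kingma & Welling paper fill in those details.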
Here’s a Bayesian view of neural network models, as presented in Ch. 5 of MacKay’s “Information Theory, Inference, and Learning Algorithms” textbook.
For the brave, have a look at Radford Neal’s original thesis on Bayesian neural networks.
If you like the historical stuff, check out the origins of the backpropagation algorithm in the 1986 Nature paper by Rumelhart et al. (1986).
For some, one Rumelhart paper is not nearly enough: check out this early work on representation learning in autoencoders by Rumelhart & Todd (1991).