The Talking Machines

There’s a great new Machine Learning podcast out called Talking Machines. They only have two episodes out so far, but they are quite serious: they have traveled to NIPS to interview researchers, discussed A* sampling, and more.

On the most recent episode, they interviewed Ilya Sutskever on Deep Learning. He had two interesting things to say.

First, that DL works well (now) partly because we have figured out the appropriate initialization conditions: weights between units should be small, but not too small. Specifically, the eigenvalues of the weight matrix should be ~1, so that signals neither blow up nor shrink to nothing as they pass through the layers, which is what allows backpropagation to work. Given that real neural networks don’t use backprop, how much thought should neuroscientists give to this? We know that homeostasis and plasticity keep things in a balanced range; you don’t want epilepsy, after all.
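For concreteness, here is a quick numpy sketch (my own illustration, not anything from the episode) of two common ways to get eigenvalues around 1: Gaussian weights scaled by 1/sqrt(n), and orthogonal initialization, where every singular value is exactly 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_gaussian_init(n):
    # Entries ~ N(0, 1/n): the eigenvalues of such a matrix lie roughly
    # in the unit disk, so repeated application neither blows the signal
    # up nor shrinks it to zero.
    return rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))

def orthogonal_init(n):
    # Orthogonal matrix: every singular value is exactly 1.
    a = rng.normal(size=(n, n))
    q, _ = np.linalg.qr(a)
    return q

W = scaled_gaussian_init(512)
print(np.abs(np.linalg.eigvals(W)).max())  # close to 1
```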

Second, that recurrence in artificial networks is mostly interesting for temporal sequences; a minimal sketch of what that means is below. Recurrent connections, such as to the thalamus, always seem to be understudied (or at least, I don’t pay enough attention to them).
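Again just my own toy illustration of the idea: in a recurrent network the hidden state is fed back in at every time step, so earlier inputs can influence later outputs, which is exactly what you want for sequences.

```python
import numpy as np

def run_rnn(xs, W_h, W_x):
    # Vanilla recurrence: the hidden state h carries information
    # forward across time steps.
    h = np.zeros(W_h.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x)
        states.append(h)
    return states

# Toy usage: 10 time steps of 3-dimensional input, 5 hidden units.
rng = np.random.default_rng(0)
xs = rng.normal(size=(10, 3))
W_h = rng.normal(0.0, 1.0 / np.sqrt(5), size=(5, 5))
W_x = rng.normal(0.0, 1.0 / np.sqrt(3), size=(5, 3))
states = run_rnn(xs, W_h, W_x)
```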
