up:: 📥 Sources
type:: #๐ฅ/๐
status:: #๐ฅ/๐ฅ
tags:: #on/podcasts
topics::
Author:: Eye On A.I.
Title:: Setting the Stage for 2023
URL:: https://share.snipd.com/episode/bcda98ea-9969-43ba-a6af-86c5c0e11694
Reviewed Date:: 2023-01-16
Finished Year:: 2023
Setting the Stage for 2023
Highlights
The Artificial Neural Network
Summary:
The hottest thing right now is probing the artificial neural network with the same experiments that neuroscience is doing in the brain. We're using these deep learning networks to create what are called word embeddings. This is a way for language, for example certain words coming in as a sentence, to be represented in a semantic space. Once you've done that, you can use it for answering questions from articles. It will figure out the semantics and what's going on, and it'll answer the question. They went in, and they said, well, how was the sentence represented? And so what they did is they looked at the pattern of activity. That tells you a little bit about how the information is flowing through the network.
Transcript:
Speaker 1
Information is circulating, and decisions are being made. It's a dynamical system. It's an incredibly complex dynamical system, ultimately. And we're faced now with an interesting problem, which is we can see how the problem was solved by looking at the input and the output. But what we really want to know is what's going on inside. What has it learned? And the hottest thing right now is probing the artificial neural network with the same experiments that neuroscience is doing in the brain. So you put your electrode onto one of the units, and then you see what it responds to. When it responds, is it firing before the decision or after? And that gives you hints. It tells you a little bit about how the information is flowing through the network. And so we're doing that now with artificial networks. And it's really exciting. We're using these deep learning networks to create what are called word embeddings. And this is a way for language, for example certain words coming in as a sentence, to be represented in a semantic space. And once you've done that, you can use it for answering questions from articles. It's amazing. It will figure out the semantics and what's going on, and it'll answer the question. They went in, and they said, well, how was the sentence represented? And so what they did is they looked at the pattern of activity. It's in a million-dimensional space. It's huge.
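The "electrode on a unit" idea maps directly onto code. Below is a minimal sketch of that kind of probing, assuming PyTorch and an entirely made-up toy model; the layer sizes, token ids, and the layer being probed are illustrative assumptions, not anything from the episode. A forward hook records one hidden layer's activation pattern while a "sentence" passes through, which is the pattern of activity the speaker describes inspecting.

```python
# Sketch of "probing" an artificial network: record hidden-unit activity
# while an input flows through, the software analogue of an electrode
# recording. Model, layer names, and data are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "language" model: embed token ids, flatten into a sentence vector,
# then answer with a small classifier head.
model = nn.Sequential(
    nn.Embedding(num_embeddings=1000, embedding_dim=64),  # word embeddings
    nn.Flatten(start_dim=1),
    nn.Linear(64 * 8, 128),   # hidden layer we will "probe"
    nn.ReLU(),
    nn.Linear(128, 10),       # e.g. answer / class logits
)

recorded = []  # activation patterns, one entry per forward pass

def electrode(module, inputs, output):
    # Forward hook: keep a copy of this layer's activation pattern.
    recorded.append(output.detach().clone())

# Attach the "electrode" to the hidden layer (index 2 in the Sequential).
handle = model[2].register_forward_hook(electrode)

sentence = torch.randint(0, 1000, (1, 8))  # one 8-token "sentence"
logits = model(sentence)

handle.remove()

# The recorded pattern of activity is what gets analysed: which units fire,
# how strongly, and how the pattern differs across sentences.
pattern = recorded[0]
print(pattern.shape)            # torch.Size([1, 128])
print(pattern.topk(5).indices)  # the most active units for this sentence
```

The same hook trick is what gets applied to real language models, where the recorded vectors live in the very high-dimensional space the speaker mentions.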
Is the Brain Doing the Equivalent of Back Propagation?
Summary:
The Boltzmann machine has another assumption that we make, which is that every pair of units has reciprocal connections with equal strength. It's approximately true within the cortex, but it's not exactly true. And so it may be that the brain is somewhere between a Boltzmann machine and a back prop net. We're just scratching the surface here. Deep learning is able to do things that are unaccountable. Like this language example I was giving you. Nobody would have been able to predict that. Even back in the 80s, if you had asked me, I would have said, well, that's unlikely. The language is too difficult. No, it wasn't. Now we have to figure out why.
Transcript:
Speaker 1
Well, you know, it's doing something that may be equivalent. And now, you see, this gives us a really strong hypothesis. How could the brain do it? You're right, it's not going to do the same algorithm, but there is information. There are feedback connections. There are more feedback connections than feed forward connections in this hierarchy. But nobody knows what information is being carried. It's a mystery. And so now it gives us a hypothesis. Let's go in. Maybe that information is giving the earlier layer information about error. How to change the weights. It may not be the back prop way of doing it, but there may be an equivalent way of doing it. But isn't that happening in the Boltzmann machine as well? You were saying that the information during the sleep period is... Right. Well, that's an example. Okay, that's an example. Now, the Boltzmann machine has another assumption that we make, which is that every pair of units has reciprocal connections with equal strength. Which is a pretty strong assumption. It's approximately true within the cortex, but it's not exactly true. Because of that, it is doing the equivalent of back propagation. Right. It's doing it locally. It doesn't need to have the information flowing down. It's all being done at the same time over the whole network. And so it may be that the brain is somewhere between a Boltzmann machine and a back prop net. Right. And this actually leads to a really exciting new area of research, which is, of all possible computing systems that are parallel, that have this ability to learn and the ability to take in lots of data and be able to classify or predict, we're just scratching the surface here. Deep learning is able to do things that are unaccountable. We don't understand how it does so well. Like this language example I was giving you. Nobody would have been able to predict that. Even back in the 80s, if you had asked me, I would have said, well, that's unlikely. It's too difficult. The language is too difficult. No, it wasn't. And now we have to figure out why.
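For contrast with back prop, here is a small sketch of the two Boltzmann machine properties mentioned above: reciprocal connections with equal strength, and a purely local wake/sleep learning rule. Everything concrete in it (the unit count, the single clamped pattern, estimating the statistics from one sample instead of averaging over many) is an illustrative assumption, not the episode's own example.

```python
# Sketch of a Boltzmann-machine-style update. Every pair of units i, j
# shares one symmetric weight w_ij = w_ji, and learning is local:
# compare co-activity in the "wake" (data-clamped) phase with the
# "sleep" (free-running) phase. All specifics here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                     # number of units
W = rng.normal(0, 0.1, size=(n, n))
W = (W + W.T) / 2                         # reciprocal connections, equal strength
np.fill_diagonal(W, 0.0)                  # no self-connections

def sample_states(W, steps=500, clamp=None):
    """Gibbs-sample binary unit states; optionally clamp visible units to data."""
    s = rng.integers(0, 2, size=n).astype(float)
    if clamp is not None:
        s[:len(clamp)] = clamp
    for _ in range(steps):
        for i in range(n):
            if clamp is not None and i < len(clamp):
                continue                  # visible units held at the data
            p = 1.0 / (1.0 + np.exp(-W[i] @ s))
            s[i] = float(rng.random() < p)
    return s

data = np.array([1.0, 0.0, 1.0])          # one "sensory" pattern on 3 visible units

wake = sample_states(W, clamp=data)        # clamped / wake phase
sleep = sample_states(W, clamp=None)       # free-running / sleep phase

# Local Hebbian-style rule: strengthen weights where units co-fire more with
# the data present than when the network runs freely. No error signal is
# passed backwards; the same rule applies to every connection, everywhere,
# at the same time. The outer products are symmetric, so W stays symmetric.
lr = 0.05
W += lr * (np.outer(wake, wake) - np.outer(sleep, sleep))
np.fill_diagonal(W, 0.0)
```

The point of the sketch is the update line: nothing flows down from later layers. Each connection adjusts itself from the activity of the two units it joins, which is what "doing it locally" means in the transcript.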
Unsupervised Learning Is the Missing Link
Transcript:
Speaker 4
Yann LeCun talked about the development of convolutional neural nets, and how unsupervised learning is the missing link in getting us to higher forms of intelligent machines, explaining his version of self-supervised learning.
Speaker 3
There was a need for being able to build multi-layer neural nets. They just didn't figure out how to do it. And it's probably mostly because they had the wrong neurons. The neurons people were using in neural nets at the time were binary neurons, and that's incompatible with things like back prop. And so the idea just didn't come up, even though the basic idea of doing back prop actually existed in the context of optimal control since the 60s. I started thinking about how we could train multi-layer networks and kind of stumbled on an idea which was very close to back prop. This was then 1983 or so, and it was the idea of taking the weights used in the neural net going forward and using them backwards.
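LeCun's point about binary neurons is easy to see concretely. The sketch below is mine, not from the episode, and assumes PyTorch: it contrasts a hard-threshold neuron, through which no gradient can flow, with a sigmoid neuron, through which back prop works.

```python
# Why binary threshold neurons block back prop: the step function has no
# usable gradient, while a smooth activation like the sigmoid does.
# This is an illustrative sketch, not anything from the episode.
import torch

x = torch.linspace(-3, 3, 7, requires_grad=True)

# Binary neuron: hard threshold. The comparison is not differentiable, so
# PyTorch detaches the result from the autograd graph entirely; calling
# backward() through it would fail, and no weight below it could be updated.
binary_out = (x > 0).float()

# Smooth neuron: sigmoid. Its derivative is nonzero, so error can flow back.
smooth_out = torch.sigmoid(x)
smooth_out.sum().backward()

print(x.grad)  # sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), all nonzero
```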