Buffer neurons #43
KarimHabashy started this conversation in General
-
ANNs can probably learn to localise but they're unlikely to use the same strategies. I agree that delays are a sticky issue and we'd prefer not to hand-code them. I think this would be a great subject for your contribution if you're interested in looking into that. What I've been told (no references I'm afraid) is that trying to train delays with gradient descent is trickier than it seems.
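One way to see why training delays is tricky: a hard delay is an integer shift into the signal, and integer indices have no useful gradient. A common relaxation is to make the delay a soft, differentiable read-out over a window of recent taps. Here is a minimal sketch of that idea, assuming plain PyTorch (the `soft_delay` function and `delay_logits` parameter are hypothetical names, not from this project):

```python
import torch

def soft_delay(window: torch.Tensor, delay_logits: torch.Tensor) -> torch.Tensor:
    """Differentiable stand-in for a hard delay (hypothetical sketch).

    window: shape (n_taps,), the last n_taps samples, oldest first.
    delay_logits: shape (n_taps,), trainable; softmax turns them into a
    soft "which tap (i.e. which delay) to read" distribution.
    A hard delay d would return window[-1 - d]; here we take a weighted
    average instead, so gradients can flow into delay_logits.
    """
    weights = torch.softmax(delay_logits, dim=0)
    return (weights * window).sum()

# Usage: gradients reach the delay parameters through the softmax.
window = torch.randn(16)
delay_logits = torch.zeros(16, requires_grad=True)
out = soft_delay(window, delay_logits)
out.backward()  # delay_logits.grad is now populated

# The catch: as the softmax sharpens toward a one-hot vector (a true
# hard delay), the gradients w.r.t. delay_logits vanish, which is one
# reason gradient descent on delays is trickier than it seems.
```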
-
Going through the examples (the biological ones) by Dan Goodman, there are 2 input neurons, and to achieve coincidence detection delays are a must, since delays account for the time domain. Now, let's talk about ANNs: I doubt an ANN with just 2 inputs will learn how to localize. Maybe a recurrent neural network can do this, since it will have access to more time points. Another way, to avoid RNNs, is to use buffer neurons: neurons that store time points and then present them to the learning network (see the sketch after this post). This involves some kind of memory on the neuron's part, a short memory that holds old time points.
If this sounds plausible, please let me know!
My point is that using delays is not general enough and that hand-coding, I think, is not how we want to build our learning models.
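A minimal sketch of the buffer-neuron idea described above, assuming a PyTorch setup (the `BufferNeurons` class and its parameter names are hypothetical, not part of the project): each input channel keeps a rolling window of its most recent samples, and a downstream feedforward network sees the whole window at once, so it gets multiple time points without recurrence or hand-coded delays.

```python
import torch
import torch.nn as nn

class BufferNeurons(nn.Module):
    """Sketch of "buffer neurons": a short memory per input channel that
    stores the most recent time points and presents them all at once to
    the learning network."""

    def __init__(self, n_inputs: int, buffer_len: int):
        super().__init__()
        # Non-trainable rolling memory of the last buffer_len samples.
        self.register_buffer("memory", torch.zeros(n_inputs, buffer_len))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: current sample, shape (n_inputs,). Drop the oldest time
        # point, append the newest, and expose the whole window.
        self.memory = torch.cat([self.memory[:, 1:], x.unsqueeze(1)], dim=1)
        return self.memory.reshape(-1)  # shape (n_inputs * buffer_len,)

# Usage: 2 input neurons (one per ear), a 20-sample memory, and a plain
# feedforward readout that can compare time points across the two
# channels, which is what coincidence detection needs.
buffer = BufferNeurons(n_inputs=2, buffer_len=20)
readout = nn.Sequential(nn.Linear(2 * 20, 16), nn.ReLU(), nn.Linear(16, 1))
for _ in range(100):
    sample = torch.randn(2)      # stand-in for the two input signals
    window = buffer(sample)      # all buffered time points at once
    estimate = readout(window)   # e.g. an interaural-time-difference score
```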