
Scaling up in Artificial Intelligence

Posted: Sat Oct 13, 2012 8:16 pm
by Kuznetzova
There is a pervasive idea among young people regarding the engineering of an Artificially Intelligent agent. A cluster of them have begun to congregate in places such as reddit forums and freenode chat rooms. To be clear, the youngsters are not being derided here for their ignorance of a complicated subject like AI -- rather, they are no longer thinking clearly, and the arguments they are making have not been thought through. In short, they are being stupid. Ignorance is excusable and should not be derided. Stupidity is a different issue.

They have become bewitched by recent breakthroughs in machine vision; in particular, recent work applying Deep Learning Networks and Stacked Autoencoders to vision problems. The way the media reports these things does not help either: the pop-science circuit is already known to inflate stories whenever something unusual happens in a laboratory. Their excitement, inspired by these breakthroughs, has seduced them, rather like a herd of lemmings, onto a Neural Network fandom bandwagon.

Attempts to quell their hysteria are met with arguments that do not survive more than a few minutes of thought. Peppered among the recent conversations is the age-old argument: "Well, the brain is a neural network!" That statement (while arguably true in isolation) is ridiculous in the context of the point the youngsters are trying to make. They invoke it to defend the thesis that merely SCALING UP their little toy models will, after enough scaling, give rise to a brain. ("Scaling up," in the context of this essay, means simply adding more neurons to the network.) That argument is stupid, irrational, and baseless. More dangerously, it is becoming pervasive.
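For concreteness, here is a minimal sketch (in Python with numpy; untrained random weights, purely illustrative, not anyone's actual research code) of the kind of toy model in question, where "scaling up" amounts to nothing more than widening the hidden layer:

```python
import numpy as np

def make_autoencoder(n_in, n_hidden, rng):
    """Initialize random weights for a single-layer autoencoder."""
    W_enc = rng.standard_normal((n_in, n_hidden)) * 0.1
    W_dec = rng.standard_normal((n_hidden, n_in)) * 0.1
    return W_enc, W_dec

def reconstruct(x, W_enc, W_dec):
    """Encode the input to a hidden code, then decode it back."""
    h = np.tanh(x @ W_enc)   # encoder
    return h @ W_dec          # (linear) decoder

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # a toy 8-dimensional "percept"

# "Scaling up", in the sense criticized above, is just a bigger hidden layer:
small = make_autoencoder(8, 4, rng)    # 4 hidden units
big   = make_autoencoder(8, 400, rng)  # 100x the units, identical architecture

x_hat_small = reconstruct(x, *small)
x_hat_big   = reconstruct(x, *big)
print(x_hat_small.shape, x_hat_big.shape)
```

Note that the widened network computes exactly the same kind of function as the small one; adding units changes capacity, not the architecture, and certainly does not introduce the modularity discussed below.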

Two facts should rise above the fog of the recent machine vision hysteria.
(1.) An intelligent agent must engage in far more mental tasks than just perceiving the immediate environment. While perception is a crucial part of intelligence, an agent must also plan into the future, engage in reasoning, and draw intelligible inferences. It must also navigate its body, which requires a particular kind of spatial memory independent of the raw perception of objects. Control of its own body (e.g., fine motor control of the fingers) is also a special problem, requiring a Procedural Memory which is quite alien to perception.
(2.) No brain of any animal or insect in nature lacks modularity. The more intelligent mammals (on whom we should be concentrating exclusively) have brains that are extremely modular in their construction.

In a purely neuroscientific context, the statement "The brain is a neural network" is technically false. The human brain is a collection of specialized modules connected together in some clever way that is not understood by science in 2012 AD. If science does not understand the connectivity between these modules, we can rest assured that college freshmen on reddit and freenode don't either.


It is far too early to speak optimistically about emulating a brain. At their very best, Deep Learning Networks and Stacked Autoencoders perform a function that is probably isolated to the early layers of the visual cortex of primates. Even the tentative headline "researchers create a visual cortex" is not accurate: the visual cortex in humans receives only 6% of its incoming connections from the eyes, while the remaining 94% come from other parts of the brain. Flippant proposals that "the visual cortex is a stacked autoencoder" can therefore be dismissed as silly.

An argument for the use of neural networks in principle also appears occasionally among the fray. The principled argument suggests that artificial neural networks perform some function that could not feasibly be performed by otherwise "regular" software in a computer. (To ground this with a marginal example: consider a person claiming that outdoor navigation would somehow be performed better by a robot using a neural network than by a robot using state-of-the-art localization and mapping software.) That argument is at least principled, and in my opinion perfectly rational. Unfortunately, when pressed to defend the position, the reddit/freenode lemmings deliver an equally shallow response. They have been consistently unable to name a particular function that cannot be done with software but can be done with a neural network. The conspicuous absence of particulars indicates that their argument is, de facto: "The brain is a neural network, and the brain can do it, so use a neural network." But the preceding paragraphs show how silly that position really is. A more coherent response would elaborate on the peculiarities of the memory of networks and the peculiarities of their connectivity. Both are sadly lacking in their conversations.

Re: Scaling up in Artificial Intelligence

Posted: Sun Oct 14, 2012 10:28 am
by Toadny
Kuznetzova wrote: Both are sadly lacking in their conversations.
Why are you troubling to argue with them, then, Kuznetzova, and why here?

"The principle of charity is a methodological presumption made in seeking to understand a point of view whereby we seek to understand that view in its strongest, most persuasive form before subjecting the view to evaluation."

Re: Scaling up in Artificial Intelligence

Posted: Mon Oct 15, 2012 1:35 am
by sideshow
I suppose the OP would have been a lot clearer if all that ad hominem directed at some young people weren't there. But the points made have merit. The neural network, or any other mechanistic model of the mind based on the brain, suffers from an incoherence that cannot ever be resolved. The number of factors entering into the creation of these models is insufficient to approach a description of a very private phenomenon.

It might be useful, before serious discussion, to first ask whether the model describes my roomba, a sea slug, a rock pigeon, or my cat.