They have become bewitched by recent breakthroughs in machine vision; in particular, by recent work applying Deep Learning Networks and Stacked Autoencoders to vision problems. The way the media reports these things is not helping either. The pop-science media circuit is already known to inflate stories whenever something unusual happens in a laboratory. Their excitement, inspired by these breakthroughs, has seduced them, rather like a herd of lemmings, onto a Neural Network fandom bandwagon.
Attempts to quell their hysteria are met with arguments that do not survive more than a few minutes of thought. Peppered among the recent conversations is the age-old argument: "Well, the brain is a neural network!" That statement (whatever loose truth it has in isolation) is ridiculous in the context of the point the youngsters are trying to make. They invoke it because they are defending the thesis that merely SCALING UP their little toy models will eventually give rise to a brain. That argument is stupid, irrational, and baseless. More dangerously, it is becoming pervasive. "Scaling up," in the context of this essay, means simply adding more neurons to the network.
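To make the target of this complaint concrete, here is a minimal sketch (plain Python; the layer widths are hypothetical, chosen only for illustration) of what "scaling up" amounts to. Widening a fully connected network multiplies its parameter count, but changes nothing whatsoever about its architecture:

```python
def param_count(layer_widths):
    """Number of weights and biases in a fully connected network
    whose layer widths are given in order, input layer first."""
    total = 0
    for n_in, n_out in zip(layer_widths, layer_widths[1:]):
        total += n_in * n_out + n_out  # a weight matrix plus a bias vector
    return total

# A toy net, and the same net "scaled up" by widening its hidden layer 10x.
small = param_count([784, 100, 10])   # -> 79510 parameters
big   = param_count([784, 1000, 10])  # -> 795010 parameters
```

The scaled-up network has roughly ten times the parameters, yet it remains the same monolithic, unmodular blob; nothing in the operation of widening it introduces planning, reasoning, or any of the specialized modules discussed below.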
Two facts should rise above the fog of the recent machine vision hysteria.
(1.) An intelligent agent must engage in far more mental tasks than merely perceiving its immediate environment. While perception of the immediate environment is a crucial part of intelligence, any given agent must also plan into the future, engage in reasoning, and draw intelligible inferences. It must also navigate its body through space, which requires a particular kind of spatial memory independent of the raw perception of objects. Control of its own body (e.g., fine motor control of the fingers) is a further special problem, requiring a Procedural Memory that is quite alien to perception.
(2.) No brain of any animal or insect in nature lacks modularity. The more intelligent mammals (on whom we should be concentrating exclusively) have brains that are extremely modular in their construction.
In a purely neuroscientific context, the statement "The brain is a neural network" is technically false. The human brain is a collection of specialized modules connected together in some clever way that is not understood by science in 2012 AD. If science does not understand the connectivity between these modules, we can rest assured that college freshmen on reddit and freenode don't either.

It is far too early for us to speak optimistically about emulating a brain. At their very best, Deep Learning Networks and Stacked Autoencoders perform a function that is probably confined to the early layers of the visual cortex of primates. Even the tentative statement "researchers create a visual cortex" is not accurate. The visual cortex in humans receives only 6% of its incoming connections from the eyes; the remaining 94% come from other parts of the brain. Therefore, flippant proposals that "The visual cortex is a stacked autoencoder" can be dismissed as silly.
An argument for the use of neural networks in principle also appears occasionally among the fray. The principled argument suggests that artificial neural networks perform some sort of function that could not feasibly be performed by otherwise "regular" software in a computer. (To ground this with a marginal example: consider a person claiming that a robot using a neural network would somehow navigate outdoors better than a robot using state-of-the-art localization and mapping software.) That argument is at least principled, and in my opinion, perfectly rational. Unfortunately, when pressed for a defense of the position, the reddit/freenode lemmings deliver an equally shallow response. They have been consistently unable to name a particular function that regular software cannot perform but a neural network can. The conspicuous absence of particulars indicates that their argument is de facto: "The brain is a neural network, and the brain can do it, so use a neural network." But the preceding paragraphs show how silly that position really is. A more coherent response would elaborate on the peculiarities of the memory of networks and the peculiarities of their connectivity. Both are sadly lacking in their conversations.