But other researchers have firmly established that our brains don't work like that! Specifically, computers use metallic circuits in which electric signals travel at nearly the speed of light, 186,000 miles per second. In biological nervous systems, electrical signals must cross "synapses" between nerve cells, where the signal is carried by a chemical action. Because of this, and because of the myelin-sheathed neurons that carry such signals, biological systems regularly transfer electric signals at around 200 miles per HOUR, which is about 0.06 miles per second. Two hundred miles per hour (around 300 feet per second) is plenty fast for biological systems to work adequately, so we have good reaction times to danger and all the rest. But signal transfer through the conductors in computers is roughly 3,000,000 times as fast as in biological systems such as our nervous systems and brains.
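The roughly 3,000,000-to-one ratio follows directly from the two speeds quoted above. A quick check of the arithmetic, using the text's own figures of 186,000 miles per second and 200 miles per hour:

```python
# Compare signal speeds: metallic circuits vs. biological nerves.
# The two figures are the ones quoted in the text above.
circuit_speed_mi_per_s = 186_000        # near light speed, in miles/second
nerve_speed_mi_per_s = 200 / 3600       # 200 miles per HOUR -> miles/second

ratio = circuit_speed_mi_per_s / nerve_speed_mi_per_s
print(f"nerve speed: {nerve_speed_mi_per_s:.3f} mi/s")   # about 0.056 mi/s
print(f"circuits are about {ratio:,.0f} times faster")   # about 3,348,000
```

So the exact quotient is about 3.35 million; "around 3,000,000 times as fast" is the right order of magnitude.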
I recently learned of someone who claimed that a biological system in the human brain commonly handles 100,000,000 bits of information per second. But no proof was presented for such an outrageous claim! It is unimaginable that any biological signal path could process anywhere near that many bits of data per second.
This limitation of signal speed in biological systems completely eliminates any possibility of computer-like speeds of processing. There is no way around that fact, because of the comparative slowness of signal movements in living beings. Therefore, a "single processor" paradigm for the human brain is apparently impossible.
Instead, it seems that we each must have a multitude of relatively independent "slow" processors, each of which accomplishes fairly narrow goals. The researchers pursuing this direction call them "cognitive modules". For example, it appears that quite a few separate organic "processors" receive and process the various sense stimuli that the body is exposed to. To a great extent, they seem to be unaware of each other's activity. In computer jargon, they would each be called dedicated-purpose processors, and each module seems to be "programmed" to accomplish a single task in the brain.

But a separate "central" biological computer (now, as of 2006, called an interpreter) continuously monitors the PROCESSED RESULTS of these many separate computers, and then makes system-wide responses as a result. The "dedicated processors" do not "bother" the central processor under normal conditions. When nothing out of the ordinary has occurred above a certain threshold of sensation, they create no output to send to it. This leaves the central processor (interpreter) free to deal with whatever very limited input it actually needs to react to. The dedicated processors (cognitive modules) only send output signals to the central computer when there is a CHANGE in some sensory input or condition that is above some threshold level. This keeps the central processor from being overloaded by signal load, and actually leaves it generally quite free to concentrate its efforts on urgent matters.
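This threshold-gated, change-driven arrangement can be sketched in a few lines of code. This is only an illustrative model, not biology: the `SensoryModule` class, its threshold value, and the `CentralInterpreter` are all names invented here for the sake of the sketch.

```python
class SensoryModule:
    """A dedicated processor: reports only CHANGES above a threshold."""
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.last_reported = 0.0

    def sense(self, value):
        # Only a sufficiently large CHANGE from the last report matters.
        if abs(value - self.last_reported) > self.threshold:
            self.last_reported = value
            return (self.name, value)   # message for the central interpreter
        return None                     # stay silent; don't bother it

class CentralInterpreter:
    """Sees only processed results, never the raw sensor stream."""
    def __init__(self):
        self.messages = []

    def receive(self, message):
        if message is not None:
            self.messages.append(message)

touch = SensoryModule("pressure", threshold=0.5)
brain = CentralInterpreter()
for reading in [0.0, 0.1, 0.2, 0.1, 2.0, 2.1, 0.0]:
    brain.receive(touch.sense(reading))

print(brain.messages)   # only the two large changes ever reach the interpreter
```

Seven raw readings arrive, but the interpreter sees just two messages: the big press and its release. Everything below the threshold stays local to the module.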
An analogy may help. Imagine a room whose west wall is covered with many thousands of extremely sensitive pressure sensors, all monitored by a single dedicated computer.
An entirely separate computer might be monitoring that same wall for temperature variations. Other computers would be monitoring the other walls and floor and ceiling in similar ways. Another computer could be programmed to sense aromatic (smell) sensations, and another taste. Several might be involved with sight and light and color stimuli.
Let's say a fly lands somewhere on that west wall of the room. Suddenly, the very sensitive pressure sensors tell that specific computer that something has applied a minor localized pressure there. Maybe the heat sensor even tells ITS computer about a tiny amount of heat being radiated from the fly's body. The two very specialized computers each do their analysis. Each compares its inputs to some pre-determined threshold values, to decide whether the sensation is actually large enough to care about. One soon sends a small amount of information to the central computer that a very localized pressure of a certain fraction of an ounce exists at a specific location on the wall. The central computer might also receive information from the temperature computer about the slight temperature rise in the same location. The central computer then combines the two and checks with yet more (memory) computers that have stored records of previous archived experiences, and quickly concludes that there is a fly at that location.
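The fly scenario is essentially a small sensor-fusion step. Here is a minimal sketch of it; the thresholds, locations, and the tiny "memory" lookup table are all invented for illustration, not taken from any real data.

```python
# Invented illustration of the fly example: two specialized modules
# report to a central computer, which consults a memory of past patterns.

PRESSURE_THRESHOLD = 0.001   # ounces; below this, the module stays silent
HEAT_THRESHOLD = 0.1         # degrees of local temperature rise

# A tiny "memory computer": archived (pressure?, heat?) -> conclusion.
memory = {
    (True, True): "a fly",
    (True, False): "a speck of dust",
    (False, True): "a sunbeam",
}

def pressure_module(ounces, location):
    if ounces > PRESSURE_THRESHOLD:
        return ("pressure", location)
    return None                          # below threshold: say nothing

def heat_module(degrees, location):
    if degrees > HEAT_THRESHOLD:
        return ("heat", location)
    return None

def central_computer(reports):
    """Combine per-location reports, then check them against memory."""
    by_location = {}
    for report in reports:
        if report is not None:
            kind, location = report
            by_location.setdefault(location, set()).add(kind)
    return {
        location: memory[("pressure" in kinds, "heat" in kinds)]
        for location, kinds in by_location.items()
    }

reports = [
    pressure_module(0.004, location="west wall, 3 ft up"),
    heat_module(0.3, location="west wall, 3 ft up"),
]
print(central_computer(reports))   # {'west wall, 3 ft up': 'a fly'}
```

Neither specialized module knows anything about flies; only the central computer, combining two tiny reports with stored experience, reaches that conclusion.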
No ultra-fast computer is necessary for this sequence; it works fine at the rather slow processing speed of our organic brain computer! But it IS important that a BUNCH of virtually independent computers are each doing their own work of sensing outside stimuli.
There could be hundreds of thousands of sensors covering the walls of the room, but as long as no sensor reading had a DIFFERENTIAL greater than the threshold values, the central processor would not be bothered. This is akin to the very large number of nerve endings in all of the skin of our bodies: we are generally unaware of any sensations, except when something changes.
There is a notable limitation in this approach. Say that a million flies were let into the room, and they kept landing on various sensors on the walls and then moving on. Each dedicated processor would be very busy, and each would be sending large numbers of output signals to the central processor. In this situation, the central processor would become completely overloaded. One possibility is that it could go into something like biological "shock", where it simply stopped trying to process ANY input signals. If the overwhelming quantities of input signals continued beyond a minute or two, the central processor (or all the dedicated processors) could choose a different reaction: the threshold value of signal strength could be increased. This would generally reduce the number of signals sent to the central processor, or the number accepted by it. Essentially, the central processor would again be able to handle its duty load, but the higher thresholds would also produce a sort of "numbing" of the sensory inputs.
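That overload-then-numbing response resembles what an engineer might call adaptive thresholding. A minimal sketch, with the capacity figure and the doubling rule invented purely for illustration:

```python
# Invented sketch: a module raises its threshold when it floods the
# central processor, producing a "numbing" effect on later input.

CAPACITY = 5   # messages per time step the central unit can handle

def run_step(readings, threshold):
    """Return the messages sent this step and the adjusted threshold."""
    messages = [r for r in readings if r > threshold]
    if len(messages) > CAPACITY:
        threshold *= 2.0   # numbing: demand a stronger signal next time
    return messages, threshold

threshold = 1.0
quiet = [0.5] * 100    # background noise: nothing gets through
flood = [2.0] * 100    # a million flies, so to speak

sent, threshold = run_step(quiet, threshold)
print(len(sent), threshold)   # 0 messages sent; threshold unchanged at 1.0

for _ in range(3):            # sustained overload
    sent, threshold = run_step(flood, threshold)
print(len(sent), threshold)   # the raised threshold now blocks everything
```

After one overloaded step the threshold doubles to 2.0, and the same stimuli that flooded the system a moment earlier no longer get through at all: the senses have been "numbed".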
These arguments would possibly explain several biological activities, such as shock and the natural numbing that nearly always occurs a minute or so after an injury. Traditional straight-computer-like thinking cannot easily explain such things.
This reasoning implies that an assortment of identical processors would logically evolve into an organized structure. No inherent preference would need to exist for any specific processor to become the ultimate CENTRAL processor.
Another valuable insight seems available here. If the environment is sterile, with little external stimuli present, few of the computers / processors would be needed for adequate functioning, so most would be left as standby units for possible future need. A sterile environment would tend to inspire minimum brain activity. However, a complex environment, with large numbers and varieties of stimuli present, would likely inspire the use of a large number of such individual processors, to both handle the diversity of the environment and keep the CENTRAL processor from being overwhelmed. A diverse and active environment would tend to inspire extensive brain development and activity. This effect has long been seen in children.
Using this analogy, it seems that unborn babies are developing the first few processors, including establishing which one will be the CENTRAL processor. Stimuli such as singing, reading aloud, and music might easily affect the rapidity of development of additional processors at that time. A newborn child is generally in an environment where overwhelming amounts of stimulus information are available to be processed, which suggests that many additional dedicated processors would be assigned during that period. Again, the maximum diversity of stimuli in the environment would inspire the maximum assignment of additional stimulus processors.
It would seem that, for most people, by the age of six or so, the NUMBER of such dedicated independent processors might be pretty well established. From then on, learning would generally involve refinements in the precision and the processing methods of each of them. This might suggest that, for most people, the actual basic intellectual capability might be set by about that age. However, there is no actual reason for this to be true. If a child (or adult) were later to deeply study ANY specific field (whether it is playing the piano, studying a foreign language, playing football, playing arcade games, or virtually anything else), it seems likely that a brain could "activate" one or more of those "standby processors" for that purpose. They are actually there for that very purpose! If, due to a failure in some active processor, or due to severe brain injury, or due to a change of environment where survival requires learning new techniques and abilities, any of those standby processors could be activated, to improve the survival chances of that person (or animal).
This reasoning also explains the rather large brains that humans have, and the fact that much of those brains seems to be relatively inactive. Those sections are there and available for possible future use as improvements to personal survival, given the uncertainty of what the future might hold. In each of the eventualities mentioned above, a person's chances of survival would greatly diminish without this "flexibility."
Regarding Artificial Intelligence (AI) research, these thoughts seem to suggest that much of the current work may be aimed in a somewhat mistaken direction. Nearly all researchers approach the problem with faster and faster single central processors, which then directly deal with all stimuli. That approach will certainly work, but it seems that it will have several intrinsic limitations, specifically sensory overload of that central processor and the complexity of the logic necessary in the programming.
A "distributed brain" as implied here is more like a community of dedicated, slow, primitive processors working together. None, not even the CENTRAL processor, requires extreme programming, and none requires excessive speed of processing. Each can plod along at minimal speed, essentially unaware of all the Universe except its specific sensor inputs.
In a biological sense, it could be no other way! A single, ultra-complex brain could become non-functional due to even minor injuries or illnesses, and so the person or animal would have grave survival problems. A distributed-processing approach would far better ensure the survival of that person or animal, even with occasional degradation of one or more of the separate processor units. Considering the special importance of the CENTRAL processor in the survival equation, it seems logical that a second and even a third redundant processor is active for that functionality. In the same vein, those two or three separate processors are likely to be physically separated, such that severe damage to one part of the skull might still permit the necessary survival processing to continue in a redundant CENTRAL processor in a less damaged area.
Again, regarding AI, I can imagine dedicating 100 of the 200 active processors to various areas of visual stimuli. A few of those could ONLY be alert to far peripheral vision. As long as nothing unexpected occurs in that visual area, that processor would not send much information to the central processor. This would permit a "general awareness" of that peripheral region without requiring the CENTRAL processor to use up significant time regarding it. In the event that some unexpected thing is sensed there, an appropriate message is created by that processor and sent to the CENTRAL processor, so it can decide what response is appropriate. Many of the processors associated with visual input would be specifically occupied with areas inside a narrow zone of the central attention focus of the eyes.
C Johnson, Theoretical Physicist, Physics Degree from Univ of Chicago