Fooling Ourselves with AI
I’ve worked with neural networks a lot over the years, and the thing that keeps popping out of them is that they are NOT the basis of human intelligence or sentience. Consciousness, as we know it, cannot emerge from the primitive models of neural networks that we play with on computers today. They’re very good at recognizing patterns and are very informative about how animal motor skills may be trained, but they can’t learn to think or have ideas of their own. Humans like to pretend that consciousness as we experience it is uniquely human and that it popped into existence some 40,000 years ago with the emergence of human tools and cave drawings. One can hardly blame creationists for their skepticism of evolutionary theory given this common view of the human condition. As a scientist I would observe that nothing as complex and nuanced as intelligence can be the product of very rapid evolutionary phenomena. I have written many articles about my skepticism of modern Artificial Intelligence hype. I’ve heard extremely intelligent and ostensibly knowledgeable authorities on the subject claim that we are on the verge of a Skynet-like revolution in Artificial Intelligence that will wipe out all jobs and maybe the human species. In reality I have seen zero indication that any of the AI toys currently being played with by our most advanced AI researchers are in any danger of “waking up” and taking over from us.
One of the most interesting properties of human intelligence is our ability to “code”, which can be said to be a vastly more advanced version of our ability to make tools. This capacity for tool making and coding is very sophisticated and cannot be rationally explained by a recent genetic fluke, or even by the slow, plodding evolution of animal nervous systems. I’ve said before that human minds are not MADE by human minds; they are constructed by witless cells. It is CELLS that exhibit the closest analog to self-programming and self-design that we actually observe in nature. Before delving into the genetic basis for intelligence, it’s probably worth discussing the evolutionary purpose and nature of neurons first. Neurons are an interesting combination of sensor and motion-control system. The earliest examples of neurons are found near the mouths of simple organisms, acting as sensors for locating and collecting food in a liquid medium. Single-celled organisms were able to propel themselves and engage in complex motion and feeding behavior without neurons, but as cells began to group themselves into organisms, neurons emerged almost immediately as a means of communication and coordination.
Here we see an amoeba, a single-celled organism, sensing and consuming a paramecium, another single-celled organism capable of rapid propulsion and apparently even of systematically trying to escape consumption by the amoeba. One might observe that these cells are already exhibiting neuron-like behavior in their sensory and motion-control properties. To engage in this kind of motion they obviously also need a pretty controlled source of energy production and management. We find neurons in “predatory” organisms that hunt and eat other organisms. Chlorophyll-based organisms like algae don’t engage in these behaviors, presumably because they can eat “sunlight” and gases that can be captured without engaging in any complex behavior.
Jellyfish and their near relatives are perhaps the most interesting primitive organism that makes maximum use of a primitive nervous system.
“All jellyfish have a ring of muscle that encircles the bottom of the bell, which is the main component of the jellyfish anatomy. The bell is hollow and open-ended, allowing it to fill with water. The muscles around the bell contract, squeezing out the water and propelling the jellyfish forward, upward or downward, depending on the position of the bell at the time of compression.”
Jellyfish are currently believed to be the world’s oldest example of an “animal”. Cells specializing in the role of signaling sensors appear to have evolved as an adaptation of multi-cellular animals needing to engage in complex, coordinated movement in response to sensory input. Now the question is… do jellyfish nervous systems need to be trained, or are they born pre-trained genetically? If they weren’t born pre-trained, how they learned may be fairly straightforward. They taste something edible and send a signal which triggers global motion in the direction of the food source. If you were a jellyfish with tentacles drifting all around you and you got conflicting signals from different directions, focusing your motion in the direction of the greatest number of food signals would probably be a reasonable response to maximize calorie consumption. This behavior might involve a relatively short-term version of memory, constituting a pretty basic neural network characterized by immediate stimulus and an automated, genetically programmed response.
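The stimulus-and-response loop sketched above can be written in a few lines of code. This is a toy model, not jellyfish biology: the directions and the “move toward the strongest summed signal” rule are illustrative assumptions, and the point is simply that the mapping from stimulus to motion is fixed rather than learned.

```python
# Toy "pre-trained" reflex: the mapping from stimulus to motion is
# hard-coded and never learned, standing in for a genetically
# programmed response. Directions and rule are illustrative assumptions.
def reflex_move(signals):
    """signals maps a direction to the number of food signals sensed there."""
    # Hard-wired rule: contract toward the direction with the most signals.
    return max(signals, key=signals.get)

move = reflex_move({"north": 1, "east": 3, "south": 0})
print(move)  # east
```

Nothing in this reflex ever updates; the “program” is entirely in the rule, which is the analog of a genetically determined configuration.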
Here’s the important point. It seems likely that long before complex animals evolved nervous systems that could “learn” and “remember” complex patterns from experience, the earliest organisms with nervous systems relied on genetically programmed behaviors. Even if an organism’s neurons linked directly to and triggered muscular reflex responses, the configuration of neurons and behavior was genetically determined. One might wildly extrapolate to assert that over millions of years of evolution, really complex muscular responses to really complex neural configurations emerged, somehow giving rise to neural networks that learn language, compose poetry and invent Special Relativity. Does that sound like an implausible leap? It does to me. In an artificial neural network, “memory” is represented by the weighting of static activation functions within each neuron. Only recently have more dynamic approaches to neural networking included the ability for the network to grow or change its own connectivity. The collective weight settings for every neuron in the network constitute the network’s entire “memory” of anything.
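The claim that the weights are the network’s entire “memory” can be demonstrated directly. In this sketch (a minimal toy network, with illustrative shapes and a fixed seed, not any particular model), copying nothing but the weight arrays into a fresh network reproduces its behavior exactly, because there is nothing else to copy:

```python
import numpy as np

# Toy two-layer network; its entire "memory" is the four weight arrays.
rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 3)), np.zeros(4),   # W1, b1
          rng.normal(size=(1, 4)), np.zeros(1))   # W2, b2

def forward(x, params):
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ x + b1)    # static activation function per "neuron"
    return W2 @ h + b2

x = np.array([0.5, -1.0, 2.0])
out_a = forward(x, params)

# Copy only the numbers: a "fresh brain" loaded with the same weights
# behaves identically, because the weights are all there is.
copied = tuple(np.array(p, copy=True) for p in params)
out_b = forward(x, copied)
print(np.allclose(out_a, out_b))  # True
```

This is exactly why saving a trained model amounts to serializing its weight arrays and nothing more.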
It used to be a popular belief that our memories were an “electrical state” contained in our neurons and not necessarily encoded in any persistent cellular data structures.
“After consolidation, long-term memories are stored throughout the brain as groups of neurons that are primed to fire together in the same pattern that created the original experience, and each component of a memory is stored in the brain area that initiated it”
How exactly does that work? According to the neuroscience literature, the number of neuroreceptors physically installed at the synapse is part of how memories are represented.
“Neuroreceptors, including the AMPA receptor, are 3D cylinder-shaped protein complexes about 8 nanometers in size that are made up of tens of thousands of atoms. They physically move around the neuron and are mechanically installed via a process called “receptor trafficking”.”
It all sounds very robotic and automated, doesn’t it? Easy to understand, and easy to visualize how a computer model might capture such a stimulus network. The mind is just a collection of tape recordings that play themselves back on demand, and the right training and sequence of inputs triggers a sequence of recordings that results in Mozart and General Relativity.
The major missing element has been the inside of the cells and neurons themselves. Until recently the interior computational activity of cells was well beyond human imagination or comprehension. Absent an intuition for the mediating complexities that a cell’s interior genetic machinery almost certainly contributes to human cognition, it’s understandable that sweeping, simplistic generalizations about how such machinery works are built into our most advanced ideas about simulating intelligence today. Fortunately, some brilliant folks have recently been using computers to illuminate the amazing computational complexity of cell interiors.
Having watched this video, can you still believe that the interior genetic machinery of cells plays no intimate role in mediating thought? Long before animals had brains, their cells had nuclei, each of which is a very powerful computer. The amount of computing that the interior of every cell performs in real time is mind-boggling when you try to visualize it. The interior computer of a cell is deeply and directly connected to its exterior signalling network. Can a cell interior’s contribution to thought really be reduced to a single weighted logistic function? Is “memory” really as simple as the count of glutamate receptors installed at a synapse?
Hidden in our simplistic models of neural networks are some very human conceits, which reveal that the models we make are probably just marionette contrivances of our own innate ability to code. In other words, a neural network may not be a model of artificial intelligence at all, just a new tool for human programming, one to which we have attributed cognitive properties that we ourselves manually programmed into it. Hidden in ALL neural networking models is an essential element of human massaging to make them work. In essence, neural networks work because we hand-tune them to work, and the model framework simply hides the fact that we hand-code them just as surely as if we had coded the solution in C++ ourselves.
“Initial parameters of neural networks are as important as the network architecture and initialization has been thoroughly studied in the past.”
…wait, what do they mean, “initial parameters”? Neural networks don’t work at all, or learn very slowly, if a human being doesn’t carefully CHOOSE the initial weights for the network. Sometimes the human chooses the weights indirectly, by choosing a randomization function or a distribution function, but every weight is still initially chosen by a human mind, which consciously discards any choices that don’t result in a working simulation. Who chose the weights for your brain? Technically the cells that constructed your brain did/do, and they started out as a single cell that apparently knew how to weight your brain’s initial neural network correctly to compute you. If your cells chose your neural network’s initial weighting… why do our models of neural networks assume that they stop adjusting those weights after initialization? Where does the “noise” that represents these initial weights come from? So there is a ghost in the machine… our artificial neural networks only work because a human coder imparts their bias on the initial weights and connectivity of the network such that it eventually solves the problem the human coder wants it to solve. Biological neural networks work because a “magic” cellular programmer biases our initial weights (and probably all thoughts) to solve the problem it wants the brain to solve. If human minds are necessary to configure a neural network and choose its initial state in order for it to solve a problem, then is it really solving any problem WE didn’t manually program it to solve?
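How much the initial parameters matter is easy to verify in a toy setting. In the sketch below (a minimal one-hidden-layer network with a squared-error loss; the shapes, seed and scale are illustrative assumptions), initializing every weight to zero makes every gradient exactly zero, so no amount of training can ever move the weights; a human-chosen random initialization is what makes learning possible at all:

```python
import numpy as np

# Toy one-hidden-layer network with squared-error loss.
def grads(W1, W2, x, y):
    h = np.tanh(W1 @ x)                          # hidden activations
    err = W2 @ h - y                             # prediction error (scalar)
    dW2 = err * h                                # gradient for output weights
    dW1 = np.outer(err * W2 * (1 - h**2), x)     # backprop to hidden weights
    return dW1, dW2

x, y = np.array([1.0, -1.0]), 1.0

# Zero initialization: every gradient is exactly zero, so gradient
# descent can never change a single weight.
zero_dW1, zero_dW2 = grads(np.zeros((3, 2)), np.zeros(3), x, y)

# Human-chosen random initialization: gradients are nonzero and the
# hidden units differentiate, so learning can proceed.
rng = np.random.default_rng(0)
rand_dW1, _ = grads(rng.normal(scale=0.5, size=(3, 2)),
                    rng.normal(scale=0.5, size=3), x, y)

print(np.all(zero_dW1 == 0), np.all(zero_dW2 == 0))  # True True
print(np.any(rand_dW1 != 0))                          # True
```

The human contribution hides in that `rng.normal(scale=0.5, ...)` line: someone chose the distribution, the scale and, implicitly, the seed, and discarded choices that didn’t train.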
Our most advanced understanding of and models for human “intelligence” are really just reflections of our own programming.
I assert that in the fullness of time we will conclude that brains and nervous systems are the wiring machinery our minds use to keep us alive and functioning as organisms but are not the computing mechanism that gives rise to “consciousness” or our ability to code, engineer, write great works of literature or invent mathematical explanations for the fabric of the Universe. I suspect that we will find those elements of intelligence inside the cell where they manifested themselves the moment the first cell replicated and began to adapt to its environment. I don’t believe that a neural network of any complexity ever has “revolutionary ideas” without another major influence constantly challenging it to make illogical leaps to find entirely new approaches to the problems it chooses to overcome.