Fooling Ourselves with AI

Posted on June 16, 2017 by TheSaint in Artificial Life, Things that NEED to be said

I’ve worked with neural networks a lot over the years, and the thing that keeps popping out of them is that they are NOT the basis of human intelligence or sentience. Consciousness, as we know it, cannot emerge from the primitive models of neural networks that we play with on computers today. They’re very good at recognizing patterns and are very informative about how animal motor skills may be trained, but they can’t learn to think or have ideas of their own. Humans like to pretend that consciousness as we experience it is uniquely human and that it popped into existence some 40,000 years ago with the emergence of human tools and cave drawings. One can hardly blame creationists for their skepticism of evolutionary theory given this common view of the human condition. As a scientist I would observe that nothing as complex and nuanced as intelligence can be the product of such a rapid evolutionary phenomenon. I have written many articles about my skepticism of modern Artificial Intelligence hype. I’ve heard extremely intelligent and ostensibly knowledgeable authorities on the subject claim that we are on the verge of a Skynet-like revolution in Artificial Intelligence that will wipe out all jobs and maybe the human species. In reality I have seen zero indication that any of the AI toys currently being played with by our most advanced AI researchers are in any danger of “waking up” and taking over from us.

One of the most interesting properties of human intelligence is our ability to “code,” which can be said to be a vastly more advanced version of our ability to make tools. This capacity for tool making and coding is very sophisticated and cannot be rationally explained by a recent genetic fluke or even the slow, plodding evolution of animal nervous systems. I’ve said before that human minds are not MADE by human minds; they are constructed by witless cells. It is CELLS that exhibit the closest analog to self-programming and self-design that we actually observe in nature. Before delving into the genetic basis for intelligence, it’s probably worth discussing the evolutionary purpose and nature of neurons first. Neurons are an interesting combination of sensor and motion-control system. Our earliest examples of neurons appear near the mouths of simple organisms, as sensors for locating and collecting food in a liquid medium. Single-celled organisms were able to propel themselves and engage in complex motion and feeding behavior without neurons, but as cells began to group themselves into organisms, neurons emerged almost immediately as a means of communication and coordination.

Here we see an amoeba, a single-celled organism, sensing and consuming a paramecium, another single-celled organism capable of rapid propulsion and apparently even a systematic ability to try to escape consumption by the amoeba. One might observe that these cells are already exhibiting neuron-like behavior in their sensory and motion-control properties. To engage in this kind of motion they obviously also need a pretty controlled source of energy production and management. We find neurons in “predatory” organisms that hunt and eat other organisms. Chlorophyll-based organisms like algae don’t engage in these behaviors, presumably because they can eat “sunlight” and gases that can be captured without engaging in any complex behavior.

Jellyfish and their near relatives are perhaps the most interesting primitive organisms making maximum use of a simple nervous system.

http://animals.mom.me/jellyfish-muscles-11331.html

“All jellyfish have a ring of muscle that encircles the bottom of the bell, which is the main component of the jellyfish anatomy. The bell is hollow and open-ended, allowing it to fill with water. The muscles around the bell contract, squeezing out the water and propelling the jellyfish forward, upward or downward, depending on the position of the bell at the time of compression.”

Jellyfish are currently believed to be the world’s oldest example of an “animal.” Cells specializing in the role of signaling sensors appear to have evolved as an adaptation of multi-cellular animals needing to engage in complex coordinated movement in response to sensory input. Now the question is… do jellyfish nervous systems need to be trained, or are they born genetically pre-trained? If they weren’t born pre-trained, how they learn may be fairly straightforward. They taste something edible and they send a signal which triggers global motion in the direction of the food source. If you were a jellyfish with tentacles drifting all around you and you got conflicting signals from different directions, focusing your motion in the direction of the greatest number of food signals would probably be a reasonable response to maximize calorie consumption. This behavior might involve a relatively short-term version of memory constituting a pretty basic neural network characterized by immediate stimulus and an automated, genetically programmed response.
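To make that idea of an automated, genetically programmed response a bit more concrete, here is a minimal sketch of my own (the tentacle count, angles and signal values are invented for illustration, not taken from any jellyfish research): a fixed, never-trained rule that maps taste signals from each tentacle to a swim direction.

```python
import numpy as np

# Purely illustrative: eight tentacles at fixed, evenly spaced angles around the bell.
TENTACLE_ANGLES = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def swim_direction(food_signals):
    """Hard-wired (never trained) stimulus-response rule: head toward the
    vector sum of taste signals, i.e. the direction of the strongest
    combined food stimulus. food_signals holds one intensity per tentacle."""
    x = np.sum(food_signals * np.cos(TENTACLE_ANGLES))
    y = np.sum(food_signals * np.sin(TENTACLE_ANGLES))
    return np.arctan2(y, x)  # heading in radians

# Two tentacles near angle 0 taste food, so the rule says to swim roughly
# in their direction (a heading near 0 radians).
signals = np.array([0.9, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1])
print(swim_direction(signals))
```

Nothing in this rule ever changes with experience; the “weights” are baked in from the start, which is exactly the kind of genetically determined configuration the paragraph above describes.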

Box Jellyfish Actively Hunt Fish, Research Finds

Here’s the important point. It would seem likely that long before complex animals evolved nervous systems that could “learn” and “remember” complex patterns from experience, the earliest organisms with nervous systems relied on genetically programmed behaviors. Even if the organism’s neurons directly linked to and triggered muscular reflex responses, the configuration of neurons and behavior was genetically determined. One might wildly extrapolate to assert that over millions of years of evolution, really complex muscular responses to really complex neural configurations emerged, somehow giving rise to neural networks that learn language, compose poetry and invent Special Relativity. Does that sound like an implausible leap? It does to me. In a neural network, “memory” is represented by the weighting of static action-potential functions within each neuron. Only recently have more dynamic approaches to neural networking included the ability for the network to grow or change its own connectivity. The collective weight settings for each neuron in the network constitute the network’s entire “memory” of anything.
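For readers who haven’t poked at one of these models, here is a minimal sketch of the textbook artificial neuron the paragraph above refers to; the numbers are illustrative only. The point is that the weights and bias are the unit’s entire persistent state.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A textbook artificial neuron: a weighted sum of its inputs squashed
    through a logistic (sigmoid) activation. The weights and bias are the
    ONLY persistent state, i.e. everything the unit "remembers"."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

# Illustrative numbers only: two different weight settings give the same unit
# completely different responses ("memories") for the same stimulus.
stimulus = np.array([0.2, 0.9, 0.1])
print(neuron(stimulus, np.array([2.0, -1.0, 0.5]), bias=0.1))
print(neuron(stimulus, np.array([-0.3, 3.0, 1.2]), bias=-0.5))
```

Everything a trained network of these units “knows” is just a large table of numbers like these.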

It used to be a popular belief that our memories were an “electrical state” contained in our neurons and not necessarily encoded in any persistent cellular data structures.

http://www.human-memory.net/processes_storage.html

“After consolidation, long-term memories are stored throughout the brain as groups of neurons that are primed to fire together in the same pattern that created the original experience, and each component of a memory is stored in the brain area that initiated it”

How exactly does that work?   According to the neuroscience literature, the number of neuroreceptors physically installed at the synapse is part of how memories are represented.

“Neuroreceptors, including the AMPA receptor, are 3D cylinder-shaped protein complexes about 8 nanometers in size that are made up of tens of thousands of atoms. They physically move around the neuron and are mechanically installed via a process called “receptor trafficking”.”

It all sounds very robotic and automated doesn’t it?  Easy to understand, easy to visualize how a computer model might capture such a stimulus network?  The mind is just a collection of tape recordings that play themselves back on demand and the right training and sequence of inputs triggers a sequence of recordings that result in Mozart and General Relativity.

The major missing element has been the inside of the cells and neurons themselves. Until recently the interior computational activity of cells was well beyond human imagination or comprehension. Absent an intuition for the mediating role that a cell’s interior genetic machinery almost certainly plays in human cognition, it’s understandable that sweeping, simplistic generalizations about how such machinery works are built into our most advanced ideas about simulating intelligence today. Fortunately, some brilliant folks have recently been using computers to illuminate the amazing computational complexity of cell interiors.

Having watched this video, tell me that you still believe the interior genetic machinery of cells plays no intimate role in mediating thought. Long before animals had brains, their cells had nuclei, each of which is a very powerful computer. The amount of computing that the interior of every cell performs in real time is mind-boggling when you try to visualize it. The interior computer of a cell is deeply and directly connected to its exterior signalling network. Can a cell interior’s contribution to thought really be reduced to a single weighted logistic function? Is “memory” really as simple as the surface count of the number of glutamate receptors on the tip of an axon?

Hidden in our simplistic models of neural networks are some very human conceits which reveal that the models we make are probably just marionette contrivances of our own innate ability to code. In other words, a neural network may not be a model for artificial intelligence at all, just a new tool for human programming to which we have attributed the human cognitive properties we manually programmed into it. Hidden in ALL neural networking models is an essential element of human massaging to make them work. In essence, neural networks work because we hand-tune them to work, and the model framework simply hides the fact that we hand-code them just as surely as if we had coded the solution in C++ ourselves.

https://www.quora.com/Why-do-initial-weights-and-bias-in-a-neural-network-affect-so-heavily-the-speed-of-convergence

“Initial parameters of neural networks are as important as the network architecture and initialization has been thoroughly studied in the past.”

…wait, what do they mean by “initial parameters”? Neural networks don’t work at all, or learn very slowly, if a human being doesn’t carefully CHOOSE the initial weights for the network. Sometimes the human being chooses the weights indirectly by choosing a randomization function or a distribution function, but every weight is still initially chosen by a human mind which consciously discards any choices that don’t result in a working simulation. Who chose the weights for your brain? Technically the cells that constructed your brain did/do, and they started out as a single cell that apparently knew how to weight your brain’s initial neural network correctly to compute you. If your cells chose your neural network’s initial weighting… why do our models of neural networks assume that those cells stop adjusting the weights after initialization? Where does the “noise” that represents these initial weights come from? So there is a ghost in the machine… our artificial neural networks only work because a human coder imparts their bias on the initial weights and connectivity of the artificial neural network such that it eventually solves the problem the human coder wants it to solve. Biological neural networks work because a “magic” cellular programmer biases our initial weights (and probably all thoughts) to solve the problem it wants the brain to solve. If human minds are necessary to configure a neural network and choose its initial state in order for it to solve a problem, then is it really solving any problem WE didn’t manually program it to solve?
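To see how heavily the human hand rests on those “initial parameters”, here is a toy sketch of my own (not taken from the Quora thread): the same tiny network is trained on XOR twice, and the only thing that changes between runs is the initialization scheme a person picked.

```python
import numpy as np

# A tiny 2-3-1 network trained on XOR with plain gradient descent. The ONLY
# difference between the two runs below is the human-chosen initialization
# scheme, i.e. the "initial parameters" the quote above refers to.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(init, steps=5000, lr=1.0, seed=0):
    rng = np.random.default_rng(seed)
    if init == "zeros":     # every weight starts at exactly 0
        W1, W2 = np.zeros((2, 3)), np.zeros((3, 1))
    else:                   # small random starting point chosen by a human
        W1, W2 = rng.normal(0, 1.0, (2, 3)), rng.normal(0, 1.0, (3, 1))
    b1, b2 = np.zeros(3), np.zeros(1)
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)                 # hidden layer
        out = sigmoid(h @ W2 + b2)               # output layer
        d_out = (out - y) * out * (1 - out)      # backprop through the output sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)       # backprop into the hidden layer
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
    return float(np.mean((out - y) ** 2))        # final mean squared error

# With all-zero weights the hidden units stay identical forever (symmetry is
# never broken), so the network never learns XOR; a random start usually does.
print("zeros  init, final error:", train("zeros"))
print("random init, final error:", train("random"))
```

The “learning” only happens when a human picks a starting point from which learning is possible, which is exactly the hidden massaging described above.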

Our most advanced understanding and models of human “intelligence” are really just reflections of our own programming.

I assert that in the fullness of time we will conclude that brains and nervous systems are the wiring machinery our minds use to keep us alive and functioning as organisms but are not the computing mechanism that gives rise to “consciousness” or our ability to code, engineer, write great works of literature or invent mathematical explanations for the fabric of the Universe.  I suspect that we will find those elements of intelligence inside the cell where they manifested themselves the moment the first cell replicated and began to adapt to its environment.  I don’t believe that a neural network of any complexity ever has “revolutionary ideas” without another major influence constantly challenging it to make illogical leaps to find entirely new approaches to the problems it chooses to overcome.

Comments

12 Comments

    • Yes I saw this article… it’s kind of a “no doh” observation if you’ve done any work with graph computing because complex data structures with many associations are always represented by higher dimensional structures that cluster similar complex ideas. For example how many relationships are there between the words “prime” and “dime”? There are many seemingly unimportant associations between these words that we seem to be able to recognize instantly… as though our brains have already stored them in a way that associates them along some or all of these abstract connections. This is what you would expect to see in the structure of a neural network that has clustered associations between data in many dimensions.

      There are a number of fun and interesting brain exercises to try on yourself. Try to name two words that have nothing in common then find a friend who can prove you wrong. Asteroid and love generated the longest debate I’ve experienced with this game (they’re both spelled with an ‘o’). Did you find it much harder to think of two things that have no relationship to one another than two things that had some relationship? If your brain is a neural network… HOW does it come up with two unconnected ideas? It SHOULD be impossible right?

      Now here’s a deep one. Try to come up with an original idea that you cannot derive from previous experience or knowledge. Here’s a starting point. Describe what blue smells like without referring to any blue things.

      These are all exercises in short-circuiting your neural network to feel how your mind responds when it can’t rely on its higher dimensional associations to compute. If we have the “conscious” ability to override or deliberately create a new circuit to perform these tasks… where in the brain is the circuit that does that and what trained it? Do you know pig-Latin? How long would it take you to deliberately train your mind to become proficient at it? What neural circuit read these words and made the choice to try it?
      http://www.wikihow.com/Speak-Pig-Latin

      Don’t think about a pink gorilla while you’re trying it either…

      • I was profoundly influenced by Marvin Minsky’s seminal book on AI, “The Society of Mind”. The book proposed a model for intelligence that in many respects is far more plausible and seems to explain many more mental phenomena than modern neural network ideas do. One important insight from The Society of Mind is that the brain behaves like it is full of many different independent conscious entities, each obsessed with a different function: eating, sleeping, grooming, etc. These “entities” compete for dominance based on sensory input, such as your stomach telling you you’re hungry, enabling the “appetite” entity to shout everyone else down for control of the body. When you think about this model for consciousness you quickly realize that a neural network simply can’t do that. For one thing, they process data in lock-step order… parts of the neural network don’t break off to run independently at their own clock rate, then turn around and start influencing the rest of the network to comply with their own interests. “Stop listening to music and reacting to it and EAT!” These “entities” voting or competing with one another collectively give rise to consciousness and self-motivated behavior.

        These are the PROGRAMMERS of the mind that we’re looking for. If you’ve read my previous blogs on the subject you’ll note that I suspect the mysterious, still-unexplained glial cells play a vital role in “programming” the neural network component of the brain.

        • I haven’t read your previous blog posts yet, just discovered this blog via Twitter. But I have agreed with your position WRT glial cells for a while. (Some links may be broken, these are old posts.)

          Another issue with (current, AFAIK) neural network models is that they lack a proper emulation of the analog nature of neural activity.

          But I would disagree that it’s the interior of the neural(/glial) cells that’s the primary locus of information processing that (presumably) leads to consciousness, etc. Rather, I would place most of that calculation in the cell membrane of the dendritic tree.

          Certainly the soma can be in a variety of states depending on its reaction to previous activity. But in most neurons, especially pyramidal cells, I’d expect the majority of the calculation to already be complete by the time the incoming (processed) signalling reaches the major dendritic branches. Note also the humongous area of membrane available for calculation, given the ability to integrate incoming dendritic signals on sub-millisecond scales.

          • See you’re making me crib material from another blog I’m working on. It turns out that we have a little academic validation for our faith in the role of Glial cells in defining consciousness.

            https://www.scientificamerican.com/article/the-root-of-thought-what/

            Now this article suggests that we can both be right in this argument, because it suggests that the mediating messaging signals for glial cells are direct-contact, inter-cellular calcium ion exchanges. This would NOT be an example of synaptic communication but an older form of chemical diffusion communication mediated directly by the cells’ genes. Sure, the glial cells need long-distance electrical signaling, but they may do their most important communicating via membrane diffusion with their immediate neighbors. If you just say that a bigger neural network controls the smaller one, you still haven’t answered the question… how did the glial cells build the brain and connect themselves correctly to begin with, BEFORE they were connected correctly?

            You’re kind of insisting on defending the position that there is a genetic machine capable of assembling a working human mind out of trillions of cells, from scratch, with NO neural network involved, that then just shuts off and plays no further role in human thought, having fulfilled its only function of BUILDING BRAINS. I would be willing to take the bet that the BRAIN-BUILDING force remains in charge even after the brain is assembled. I assert that the brain is just there so that the real source of “consciousness” can puppet the human body around successfully.

          • Here’s another interesting fun fact about glial cells.

            “How do glial cells communicate?
            Brain Cells that Communicate without Electricity: Calcium Waves in Glia. Glia are brain cells that cannot generate electrical impulses. … Probing the brain with electrodes, the way neuroscientists do to understand neuronal communication, is useless to intercept glial communications.”

  1. You might be right about computation happening within cells.

    But the possibility that this might be shown to be important someday (for now it is just a guess concerning cognition and consciousness) is no reason to discount the very well-established role of interconnections between cells in just about every aspect of cognition imaginable.

    Almost every paper in the history of neuroscience is about networks of cooperating neurons representing something: memory formation, visual perception, oscillations, ERPs, etc.

    Maybe I am misunderstanding something, but to claim that neurons do not cooperate to represent information is very wrong.

    It also isn’t clear why a failure of AI has any bearing on the actual informational purpose of neural networks in the brain.

    • You’re taking some liberties with what I said, Jesse. People write and believe lots of false things; volume doesn’t make them more legitimate. I did not claim that neurons don’t cooperate to represent information. I claimed that our models of neural networks are not sufficient to explain human intelligence. They can’t invent or code. Something MAJOR is missing. Since we DO NOT have any working examples of human-like artificial intelligence, the fact that everything we believe to be true has FAILED to produce success in modeling intelligence is evidence that it is wrong, or at least missing something essential.

      90% of our brain is composed of cells that DO NOT participate in neural networking. Is it really out of line to suggest that they are playing an important role in cognition that we do not understand and probably NEED to understand to solve the problem? Nobody ever became a “famous” scientist by agreeing with the status quo.

  2. Found your piece via Twitter. As a behaviorist (behavior analyst), I would tend to agree with much of what you say here. While physiology is beyond my expertise, what I can offer is what the science of behavior tells us. Respondent and operant conditioning are central to how we learn and “process” stimuli. You simply can’t look at any behavior outside the context of the organism’s phylogenic and ontogenic history.

    The human brain has an amazing capacity for learning through association. I do this daily in my clinical work with children who have no language or “verbal behavior”. Yet through intensive training, relying on a number of techniques, we can help establish stimulus control in the environment, such that associations can be made between the child, items/activities, and verbal behavior with others. This starts with basic motivating operations (chips, candy, water, etc.), and slowly leads to more advanced “thought”, if you will, such as distant or unseen items and concepts.

    I’ve always thought that AI should start where everything else starts – with operant learning. Thus, a form of cultural or behavioral selection occurs, by which behaviors are rewarded or punished according to benefits/harms they bring to the organism/device. The complexity beneath the skin – the physiology – must allow for this process to unfold. But you must have a basic desire – something like food/water/air/sex, then a mechanism for trying to acquire it, then a mechanism by which the behavior is reinforced or punished according to the degree to which it achieves homeostasis with the desire (an equilibrium between deprivation and satiation).

    • You sir are still describing a very neural network view of behavior. Sure we can reward a rat with cheese for learning to solve a maze, but HOW did it figure out the maze??? One of the most interesting studies I have read on the subject found that randomly rewarding rats trained them faster than consistently rewarding them for desired behavior. Why do RANDOM rewards train better than consistent ones? These are the AI problems that neural networks seem to provide poor models for. Have we ever heard of a neural network that learns faster when we lie to it occasionally? Have you ever tried inconsistently rewarding/punishing people to see if they learn faster?

      I’ve tried it and it works. People remain more engaged and better focused on problems they can almost but not quite solve. An indirect correlation is more “fascinating” to try to learn than a direct one. A human when presented with a direct correlation will stop doing something for a reward when they are satiated. A human when presented with a puzzle will keep trying to solve it independently of reward satiation. We call them “video games”. 🙂

