Computing With Noise

Posted on June 30, 2017 by TheSaint in Artificial Life, Things that NEED to be said

In my last few blog articles I attempted to explain how our experience of sentient consciousness, although complex, probably arises from the quantum noise of water molecule interactions in the cell’s cytoplasmic medium.  That’s a big leap, and I would like to expand on the idea in more detail, but some foundational ideas need to be presented before I can put that narrative together.  In a previous article on artificial life titled “Life Needs Noise” I attempted to illustrate just how different biological computing is from man-made computing.

Life Needs “Noise”

[Images: truly random noise (left) and Perlin noise (right)]

I want to make the distinction between truly RANDOM noise and noise that computes as clearly as I possibly can, in order to illustrate important distinctions in the way we think about the idea of randomness.  Truly random noise has no predictable structure: any sample value in a random noise signal is equally likely regardless of when or where the signal is sampled.  Structured noise may show the same aggregate statistical properties as random noise but contain some predictable structure.  Both of these images have the same number of white and black pixels, and the same distribution of black and white pixels above a certain scale, but below that scale one of them (right) exhibits structure while the other does not.  It can often be very difficult to distinguish noise from information; in fact there is no mathematical way to prove that a given sample of data is actually random.  The important point is that quantum noise and thermal turbulence aren’t necessarily truly RANDOM, they’re just unpredictable to us.  In quantum physics there is an important, subtle difference between events that are genuinely random and events that we can’t predict because we can never know the initial state of the system we’re trying to simulate.  We often call turbulent systems “random” because turbulence is very hard to compute, and it’s usually impossible to know the exact initial state of a turbulent system well enough to solve the Navier-Stokes equations that model it.
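Here’s a minimal sketch of that distinction (my own toy example, not from any particular source): both signals below contain exactly the same number of zeros and ones, so their aggregate statistics match, but a simple autocorrelation test exposes the hidden structure in one of them.

import numpy as np

rng = np.random.default_rng(0)
n = 1024

# Same "ink" in both signals: exactly half ones and half zeros.
random_noise = rng.permutation(np.repeat([0, 1], n // 2))      # uniformly shuffled, no structure
structured_noise = np.tile(np.repeat([0, 1], 8), n // 16)      # hidden blocks with a 16-sample period

for name, sig in [("random", random_noise), ("structured", structured_noise)]:
    centered = sig - sig.mean()
    # Autocorrelation at lag 8: near zero for true noise, far from zero (about -1)
    # for the blocky signal, because shifting by half its period flips every block.
    lag8 = np.dot(centered[:-8], centered[8:]) / np.dot(centered, centered)
    print(f"{name}: mean = {sig.mean():.2f}, autocorrelation at lag 8 = {lag8:.2f}")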

[Images: 2D turbulence]

The images to the left are of turbulent systems which look very random but are not.  The Navier-Stokes equations themselves are UNPROVEN to model all turbulent systems we encounter in nature: we assume that they are correct, but the human species has never been able to prove that they are a complete model of turbulence.  Even if the Navier-Stokes equations are correct, we can’t actually compute a turbulent system accurately with a man-made computer, because our computers lack the numeric precision and range required to model a complex turbulent system.  In much of the scientific and biological literature the term “random” is thrown around very casually, and it’s important to realize that most of the time what the author means by random is “I don’t know” or “something happened that I don’t understand.”

Now that we’ve made a distinction between truly random data (which we can never prove is actually random), structured data that seems random only because we are ignorant of the initial state of the system and the boundary conditions that produced it, and data that appears random only because we don’t understand how it was produced, we can talk about what it means to compute with noise or random data.  Structured noise has many valuable computing properties that we rely on for common computing tasks we take for granted all the time.  Machines like this potato harvester rely on noise (vibration) to compute potatoes out of rocky mixed soil; imagine trying to write a computer vision program to achieve the same result more efficiently without relying on noise.  My favorite example of noise-based computing is of course the change sorting machine.

If you’re trying to visualize how a cell systematically shuttles complex organic molecules through its cytoplasm, directing them to their correct destinations, just imagine that the cell’s rigid internal structures, combined with the quantum and thermal jostling of the surrounding water molecules, end up reliably jiggle-computing all the various molecules into their correct positions and destinations.  The change sorter is a one-dimensional version of the cell’s three-dimensional interior, and the coins are rigid two-dimensional substitutes for the cell’s highly flexible three-dimensional organic molecules.  Without the structure imposed by the physical change sorter, vibration or noise would just randomly jostle the coins around, but in the presence of the correct structure, the noise powers the sorting process, which results in a correct computation of the value of all of the coins being sorted.  Note that not just any noise would do the job: if the centrifuge spun too fast or too slow, the machine would not work.  Think of it as a device that sums numbers by shaking them correctly.  You drop the numbers to be summed in the top, give it a spin, and out comes the sum.  The numbers have to have physical properties for this to work, just as living systems need to compute physical life, not abstract electrical results.  Also note that the change sorter requires an external source of power because friction overcomes the natural quantum jiggling of large-scale objects; everything just jiggles for free at the quantum level.


Since a cell is topologically a SPHERE, when it is in a confined space, pressed against other cells on all sides, the RATE at which ions move through its membrane channels gives our (hypothetical) cell a means of computing digits of Pi.  The INTERIOR of a cell can genetically control its ion production.

Another favorite example of computing with truly random noise is how to compute Pi using a shotgun.  You simply draw a square on a target, inscribe a circle inside it, then fire a shotgun at it.  Count the pellets that land inside the circle and divide by the total number of pellets that land anywhere inside the square; the ratio is an approximation of Pi/4.  If I wanted to build a liquid Pi computer I might try confining a spherical cell membrane to a cube and doping the liquid interior of the cell with a little salt, then timing how long it took the cell’s interior to reach electrical equilibrium with its exterior as Na+ ions cross the membrane via Na+ ion pores.  The time it took for the charge to reach equilibrium would give me an estimate of Pi.  Now I just need a mechanism to control ion production and keep count…
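Here is the shotgun estimator as a minimal sketch in code, with uniformly random “pellets” standing in for shot: count the pellets inside the inscribed circle, divide by the total pellets inside the square, and the ratio converges on Pi/4.

import random

def estimate_pi(num_pellets: int = 1_000_000) -> float:
    # Monte Carlo Pi: scatter points uniformly over a square with an inscribed circle.
    inside = 0
    for _ in range(num_pellets):
        x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:        # pellet landed inside the circle
            inside += 1
    return 4.0 * inside / num_pellets   # (inside / total) approximates Pi / 4

print(estimate_pi())                    # roughly 3.14 with a million pellets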

[Animation: a strand of DNA passing through a nanopore]

A nanopore transcribing a strand of DNA into an electrical signal on the surface of a cellular membrane.  Sort of like executing code…

Stated more bluntly, combinations of different ion channels in proximity to one another on the surface of a cellular membrane function as transistors, and the surface of the cell may literally compute electrically like a Turing machine.  Proteins, RNA and DNA are the data structures that enable the cell to track its computational results, and mitochondria are the batteries that enable a cell to power itself and modulate its ion production.


Three dimensional electrical computing surfaces folded like brain tissue inside the cell… the proteins, RNA, DNA and enzymes are data structures passed as parameters to biological functions
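To make the transistor analogy concrete, here is a toy sketch (purely illustrative, not a biophysical model): treat each gated channel as a switch that conducts only when its trigger is present.  Two channels along the same conduction path then behave like an AND gate, and two channels side by side behave like an OR gate, exactly the way series and parallel transistors build logic.

def channel(trigger_present: bool) -> int:
    # A gated ion channel as a switch: it conducts (1) only when its trigger is present.
    return 1 if trigger_present else 0

def in_series(a: bool, b: bool) -> int:
    # Two channels along one conduction path: current flows only if both are open (AND).
    return channel(a) & channel(b)

def in_parallel(a: bool, b: bool) -> int:
    # Two channels side by side: current flows if either is open (OR).
    return channel(a) | channel(b)

# Truth tables for the two arrangements.
for a in (False, True):
    for b in (False, True):
        print(f"{a!s:>5} {b!s:>5}   series: {in_series(a, b)}   parallel: {in_parallel(a, b)}")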

[Image: creationist meme]

Although anybody can examine a modern cell and see a powerful, highly regulated computer, the question remains: how does one just form out of nothing?  One of the favorite (and reasonable) creationist arguments against evolution is that they can’t visualize a way for order and complexity to evolve from chaos.  The argument goes that if scientists can’t explain how complex machines evolve out of chaos, therefore God.  If that argument holds, then quantum noise is God doing the designing.  Let’s take a closer look at how our change sorter works without the presence of electricity to confuse us about its man-made properties.  In the example of the inert plastic change sorter below, the absence of noise means that the guy demonstrating his rigid change sorter has to tap and jostle it a bit manually to get it to work.  This manual jostling would not be necessary at molecular scales, because thermal and quantum noise acting on the coins would perform the same function automatically.

Here is an example of an organic “change sorting” machine in action with NO POWER.

How does this mysterious magical computing work?  Water is doing the sorting for us via capillary action.

“If the diameter of the tube is sufficiently small, then the combination of surface tension (which is caused by cohesion within the liquid) and adhesive forces between the liquid and container wall act to propel the liquid.” [1]

Wait… how does this make the fluid move and sort, all of its own volition?  Click deep enough into the explanation and the answer is that “quantum water magic” does the computing!

Dispersive forces are a consequence of statistical quantum mechanics

So liquid water, just sitting passively in the correct static environment, will sort whatever stuff is suspended in it.  Take a minute to let that idea soak in.  Water sorts things by just sitting there.  If the boundaries of the rigid container the water is sitting in are shaped like a plastic coin sorter, it just sits there and sorts whatever particles flow into it.  Water computes when it is doing nothing.  So… what is a computer but a really complicated electron sorter?
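For a sense of scale, the capillary action quoted above can be estimated with Jurin’s law, h = 2γ·cos(θ)/(ρ·g·r).  Here is a minimal sketch using textbook values for clean water and glass at room temperature (my assumptions, not numbers from the demonstration); the effect grows dramatically as the channels shrink toward cellular dimensions.

import math

def capillary_rise(radius_m: float,
                   surface_tension=0.0728,    # N/m, water at about 20 C
                   contact_angle_deg=0.0,     # clean glass, near-perfect wetting
                   density=1000.0,            # kg/m^3
                   g=9.81) -> float:
    # Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r), height in meters.
    return 2 * surface_tension * math.cos(math.radians(contact_angle_deg)) / (density * g * radius_m)

print(f"{capillary_rise(1e-4):.3f} m rise in a 0.1 mm tube")    # about 15 cm
print(f"{capillary_rise(1e-6):.1f} m rise in a 1 micron pore")  # about 15 m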

Here’s a more complex example of a physical machine that performs calculations with marbles.  The power for the machine comes from the potential energy the speaker imparted to the marbles when he lifted them against gravity to the top of the device.  At a quantum molecular level, a machine like this might require no external power; the thermal jostling of water sitting in the machine would eventually jostle the marbles suspended in the fluid into their calculated positions.  I don’t have a convenient way to illustrate this kind of computing at a macro scale, but here’s a related idea that relies on simple passive water evaporation to power an engine.  Note that water evaporation is the result of the same quantum-level thermal chaos that we are discussing.

“Technically the water is not turning into a gas, but random movement of the surface molecules allows some of them enough energy to escape from the surface into the air.”

“at the surface of the liquid, lone molecules may end up getting enough kinetic energy to break free due to the random nature of molecular motion at basically any temperature”
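To put a number on that quoted “random nature of molecular motion”: assuming an ideal Maxwell-Boltzmann distribution and treating escape as simply exceeding a kinetic-energy threshold (a crude stand-in for the real surface binding energy), the fraction of molecules that momentarily exceed a given multiple of kT looks like this.

import numpy as np

def fraction_above(threshold_kT: float, samples: int = 2_000_000, seed: int = 0) -> float:
    # Fraction of ideal-gas molecules with kinetic energy above threshold_kT * kT.
    # Each velocity component is Gaussian in the Maxwell-Boltzmann distribution,
    # so in dimensionless units E/kT = (vx^2 + vy^2 + vz^2) / 2.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(samples, 3))
    energy_over_kT = 0.5 * (v ** 2).sum(axis=1)
    return float((energy_over_kT > threshold_kT).mean())

for multiple in (1, 3, 10):
    print(f"E > {multiple:2d} kT: {fraction_above(multiple):.4%} of molecules at any instant")

Even the rare tail matters: a glass of water holds on the order of 10^25 molecules, so a tiny fraction above the threshold is still an astronomical number of escape candidates at every instant.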

There’s that scientific word for magic again… “random” power from somewhere makes this device compute… hmmm…  Now here is where the creationists get stuck.  At classical scales, in the presence of gravity, we never see complex machines just assemble themselves, because gravity tends to confine physical interactions between large objects to 2D space.  There is no way to build a complex machine confined to 2D space because you can’t maneuver the parts together or pin them to anything.  Gravity dominates the forces we experience at this scale, exerting a constant attraction on everything, and friction prevents parts from moving around on their own.  This was one of the huge mysteries of quantum mechanics in the early days: we couldn’t imagine how it was possible for electrons to orbit an atom without falling into it.  It took us a while to figure out that electromagnetism exerted a much stranger force on electrons at the quantum level, such that they were attracted to discrete orbitals around atoms.  Imagine an atomic nucleus the size of a pool ball and an electron the size of a golf ball.  Try to push them together and you might initially feel some mutual repulsion, but with enough force the golf ball would drop into a suspended orbit around the pool ball.  Push a little harder and it would cross another threshold and orbit closer.  Harder still, and closer again… Release the golf ball and it would “randomly” emit a flash of light and jump to a more distant orbital position.  We never experience weird forces like this at classical scales, except perhaps when playing with magnets, so it’s understandable that classical minds from the 19th century could not visualize a mechanism by which complex machines could just POP into existence out of nothing, apparently in defiance of the second law of thermodynamics.

Quantum mechanics allows quantum-scale systems to magically borrow energy as long as it’s given back eventually.  Productive energy can also easily be injected into quantum systems by subtle means: sunlight heating them up in defiance of their natural tendency to cool, gravity having imparted energy to them in the past that leaks out of them later, or chemical energy stored in them under previous pressure and temperature conditions being released later in the presence of a catalyst.  The ease of injecting energy into quantum systems can often make them appear to mysteriously generate it.

Now let’s explore EXACTLY how water can invent complex computing machines out of nothing.

  1. Water is constantly vibrating and shaking at the quantum level; it will sort particles and generate power simply by sitting in a container and evaporating.
  2. Doped with a little salt, water becomes a semi-conductor and gets better at suspending particles in 3D space beyond the influence of gravity.
  3. Water is a universal solvent; it will tend to temporarily break down any material it touches into suspended atoms and molecules.

In other words, if you just drop the right materials into water and let it sit, water alone will dissolve the materials, sort them, and perform random Lego experiments on their physical configurations before tearing them apart again.  In three-dimensional space, water is free to assemble molecules into complex configurations that would not occur naturally at macroscopic scales in the presence of gravity.  Machines requiring complex maneuvering to assemble are possible in three dimensions.  Going a step further, consider that water really acquires its suspension and semi-conductor properties when it is combined with salt… that’s NaCl and/or KCl.  Add salt to water and it becomes a semiconductor and acquires the ability to electrically compute.

Remind me again, what are the essential molecular components involved in neural computing?

“2. Concentration gradient (difference in the distribution of ions between the inside and the outside of the membrane): During the resting potential, a difference in the distribution of ions is established, with sodium (Na+) 10 times more concentrated outside the membrane than inside and potassium (K+) 20 times more concentrated inside than outside.  Because the body has far more sodium ions than potassium ions, the concentration of sodium ions outside is greater than the concentration of potassium ions inside, making the outside more positively charged than the inside.

3. The neuron membrane has selective permeability, which allows some molecules to pass freely (e.g., water, carbon dioxide, oxygen, etc.).

4. During the resting potential, K+ and (chloride) Cl- gates (channels) remain open along the membrane, which allows both ions to pass through; Na+ gates remain closed, restricting the passage of Na+ ions.

5. Sodium-potassium pump: Protein mechanism found along the neuron membrane which transports 3 Na+ ions outside of the cell while also drawing 2 K+ ions into the cell; this is an active transport mechanism (it requires energy (ATP) to function).

6. Electrical gradient (difference in positive and negative charges across the membrane): Due to the negative charge inside the membrane, K+ (a positively-charged ion) is attracted into the neuron; Na+ is also attracted to the negative charge, but remains mostly outside of the neuron due to the sodium-potassium pump and the closing of sodium gates.

7. The advantage of the resting potential is to allow the neuron to respond quickly to a stimulus.”
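A minimal sketch of what those quoted 10x and 20x concentration ratios imply electrically, using the standard Nernst equation E = (RT/zF)·ln([out]/[in]) at body temperature.  The absolute concentrations are typical textbook values that I am assuming here, chosen to match the quoted ratios.

import math

R, F, T = 8.314, 96485.0, 310.0            # J/(mol*K), C/mol, body temperature in kelvin

def nernst_mV(out_mM: float, in_mM: float, z: int = 1) -> float:
    # Equilibrium (Nernst) potential across the membrane, in millivolts.
    return 1000.0 * (R * T) / (z * F) * math.log(out_mM / in_mM)

print(f"E_Na = {nernst_mV(out_mM=145, in_mM=15):+.0f} mV")   # sodium, ~10x more outside: about +61 mV
print(f"E_K  = {nernst_mV(out_mM=5, in_mM=100):+.0f} mV")    # potassium, 20x more inside: about -80 mV

Those two opposing potentials, selected by whichever channels happen to be open, are the voltage levels that any membrane “logic” has to work with.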

Na+ and K+ ions… what a shocking coincidence!  The most basic, primal, fundamental, simplest possible chemical ions imaginable are coincidentally fundamental to human thought and consciousness!  It would probably be redundant to observe that the exact same ions are also responsible for METABOLISM (our ability to turn food into energy).  So the same chemical machinery that enables us to eat and power ourselves is the machinery that enables computation, and we find it at the heart of highly evolved human thought a few billion years of evolution later.  In some very primal sense one might observe that water has always been conscious.  It was just a boiling soup of bobbing Lego parts, little random ideas, experiments and innovations that kept learning and improving on itself until US.

There are many fascinating computing elements that make up the interior of cells; one of the most intriguing is the mitochondrion, which at one time may have been an independent parasitic thermophile organism that symbiotically merged with ancestral eukaryotic cells to provide the power that enabled them to evolve to much higher levels of computational complexity.  It’s valuable to note that simple prokaryotic cells, which lack both the complex many-folded membranes that make up the internal computing machinery of eukaryotes and the internal mitochondrial organelles that generate power, perform the same ATP energy production on their surfaces.  A simple prokaryotic bacterium is a floating battery with an exterior membrane that can compute a little bit.

Home made mitochondria…

Now the big reveal, if you’re not already there with me.  Consciousness, in the form of inspiration, insight and innovation, probably boils up from deep within the molecular machinery of our cells into highly structured, complex ideas.  The genetic interior of our cells is like my change sorting machine, turning random chaos into structured computational results and deliberate, systematic mutations designed to find new and improved self-designs.

Imagine a computer program that started out generating random binary numbers and executing them as programs.  Any program that “crashes” is eliminated, while any program that runs forever, reproducing, survives.  At first nearly all of its results would crash horribly.  Eventually a program that only generated binary numbers corresponding to legitimate assembly instructions would emerge, dramatically improving the quality of its mutations by confining them exclusively to viable machine instructions.  Eventually a program that only generated non-crashing sequences of instructions would emerge, further accelerating the program’s efficiency in searching for better surviving solutions.  Each generation would be smarter and dramatically faster at searching a vast permutation space of possibilities for viable programs.  Each new generation would improve its evolutionary speed by having the domain of its random search dramatically narrowed to increasingly favor only beneficial or viable new “ideas”.  Natural selection in the form of crashing would kill ALL programs that did not acquire this property of generating viable attempts at executable code.  The best programs would make copies of themselves and make slight changes to those copies, increasing the efficiency of their search for better solutions by searching the solution space nearest themselves first (AKA “sex”).  Sex is the ultimate proof that cells engage in deliberate, systematic mutation… that’s WHAT sex is!  When all programs that crash are excluded, all that is left are programs that self-perpetuate, and only the fastest and best survive in the presence of resource competition.
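Here is a toy sketch of that thought experiment, using a hypothetical 3-instruction machine of my own invention rather than real assembly language: random byte strings are “executed”, any illegal opcode or register underflow counts as a crash, the longest-running programs reproduce with a single random-byte mutation, and generation by generation the surviving lineage crashes later and later.

import random

LEGAL_OPS = {0, 1, 2}          # hypothetical 3-instruction machine: 0=INC, 1=DEC, 2=DOUBLE

def instructions_executed(program):
    # Run a toy program; return how many instructions execute before it "crashes".
    register, executed = 1, 0
    for op in program:
        if op not in LEGAL_OPS:
            break                              # illegal opcode -> crash
        if op == 0:
            register += 1
        elif op == 1:
            register -= 1
        else:
            register *= 2
        if register < 0:
            break                              # register underflow -> crash
        executed += 1
    return executed

def mutate(program):
    # Copy a program and overwrite one random position with a completely random byte.
    child = list(program)
    child[random.randrange(len(child))] = random.randrange(256)
    return child

random.seed(1)
POP, LENGTH = 300, 8
population = [[random.randrange(256) for _ in range(LENGTH)] for _ in range(POP)]
for generation in range(201):
    ranked = sorted(population, key=instructions_executed, reverse=True)
    if generation % 25 == 0:
        best = instructions_executed(ranked[0])
        print(f"gen {generation:3d}: best program runs {best}/{LENGTH} instructions before crashing")
    survivors = ranked[: POP // 5]             # crashers are culled; longest-running programs reproduce
    population = survivors + [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]

Almost every mutation is lethal, exactly as in the paragraph above, yet because the non-crashing prefix is inherited, the population ratchets toward programs made entirely of legal instructions.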

Most naturally occurring mutations in modern organisms are selected from a very narrow, safe range of possibilities that almost always have a harmless impact on the survival of the organism.  Occasionally a mutation is beneficial.  Cells can accelerate their mutation rates and choose their mutations based on various environmental conditions.  It appears that the structure of our brain and immune system genomes is actively shaped by mutation even through adulthood.  Thus mutations are random within a narrowly bounded range of choices, which leaves room for quantum noise to try only mutations that are likely to be harmless or beneficial.  The cumulative computational effect of these incrementally better mutations is a process of selective self-design.  Living systems are constantly redesigning themselves to be better survivors.

The ability to invent and systematically self-code is a fundamental, intrinsic property of life.  It began with quantum noise and water, increased in complexity to produce simple energy-processing organisms, then computing cells, and ultimately became the creative consciousness we experience now.  In my next blog we will take a closer look at the interior of a cell and how selective mutation appears to be at work improving the design and function of our cells, and most likely governing the experience we know as consciousness.  The important observation is that the interior computational power of a single cell is vastly greater than we presently recognize.

http://www.nbcnews.com/science/human-brain-may-be-even-more-powerful-computer-thought-8c11497831

“Suddenly, it’s as if the processing power of the brain is much greater than we had originally thought,” study lead author Spencer Smith, a neuroscientist at the University of North Carolina at Chapel Hill, said in a statement.

[Image: DNA transcription]

A single gene may be transcribed hundreds of times in parallel, so the internal computing parallelism of a cell is phenomenal, executing millions of literal threads concurrently.

Boy, are these geniuses in for a shock when they figure out where the rest of the brain’s computing machinery really is!

I tried to find something in the scientific literature that gave me an estimate of the number of ion channels in a living cell, but I couldn’t find anything concrete.  I hypothesize that we will find they are extremely numerous and ultimately analogous to fundamental biological computing elements.  We may be able to estimate the computing power of a cell based on a count of these pores, just as we can estimate the computing power of a CPU by how many billions of transistors it contains.  Failing that, we can ask how much code a cell may be executing.

If we just used the number of base pairs in our DNA, that would be 3 billion.  Each base pair is basically 2 bits, because it can hold any one of four nucleotides, which works out to about 750 MB of code, not including all the state information that gets passed on by mom’s egg cell.  That’s analogous to millions of lines of code, roughly the scale of the Linux kernel.
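Spelling out that back-of-the-napkin arithmetic:

base_pairs = 3_000_000_000            # approximate human genome size
bits = base_pairs * 2                 # four possible nucleotides -> 2 bits per base pair
megabytes = bits / 8 / 1_000_000      # bits -> bytes -> decimal megabytes
print(f"{megabytes:.0f} MB of raw sequence data")   # 750 MB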

Suffice it to say that a back-of-the-napkin estimate suggests that, computationally, a single cell may give our most powerful supercomputers a run for their money, and that the mechanisms cells use to communicate with one another in a brain are vastly more complex than our simplistic ideas about neural networks capture.  Furthermore, the secret sauce of consciousness is probably floating inside and between our (glial) cells, driving all of their internal innovation as well as our external ability to code and write very creative, pseudo-scientific blog articles.

Comments

5 Comments

  1. What you say makes sense until you start veering into talking about body-wide brains. Animals with more cells aren’t smarter than us. Chopping off an arm doesn’t make someone dumber; that arm didn’t supply anything that benefits intelligence in any way (other than a tool for the brain to defend itself and remotely learn about the world). All the nutrient/info sorting your arm muscles do is to benefit the core movement purpose of the arm. After losing an arm people get phantom pains and can still feel like they’re moving it since the inverse kinematics commands are molded into a significant portion of the brain. Losing an arm allows the brain to eventually remold itself and use that space for other stuff (maybe more accurate IK for other body parts since I think I read that part of the brain is dedicated).

    Seems kinda obvious so I don’t understand your angle.

    • Didn’t veer into any body-wide brain stuff. I was referring to the function of the glial cells that make up 90% of the brain but do not engage in electrical signaling. It’s from an earlier blog article on the subject. I’m asserting that glial cells play a much bigger role in thought than previously imagined.

  2. Your point about real-random versus ignorance-induced-random is well taken; it applies at levels coarser than statistical and quantum mechanics. See here for a classic example of a ‘statistical community’ that’s deterministic if you actually view it closely: https://journals.aps.org/prx/pdf/10.1103/PhysRevX.5.041014. This is perhaps not surprising, since things are deterministic when they get big enough to fall under newtonian mechanics.

    More interesting is your point about turbulence – which is ‘deterministic chaos.’ Deterministic chaos is very interesting because it looks ‘random’ to the ignorant, but can contain insanely precise information about the initial conditions if you know how to analyze the trajectories. We currently don’t know how, but maybe cells do, and use chaos to do things with extreme precision. Some work along these lines was done by Sussman’s students: https://dspace.mit.edu/handle/1721.1/5953; https://dspace.mit.edu/handle/1721.1/7060

    I’ll respond to your points about intracellular computation in a second comment.

  3. A great link a friend sent me on the hidden “structure” of noise. In this case that random walks converge on Gaussian distribution curves.
    http://www.decisionsciencenews.com/2017/06/19/counterintuitive-problem-everyone-room-keeps-giving-dollars-random-others-youll-never-guess-happens-next/
