Artificial Ignorance

Posted on April 29, 2017 by TheSaint in Artificial Life

Finally a few minutes to myself to write a blog. After many months of lying low, the subject of Artificial Intelligence in the media has really been driving me crazy. The recent hype and click-bait about Artificial Intelligence being on the verge of taking over the world and wiping out all employment is a constant assault on my own intelligence, and a constant reminder that replacing human intelligence may be easier than I initially thought, given what morons our allegedly greatest scientific minds appear to be on the subject.

*Stephen Hawking circa 2100 AD

Here we have the world’s greatest living genius Stephen Hawking boldly proclaiming that:

“The development of full artificial intelligence could spell the end of the human race.”

http://www.bbc.com/news/technology-30290540

Then we have the world’s richest geniuses, Elon Musk and Bill Gates, making similar dire predictions about the threat of Skynet to human existence and employment. I know they are really intelligent people; maybe it’s a modern PR ploy for staying relevant with the media, but it’s really just nonsense and they probably know it. The human species is nowhere near creating anything even remotely resembling human intelligence… not even close. Yes, we are making bigger and bigger calculators that are better and better at identifying complex patterns than we are, but NONE of that is relevant to achieving actual intelligence. It’s just bigger, faster tic-tac-toe. Even if we had an infinitely fast computer with infinite memory, we would still have no idea how to program it to actually be intelligent like a human being. WE DON’T KNOW HOW OUR MIND WORKS, so we sure as hell aren’t close to programming a computer to behave like one.

Almost all the geniuses working on AI at Google, Microsoft, and every VC-funded AI startup are working with neural networks, which are just human-programmed tools for finding patterns in complex data sets. Neural networks don’t have any ideas about those patterns; they don’t organize those patterns into complex thoughts; they don’t make observations about those patterns that give them any insight into life, the Universe, or anything meaningful; they just spot and categorize patterns. If they actually succeeded in constructing a neural network so large and complex that it could pass a Turing test (which is plausible), they would still have made ZERO progress towards understanding human intelligence, because such a machine would be just as intelligent as a tape recording of an authentic human voice. Recording and imitating human behavior patterns by essentially creating a giant lookup table of plausible responses to every possible input is not the same as mastering the process of actual human learning and abstract reasoning.
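The "giant lookup table" point above can be caricatured in a few lines of toy code. Nothing here is any real system; the patterns and replies are invented purely to illustrate that replaying canned responses can sound plausible without any understanding behind it:

```python
# Toy illustration: a "chatbot" that maps recognized input patterns to
# canned replies. No reasoning happens anywhere; it normalizes the input,
# matches a stored pattern, and replays the associated response.
CANNED_RESPONSES = {
    "how are you": "I'm fine, thanks for asking!",
    "what is the meaning of life": "42, obviously.",
}

def respond(utterance: str) -> str:
    key = utterance.lower().strip("?!. ")
    # Anything not in the table falls back to a stock deflection.
    return CANNED_RESPONSES.get(key, "Interesting, tell me more.")

print(respond("How are you?"))         # matched pattern replays its canned line
print(respond("Prove something new.")) # unseen input gets the stock deflection
```

Scale the table up a billionfold and the replies get more convincing, but the mechanism is still retrieval, not thought.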

We may well build computers with so much memory and processing power that it becomes possible to record the entire range of human behaviors long before we actually grasp how human intelligence works. This is not really progress, just a neat computational parlor trick that will not take any jobs that weren’t already exported to India, China or Mexico. By the time such a computational feat is possible, India, China and Mexico’s economies will have evolved enough for them to also be happy to export those jobs to Skynet. Another dire threat to our lives and employment is that autonomous robots imbued with sufficiently versatile humanoid navigation systems will replace us at housekeeping, apple picking and garage reorganization… as though anybody living in first world countries will miss those exciting job opportunities as well.

Our brightest minds would have us believe that imitating and recording menial thoughts and behaviors is equivalent to being intelligent, which tells us more about what they think of us than what they actually know about artificial intelligence. ACTUAL artificial intelligence must ALSO be able to self-program, to identify and solve abstract problems without being directed to do so, and to make observations about the world and draw new conclusions from data or situations it has not been previously trained to recognize. Neural networks and today’s deep learning systems are not a step in the direction of doing any of these things. Massive permutation searches for solutions to finite-state problems are probably NOT how a brain works.
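The "permutation search" framing can itself be sketched in miniature: brute-force over a tiny grid of weights until a single threshold unit reproduces logical AND. Real networks are trained by gradient descent rather than enumeration, and every name and value below is made up for illustration, but the spirit is the same: searching weight configurations that map given inputs to desired outputs.

```python
from itertools import product

# Inputs and targets for the logical AND function.
INPUTS  = [(0, 0), (0, 1), (1, 0), (1, 1)]
TARGETS = [0, 0, 0, 1]

def unit(w1, w2, bias, x):
    """A single threshold 'neuron': fires iff the weighted sum exceeds zero."""
    return 1 if w1 * x[0] + w2 * x[1] + bias > 0 else 0

# Exhaustively search a small discrete grid of weight configurations.
GRID = [-1.0, -0.5, 0.0, 0.5, 1.0]
solution = next(
    (w1, w2, b)
    for w1, w2, b in product(GRID, repeat=3)
    if all(unit(w1, w2, b, x) == t for x, t in zip(INPUTS, TARGETS))
)
print("found weights:", solution)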

It’s easy for pointy-headed computer scientists like myself to blather on about our deep insights into the nature of AI in the absence of any evidence that ANY of us are actually creating anything really intelligent, but there is empirical evidence that everybody is pretty much on the completely wrong track. Brains aren’t used to construct other human brains; cells do that work. Since our simulations of neural networks don’t seem to have ideas about how to cope or innovate in the face of untrained knowledge, our model of thought based on these mechanics is probably flawed. The essence of intelligence may emerge from the machinery inside our cells, not from the network of wires they form.

To summarize the blog of an actual smart person who really understands the complexity of computationally simulating a human mind:

https://rbharath.github.io/the-ferocious-complexity-of-the-cell/

We are well over 100 years away from achieving true artificial intelligence. The best we can realistically hope for is that within 20-30 years we will be able to computationally simulate a complete single-cell organism. Until then, we really only need to worry about losing all of our precious short-order cook jobs to ten-million-dollar androids…

Comments

14 Comments

  1. I don’t think anyone is seriously claiming we are anywhere close to artificially creating actual intelligence, so arguing against it seems like beating a dead horse. Having influential tech people suggest we are is unfortunate, but what’re you gonna do, the PR machinery must never stop.

    A few months ago “Could a Neuroscientist Understand a Microprocessor?” generated some buzz and put current neuroscience practices in their place: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268

    Is it foreseeable how far our “big and fast tic-tac-toe players” can be pushed with current (non-intelligent) techniques? In the next 20-30 years maybe? Could be far enough to make the efforts of pursuing “actual artificial intelligence” pointless or at least uninteresting.

    • Well just Mark Cuban, Bill Gates, Stephen Hawking, Elon Musk and of course Ray Kurzweil… nobody credible… are loudly asserting that we are about to create intelligence superior to our own that will displace us. They’re actually predicting MORE than AI. I’ve spoken to very senior government officials recently who absolutely believe that they need to prepare for Skynet within a few years. It’s becoming gospel in the minds of a lot of important people, which can be pretty dangerous especially when those of us in the “know” agree that it’s not a danger.

      It’s only an interesting achievement if they discover that a huge tic-tac-toe player suddenly learns to self-identify problems and solve them through invention of new machines and code. Do you think that is what will happen? Is invention capturable with a lookup table?

      Good article too, love the approach.

  2. Alex, this time you managed to make more wrong claims than right ones.

    The wrong:

    > bigger calculators that are better and better at identifying complex patterns than we are, but NONE of that is relevant to achieving actual intelligence

    Of course identifying complex patterns is relevant to achieving actual intelligence. Pattern recognition is an important component of AGI. On the other hand, pattern recognition by itself is not enough to create an AGI (artificial general intelligence) system.

    > neural networks which are just human programmed tools for finding patterns in complex data sets

    That is incorrect. Modern neural networks not only find patterns in complex data sets, but are also able to accomplish predefined goals that developers coded into them. Such goals can include spam elimination or ad serving (similar to how human goals are pain avoidance and appetite for food).

    > WE DON’T KNOW HOW OUR MIND WORKS

    We do know how our mind works, at least in general principle.
    The scientific consensus is not fully established yet, but that is not necessary. The few researchers who got that principle right are already delivering working intelligent systems. For now these are narrow artificial intelligence systems, but with time they are stepping into more and more general territory.

    > ACTUAL artificial intelligence must ALSO be able to self-program

    Wrong.
    Human intelligence is not able to self-program. Why are you setting the bar so much higher for artificial intelligence?

    > Neural networks and today’s deep learning systems are not a step in the direction of doing any of these things. (1)
    > Massive permutation searches for solutions to finite state problems is probably NOT how a brain works. (2)

    You imply that (2) follows from (1).
    That is a logical fallacy.
    Neural networks vary a lot. Many of them are NOT “massive permutation searches”.

    > The essence of intelligence may emerge from the machinery inside our cells not from the network of wires they form.

    Both “cells” (neurons) and “wires” (axons) are important for an intelligent system.

    By the way, both could be modeled logically, e.g. a “Neuron” table and an “Axon” table in a relational database.
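    A minimal sketch of that relational framing, using Python’s built-in sqlite3 module. All table and column names here are invented for illustration; the point is only that storing the graph is the trivial part, while the dynamics are not:

```python
import sqlite3

# "Neuron" and "Axon" tables in an in-memory relational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE neuron (id INTEGER PRIMARY KEY, activation REAL)")
db.execute("CREATE TABLE axon (src INTEGER, dst INTEGER, weight REAL)")

# Two neurons feeding a third, with made-up activations and weights.
db.executemany("INSERT INTO neuron VALUES (?, ?)", [(1, 1.0), (2, 0.5), (3, 0.0)])
db.executemany("INSERT INTO axon VALUES (?, ?, ?)", [(1, 3, 0.8), (2, 3, -0.4)])

# One "propagation step" for neuron 3: the weighted sum of its inputs,
# i.e. 1.0*0.8 + 0.5*(-0.4), which is 0.6 up to floating-point rounding.
(total,) = db.execute(
    "SELECT SUM(n.activation * a.weight) "
    "FROM axon a JOIN neuron n ON n.id = a.src "
    "WHERE a.dst = 3"
).fetchone()
print(total)
```

    Nothing in the schema says how activations should evolve to produce thought; representing the wiring and understanding the mind remain very different problems.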

    > absence of any evidence that ANY of us are actually creating anything really intelligent

    There is plenty of evidence that some of us are building really intelligent systems.
    Consider:
    1) Google Search (which by itself consists of intelligent crawler, intelligent parser, and intelligent system that serves search results).
    2) YouTube relevant video suggestions.
    3) Automatic advertising systems (like DoubleClick, Facebook advertising or AdSense).
    4) Automated [stock] trading systems.
    5) [Sales] leads generation systems.
    All these are very practical tasks that require intelligence, and modern artificial systems actually accomplish these tasks.

    > We are well over 100 years away from achieving true artificial intelligence.

    That is possible.
    There are many moving parts to figure out here. Consider that minor deviations in how the human brain works can make people crazy and incapable of functioning intelligently.
    True AI developers will have to get every piece [mostly] right. That takes time.

    However, my own forecast is ~50 years, with a lot of very significant practical advancements in AI along the way.
    Software will become more and more intelligent with every passing year.

    • >>On the other hand, pattern recognition by itself is not enough to create AGI (artificial general intelligence) system
      right… and a tool that humans use to recognize patterns for human minds is no more an advance in intelligence itself than a hammer is close to becoming a tool that invents new tools. It’s not “progress” towards a dangerous Terminator-like AI.

      >>but also able to accomplish predefined goals that developers coded into these neural networks.
      right… which makes them indistinguishable from my C++ compiler: a great tool that I direct to process data and solve problems I specify… again… no progress towards a self-directing system.

      >>We do know how our mind works, at least in general principle.
      No… we really, really don’t. These clowns are actually making progress AWAY from understanding how minds work. Modern models of intelligence are laced with absolutely ludicrous and unsubstantiated assumptions about how our minds work. History is replete with scientists achieving consensus on how minds work and being absurdly wrong over and over again. Over 60% of the cells that make up our brains are glial cells that play a completely unknown role in mediating thought and are completely unrepresented in all neural networking models. So let me put you on an honest spectrum of just how extreme our ignorance of how a mind works really still is…
      https://www.scientificamerican.com/article/the-root-of-thought-what/

      >>Human intelligence is not able to self-program. Why are you setting the bar way higher for the artificial intelligence?
      We code and we make tools, we regularly program our own minds to solve new classes of problems we did not evolve the circuitry to deal with, and our brains self-assemble from cells following no static blueprint. You will not have human-like artificial intelligence until you have code that can self-improve and, more importantly, make abstract leaps of insight.

      >>Neural networks vary a lot. Many of them are NOT “massive permutation searches”.
      Any deterministic neural network boils down to a weighted permutation search for circuit configurations that produce a specific output for a given fuzzy input.

      >>Both “cells” (neurons) and “wires” (axons) are important for intelligent system.
      We actually don’t know that at all. If intelligence arises from cells, then nervous systems are just message-carrying symptoms of intelligence, not the cause. There are many seemingly intelligent organisms with no nervous system at all. Check out my much-read articles on amoebic slime molds. 🙂

      >>There is plenty of evidence that some of us are building really intelligent systems. Consider:

      The topic is not “are computers powerful computing tools?”; it’s “are we about to achieve human-like artificial intelligence?”

      >>However my own forecast is ~50 years. With a lot of very significant practical advancements of AI on the way there.
      If I were a betting man I would tend to agree. We’ll have insights that get us there faster than 100 years assuming there is no zombie apocalypse that throws a wrench in progress.

      >>Software would become more and more intelligent with every passing year.
      yeah again, just because progress took us from making hammers to automobiles doesn’t mean that our tools have gotten any closer to being like us. A realistic photo of a person is no closer to being a person than a crayon drawing.

      • 1) A hammer actually helps to build better tools.
        So the “pattern recognition tools vs hammer” comparison hints that pattern-recognition improvement is likely to help in building AGI.

        2) “makes them indistinguishable from my C++ compiler” – so what? A C++ compiler is also one of the tools on the path to AGI. But we need other tools too, and “pattern recognition” is one of them.

        3) ” These clowns are actually making progress AWAY from understanding how minds work.”
        What clowns do you mean in particular?
        Different AI researchers have different opinions. Most AI researchers are wrong (which is typical when a working solution has not been fully figured out yet), but many are on the right path.

        4) “Over 60% of the cells that make up our brains are glial cells that play a completely unknown role”
        So what? It only means we do not understand some of the low-level details.
        That does not mean that we do not understand the general principle of how human intelligence functions.

        5) “We code and we make tools”
        Yes. But:
        – Only a few of us code.
        – When we code, we do NOT self-program.
        Nevertheless, all humans are considered to have general intelligence.
        My point is that we should have similar qualification requirements for an AGI.

        6) “are we about to achieve human-like artificial intelligence”
        And the answer to that is “Yes”.
        With every passing year computers are taking over more and more intelligent tasks that only humans could do in the past.
        In addition to that, computers can do many intelligent things that human intelligence can NOT do.
        The progress is obvious.

        7) “just because progress took us from making hammers to automobiles doesn’t mean that our tools have gotten any closer to being like us”
        There is no guarantee, but quantity of changes slowly transforms into quality of changes, and new tools will eventually have enough features to qualify as AGI.

  3. If you haven’t seen it already, I guarantee you will enjoy this: “Superintelligence: The Idea That Eats Smart People” http://idlewords.com/talks/superintelligence.htm

    I agree with most of your post. Just one quibble: since we don’t know how the mind works, predicting it will take well over 100 years to achieve real AI seems about as ludicrous as predicting it for tomorrow…

    • wow, that is the best blog article ever, I can’t compete with that. Brilliant.

      Well, I’m going to quibble with your quibble: the context was about how much computing power it would take to solve quantum dynamics for every atom in a human body. The assumptions being:
      1) That we could, within 100 years, scan the position and energy state of every atom in a human body in order to simulate it
      2) That our mastery of quantum mechanics is such that, given the initial state of every atom in an adult human and a computer capable of simulating them, the result would equate to correctly simulating intelligence in a computer, even if we still had no idea how it worked.

      One might be persuaded that an atom scanner plus a powerful enough computer represents the outer bound of technological achievement necessary to achieve an unambiguous artificial intelligence without any further progress in gaining insight into how intelligence really works along the way.

  4. I think a lot of folks like to make statements and do things that keep them in the limelight. Also, in interviews their words are sometimes taken out of context because the interviewer doesn’t understand the nuances of the subject matter.

    Any algorithm that makes a decision based on some type of information could be construed as intelligent: a simple conditional statement qualifies. A weapon system coded with such basic branching instructions could have a bug that causes it to do the wrong thing. In short, bugs are a real danger. I think what Hawking and others are getting concerned about is that the algorithms that make decisions about the world are getting more complex, and therefore bugs are going to get harder to test for and fix before deployment. Furthermore, because their “flow control” often does not contain explicit, preordained controls in the code, but rather “evolves” based on inputs with trial-and-error training, you now have another layer of error: wrong training sets, poor training sets, etc.

    So what I took from what they said is that we may create systems that, due to plain old bugs, do the wrong thing and target their masters. Journalists like to take this, wittingly or unwittingly, and spin it into a narrative of intent within the machine.

  5. Hello Alex St. John,
    I was trying to comment on your
    http://www.alexstjohn.com/WP/2017/04/29/artificial-ignorance-2/
    blog-post complaining about the state of the art in AI, but it was not possible to log in with Google, only Facebook. (I was wrong 🙂

    http://ai.neocities.org/theory.html — is “How the Mind Works”, or the theoretical basis on which I have created six AI Minds in Rexx, Forth, JavaScript and Perl, thinking in English, German and Russian.

    http://strongai.quora.com/AI-Mind-Maintainer — is my recent article on how we will need to create a career-field of “AI Mind Maintainer.”

    That’s all for now,

    Sincerely,

    Arthur T. Murray

  6. Your arguments can be summed up as “we are a long way from achieving AGI.” Which I agree with.

    But then you also imply that Stephen Hawking et al. are foolish to be concerned. I don’t agree that this conclusion follows from the previous argument. If you are Hawking or Musk, the relevant question is whether humankind will EVER develop superhuman intelligence. And if we do, what will happen afterwards? And what can we do in the meantime to improve the odds of a good outcome? Even if it takes 10,000 years, there is still a strong argument to be made that we should start thinking of ways to stay safe, to direct AI development towards safe paths, if such exist. If we have more time than anticipated to come up with clever safety protocols, so much the better.

    But I don’t think it wise to dismiss existential threats, even if they won’t happen for a very long time.

    There is zero risk that humankind will abandon its quest to create AI. That is going to happen, one way or another, IF it is at all possible, whether it takes 100 years or 10,000 years. Most likely that will be a positive event, assuming it ever happens. Some people would like to help tilt the odds even more in our favor (of positive vs negative impact to humankind), and I appreciate their efforts.

    • Okay but let’s be honest, Hawking and Musk are not warning us about an existential threat they see in 10,000 years, they’re talking about 30-50. That said, I think people have the ideas all wrong. AI will become an extremely dangerous weapon in the hands of other human beings LOOOONG before there is any path to it becoming a self-motivated threat. In that respect, I think there is a misplaced fear of the tool and not who will be wielding it. Long before an AI thinks for itself it will be developed and deployed to act in the interests of the people who create it. I have further come to the conclusion that there is little danger that AI will ever get out of control. The relatively recent discovery of blockchain technology provides humanity with a powerful tool to perfectly control any digital mind. I would argue that unless somebody makes a dangerous AI on purpose, there is little chance that we will deliberately create AI’s that can possibly run amok. We now have tools that can prevent that eventuality.

