The Evolution and Education of Artificial Minds

After space travel, one of the most loved themes of science fiction is robots.  Many people, going back centuries, have imagined creating artificial people.  Writers of robot stories have seldom explored the technical details behind what it means to create a thinking being; they just assumed it would be done – in the future.  Since the 1950s, artificial intelligence has been a real academic pursuit, and even though scientists have produced machines that can play chess and Jeopardy, many people doubt the possibility of ever building a machine that knows it’s playing chess or Jeopardy.

I disagree, although I have no proof or authority to say so.  Let’s just say that if I were to bet money on which will come first, a self-aware thinking machine or a successful manned mission to Mars, I’d put my money on the arrival of thinking machines.  I’m hoping for both sometime before I die, and I’m 61.

There is a certain amount of basic logic involved in predicting intelligent machines.  If the human mind evolved through random events in nature, and intelligence emerged as a byproduct of ever-growing biological complexity, then it’s easy to suggest that machine intelligence can evolve out of the development of ever-growing computer complexity.

However, there’s talk on the net about the limits of high performance computing (HPC), and the barriers to scaling it larger – see “Power-mad HPC fans told: No exascale for you – for at least 8 years” by Dan Olds at The Register.  The current world’s largest computer needs 8 megawatts to crank out 18 petaflops, but scaling it up to an exaflop machine would require on the order of 450 megawatts of power, or a $450 million annual power bill.  And if current supercomputers aren’t as smart as a human, and cost millions to run, is it very likely we’ll ever have AI machines or android robots that can think like a man?  It makes it damn hard to believe in the Singularity.  But I do.  I believe intelligent machines are one science fictional dream within our grasp.
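For what it’s worth, here’s the back-of-the-envelope arithmetic behind that scaling claim; the electricity rate is my own ballpark assumption, not a figure from the article:

```python
# Naive linear scaling of Titan's power draw up to an exaflop.
# Figures from the article: roughly 18 petaflops at 8 megawatts.
# The electricity rate below is an assumption, not a quoted number.

titan_pflops = 18            # Titan's speed in petaflops
titan_megawatts = 8          # Titan's power draw
exaflop_in_pflops = 1000     # 1 exaflop = 1000 petaflops

scale = exaflop_in_pflops / titan_pflops         # ~56x more compute
exa_megawatts = titan_megawatts * scale          # ~444 MW, assuming no efficiency gains

rate_per_kwh = 0.115                             # assumed $/kWh
hours_per_year = 24 * 365
annual_bill = exa_megawatts * 1000 * hours_per_year * rate_per_kwh

print(f"Exaflop power draw: {exa_megawatts:.0f} MW")      # ~444 MW
print(f"Annual power bill: ${annual_bill / 1e6:.0f}M")    # ~$450M
```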

[Photo: the Titan supercomputer]

Titan is the current speed demon of supercomputers, and it fills 4,352 square feet of floor space.  Even if all its power could be squeezed into a box the size of our heads, it wouldn’t be considered intelligent, not in the way we define human intelligence.  No human could calculate what Titan does, but it’s still considered dumb by human standards of awareness.  However, I think it’s wrong to think the road to artificial awareness lies down the supercomputer path.  Supercomputers can’t even do what a cockroach does cognitively.  They weren’t meant to either.

It’s obvious that our brains aren’t digital computers.  Our brains process patterns and are composed of many subsystems, and the whole is greater than the sum of its parts.  Self-aware consciousness seems to be a byproduct of evolutionary development.  The universe has always been an interaction between its countless parts.  At first it was just subatomic particles.  Over time the elements were created.  Then molecules, which led to chemistry.  Along the way biology developed.  As living forms progressed through the unfolding of evolutionary permutations, various forms of sensory organs developed to explore the surrounding reality.  Slowly the awareness of self emerged.

There are folks who believe artificial minds can’t be created because minds are souls, and souls come from outside of physical reality.  I don’t believe this.  One piece of evidence I can offer is that we can alter minds by altering their physical bodies.

To create artificial beings with self-awareness we’ll need to create robots with senses and pattern recognition systems.  My guess is this will take far less computing power than people currently imagine.  I think the human brain is based on simple tricks we’ve yet to discover.  It’s three pounds of gray goo, not magic.

Human brains don’t process information anywhere near as fast as computers.  We shouldn’t need exascale supercomputers to recreate human brains in silicon.  We need a machine that can see, hear, touch, smell, taste, and can learn a language.  Smell, touch and taste might not be essential.  One thing I seldom see discussed is learning.  It takes years for a human to develop into a thinking being.  Years of processing patterns into words and memories.  If we didn’t have language and memory, would we even be self-aware?  If it takes us five years to learn to think like a five-year-old, how long will it take a machine?
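As a concrete, if crude, illustration of what “processing patterns” means, here is a toy perceptron, roughly the simplest trainable pattern recognizer.  I’m not claiming brains work this way; it’s just a minimal sketch of learning from examples:

```python
# A toy perceptron: it nudges its weights whenever it misclassifies
# an example.  An illustration of pattern learning, not a brain model.

def train(examples, epochs=25, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            predicted = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = label - predicted          # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Patterns: the label is 1 when either input is "on" (logical OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
for x, label in data:
    out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    print(x, "->", out, "expected", label)
```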

And if scientists spend years raising up an artificial mind that thinks and is conscious, can we turn it off?  Will that be murder?  And if we turn it off and then back on, will it be the same conscious being as before?  How much of our self-awareness is memory?  Can we be a personality if we only have awareness of the moment?  Won’t self-awareness need a kind of memory that’s different from hard-drive-type memory?

I believe intelligent, self-aware machines could emerge in our lifetimes, if we all live long enough.  I doubt we’ll see them by 2025, but maybe by 2050.  Science fiction has long imagined first contact with an intelligent species from outer space, but what if we make first contact with beings we created here on Earth? How will that impact society?

There have been thousands of science fiction stories about artificial minds, but I’m not sure many of them are realistic.  The ones I like best are:  When HARLIE Was One by David Gerrold, Galatea 2.2 by Richard Powers, and the Wake, Watch, Wonder trilogy by Robert J. Sawyer.

[Book covers: When HARLIE Was One, Galatea 2.2, Wake]

These books imagine the waking of artificial minds, and their growth and development.  Back in the 1940s Isaac Asimov suggested the positronic brain.  He assumed we’d program the mechanical brain.  I believe we’ll develop a cybernetic brain that can learn, and through interacting with reality, will develop a mind and eventually become self-aware.  What we need is a cybercortex to match our neocortex.  We won’t need an equivalent for the amygdala, because without biology our machine won’t need those kinds of emotions (fear, lust, anger, etc.).  I do imagine our machine will develop intellectual emotions (curiosity, ambition, serenity, etc.).  An interesting philosophical question:  Can there be love without sex?  Maybe there are a hundred kinds of love, some of which artificial minds might explore.  And I assume the new cyber brains might feel things we never will.

In the 19th century there were people who imagined heavier-than-air flight long before it happened.  Now, I’m not talking about prophecy.  Most people before October 4, 1957 would not have believed that man would land on the Moon by 1969.  I suppose we can pat science fiction on the back for preparing people for the future and inspiring inventors, but I don’t know if that’s fair.  Rockets and robots would have been invented without science fiction, but science fiction lets the masses play with emerging concepts, preparing them for social change.

My guess is a cybercortex will be invented accidentally sometime soon, leading to intelligent robots that will impact society the way the iPhone did.  These machines with the ability to learn generalized behavior might not be self-aware at first, but they will be smart enough to do real work – work humans like to do now.  And we’ll let them.  For some reason, we never say no to progress.

I’m not really concerned about cybernetic doctors and lawyers.  I’m curious what beings with minds that are 2x, 5x, 10x or 100x smarter than us will do with their great intelligence.  I do not fear AI minds wiping us out.  I’m more worried that they might say, “Want me to fix that global warming problem you have?”  Or, “Do you want me to tell you the equations for the grand unified theory?”

How will we feel if we’re not the smartest dog around?

JWH – 5/19/13

Robot and Frank – The Best Science Fiction Film Since Gattaca

When I was growing up in the 1950s I was sure flying in a spaceship would be in my future.

Now that I’m getting old, I’m wondering if a robot will be my companion for my waning days of life.

Robot and Frank is a little movie about a man coming undone.  That’s what getting old and dying is all about, coming undone.  Whether we spend our last days in dementia is a matter of luck.  Frank, an ex-con and jewel thief, played by Frank Langella, is not so lucky.  His mind is unraveling too.  Frank lives alone and barely makes do.  Frank’s son, played by James Marsden, must drive ten hours to check up on Frank every weekend, neglecting his own family.  His solution?  Give Frank a robot.

[Poster: Robot and Frank]

Most science fiction fans will not think Robot and Frank much of a science fiction movie; there are no explosions, chases, superheroes or saving the world.  No one even saves Frank from dementia.  So why do I claim this is the best science fiction film since Gattaca?  This is a story Isaac Asimov could have written for Astounding Science Fiction in the 1940s.  As far as I can tell, this little robot, which is never given a name other than robot, follows all three laws of robotics.

But Robot and Frank is more than a modern-day Asimovian tale.  The film explores what it means to be a human losing his intelligence while a robot is gaining its awareness.  Robot and Frank is not sentimental, or even particularly cute.  This is an adult story.  I wonder if anyone under 50 will even understand it.  Unless you’ve experienced memory loss, unless you’ve cared for a dying parent, unless you have firsthand experience of becoming helpless, I doubt you’ll empathize much with Frank.  Robot and Frank is for an audience that has often said, “I’m having a senior moment.”

Oh, don’t worry, there’s enough of a story for a person of any age to enjoy this delightful movie, but I tend to think only those of a certain age will feel deeply moved.  Middle-aged viewers might be horrified by the fear that they will one day have to care for their aging parents, and I bet some of them might watch the film and think about opening a savings account to start collecting money to buy a robot.  I know I wondered if saving for a robot might be a better use of money than paying into nursing home insurance.  The Japanese are working full steam ahead on developing androids.

Robot and Frank is set only slightly in the future.  The closing credits show clips of real robots being tested.  However, the mind of the robot in this film is very far from what we can create now.  That’s why the film is science fiction.  The robot is halfway to Data from Star Trek.  Somewhere between R2-D2 and C-3PO.  I don’t know if we need to reach the Singularity to get this kind of intelligence in a helper bot, but I don’t think it’s in the near future.  Maybe 2025?  I’ll turn 74 that year.

When you watch Robot and Frank, you’ll have to ask yourself, “Will I be happier with a robot or a human caretaker?”  At first you think the son and daughter are shirking their duty, but by the end of the film, you might change your mind.  Frank gets quite attached to robot, and spends a lot of time talking to it.  But who or what is he talking to?  For that matter, who or what is Frank talking to when his son or daughter is with him?  What is consciousness?  When we’re alone, and our days are dwindling, what kind of companion do we really want?  Are we wanting to listen, or are we wanting to be listened to?

Yes, what we want is a spouse we’ve spent our whole life with.  After that we want our children.  But what if we don’t have children, or a spouse?  Is a personal robot better than an impersonal nurse?  Robot is able to observe and understand Frank.  And isn’t that what we’ll want?  Someone to know where we’re at, no matter how Swiss-cheesy our memory becomes?

I found Robot and Frank tremendously uplifting.  I left the theater feeling mentally accelerated and physically better than when I walked in.   We will all come undone.  We will all have to deal with it.  Suicide is one way to avoid the issue, but this movie doesn’t consider that path.  Frank’s mind keeps unraveling, but he lives for moments of being himself.  The movie suggests a robot might help find those moments.

JWH – 9/17/12

Why Humans Won’t Be the God of Robots

There’s a scene in the film Prometheus where an android asks a human why he would want to meet his maker.  The human replies that he’d like to ask his maker why he made him.  So the android says to the human, “Why did you make me?”  And the human replies, “Because we could.”  And the android then asks, “Will that answer be good enough for you?”

Science fiction has always loved the motif of man being the God of robots and AI machines – but I don’t think that will be true.  Not because artificial intelligence can’t exist, but because of how AI will evolve.

Please read “’A Perfect and Beautiful Machine’: What Darwin’s Theory of Evolution Reveals About Artificial Intelligence” by Daniel C. Dennett at The Atlantic.  No, really, take the time to read this essay if you are at all interested in artificial intelligence, because it’s an elegant essay about how AI will evolve.  It’s also a unique comparison of Charles Darwin and Alan Turing that draws out ideas I’ve never read or thought about before, especially about the nature of evolution.  But for those who won’t take the time to read the article, I’ll summarize.  Darwin’s theory of evolution, according to Dennett, proves that God or an intelligent designer didn’t create life on Earth.  And Turing, with his Turing machine, proves that computers can produce creative output with no intelligent mind at all.  What I get from this is that simplicity can produce complexity.

But back to AI and robots.  For a long time we’ve thought we could program our way to artificial intelligence.  That once we learned how intelligence worked, we could write a program that allowed machines to be smart and aware like humans.  The belief was that if random events in physics, chemistry and biology could produce us, why couldn’t we create life in silicon by our own intelligent design?

The solution to AI has always been elusive.  Time and again we’ve invented machines that could do smart things without being smart.  Machine self-awareness is always just over the horizon.

What Dennett is suggesting is that artificial intelligence won’t come from our intelligent designs, but from programs evolving in the same kind of mindless way that we evolved out of the organic elements of the Earth.  Humans can create the context for AI’s creation – humans can be the amino acids – but they can’t be the designers.  The programs that produce AI need a context in which to evolve on their own.  In other words, we need to invent an ecosystem for computer programs to develop and evolve on their own.  How that will work I have no idea.
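The closest hint I can offer is a toy genetic algorithm: blind variation plus selection arriving at a “designed” result with no designer in the loop.  The target string and fitness function below are my stand-ins for a real environment, so this is only a sketch of the flavor of the idea:

```python
import random
import string

# A toy "ecosystem": blind variation plus selection, with no designer.
# The target string stands in for an environment, and fitness is just
# how well an organism (a string) fits it.  Pure illustration.

TARGET = "thinking machine"
ALPHABET = string.ascii_lowercase + " "

def fitness(organism):
    return sum(a == b for a, b in zip(organism, TARGET))

def mutate(organism, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in organism)

# Start from pure randomness.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"Evolved {population[0]!r} in {generation} generations")
        break
    survivors = population[:50]                        # selection
    population = [mutate(random.choice(survivors))     # reproduction with variation
                  for _ in range(200)]
```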

This means we’ll never get to code in Asimov’s Three Laws of Robotics.  It also suggests that complexity doesn’t come from complexity, but from the creative power of non-intelligent design.  There’s a lot to this.

I’m also reading Imagine by Jonah Lehrer, and it discusses how creativity often comes from our unconscious mind, and through group interaction.  Often creative ideas burst out in an Ah-Ha! moment after we have digested the facts, chewed them over, worried, given up and then forgotten about the problem.  We are not even the God of our own thoughts and creativity.  Our seemingly intelligent designs come from the randomness of an evolution-like process.

[Book cover: Imagine by Jonah Lehrer]

Time and again the Lehrer book talks about creativity coming from process and not individual expression.  If you combine what Dennett and Lehrer are saying, you catch a whiff of spookiness about unconscious forces at play in our minds and life in general.  Conscious thinking becomes less impressive because it’s only the tip of the iceberg, surfing on the deep waves of the unconscious mind.  Evolution is a blind force of statistics.  Is creativity just another blind force like evolution?

If Dennett is right, our conscious minds will never be powerful enough to conceive of an artificial mind.  And Dennett also says that Charles Darwin, by coming up with the theory of evolution, indirectly proved that a God couldn’t have created us whole in a divine mind.  If you think about all of this enough, you’ll start to see that this is saying something new.  It’s a new paradigm, like the Copernican revolution.  We’re not the center of the universe, and now conscious thought is not the crown of creation.

[I didn’t write this.  Thousands of books that I’ve read did.]

JWH – 6/28/12

The Implications of Watson

Watson, the supercomputer contestant on Jeopardy this week, represents a stunning achievement in computer programming.  People not familiar with computers, programming and natural language processing will have no clue how impressive Watson’s performance is, but it has far-reaching implications.  Jeopardy is the perfect challenge for demonstrating the machine’s ability to process English.  The game requires understanding allusions, puns, puzzles, alliterations – almost every kind of wordplay.  This might look like a smart gimmick to get IBM publicity, but it’s so much more.

Computers can process information if it’s formatted and carefully structured – but most of the world’s knowledge is outside the range of a SQL query.  Watson is a machine designed to take in information like we do, through natural language.  When it succeeds it will be a more magnificent achievement than landing men on the moon.
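Here’s a small sketch of that distinction; the table and the prose passage are invented for illustration:

```python
import sqlite3

# Structured knowledge: the schema makes the question answerable by SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE presidents (name TEXT, took_office INTEGER)")
db.execute("INSERT INTO presidents VALUES ('Abraham Lincoln', 1861)")
row = db.execute(
    "SELECT name FROM presidents WHERE took_office = 1861").fetchone()
print(row[0])  # trivial, because the structure was given up front

# Unstructured knowledge: the same fact buried in prose.  There is no
# schema to query; a Watson has to parse the English itself.
passage = "Lincoln took the oath of office in 1861, as war loomed."
question = "Who became president in 1861?"
# Answering this takes natural language processing, not a SELECT statement.
```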

While I was watching the intro to the second day’s show and listening to the designers of Watson, I felt rather humbled by my puny knowledge of computers.  I felt like a dog looking up at my master.  Most people like to think they are smart and intelligent, but when they meet people with brains that far exceed their own, it’s troubling.  A great novel about this is Empire Star by Samuel R. Delany.  It’s about a young poet who thinks he’s having original experiences until he meets an older poet who has already done everything the younger man has.

How will we feel when the world is full of Watsons and they are the intellectual giants and we’re the lab rats?  IBM built Watson to data mine natural language repositories – think libraries, the Internet, or NSA spying.  The descendants of Watson will be able to write papers that leave human PhD candidates in the dust.  One of the Watson designers said they built Watson to handle information overload.  Of course he assumed Watson would be a tool like a hammer and humans would be in control – but will it always be that way?

Watson cannot see or hear, but there are other AI researchers working on those problems.  We’re very close to having machines like those in The Moon is a Harsh Mistress or When H.A.R.L.I.E. Was One or Galatea 2.2.  Right now Watson is way too big to put into a robot body so he will live immobile like HAL and WOPR, but that will change too.

Real life has seldom caught up with the wild imaginations of science fiction.  I had hoped manned exploration of the solar system would have happened in my lifetime, but that is not meant to be.  I’m starting to wonder if robots and intelligent machines will.  What will that mean?  I don’t think there is any going back; we just have to surf the changes.

NOVA has an excellent overview of Watson that you can watch online.

JWH – 2/15/11

Are Smartphones Nanocomputers?

Young people will probably not know this, but back in the 1970s personal computers were called microcomputers.  The dinosaurs of computing, mainframes, were huge, some as big as houses, and cost millions.  Then in the 1960s newer, smaller computers started coming out that were dubbed minicomputers.  These were still too expensive to be personal, but they were cheap enough that they spread like gossip.  So when even smaller computers came out in the 1970s they were dubbed microcomputers.  These eventually became cheap enough for almost everyone to own one.

Now most people think of their smartphone as a phone, but it’s really a computer, just a very small one, so why not consider smartphones the next paradigm of computing and call them nanocomputers?  I doubt smartphones have any actual nanotechnology in them (though they might), but nano is obviously the next label in the series, so why not use it?  Of course, what will picocomputers be like?  Nanocomputers are a planned concept, and smartphones might eventually use real nanotechnology, so the name might be a self-fulfilling prophecy.

In the current vernacular, a “PC” is a Windows based computer.  PC used to stand for personal computer, and in the old days all microcomputers were PCs, even ones from Apple.  Somewhere along the way it became the PC versus Mac.  The smartphone is even more personal than the original PC because people actually carry them on their person.  We could call the smartphone a pocket computer, but that would be another PC acronym.

We could also call the smartphone the hand computer, following the labels of desktop and laptop computers.  The term handheld was in use for a while, but it doesn’t quite work.

So why do I object to the phrase “smartphone” when it’s already so popular?  Because it’s rather limiting to think of the device as a phone.  Steve Jobs and Apple have done a wonderful job with the iPhone by creating a new category of pocket computer with hundreds of thousands of applications.  The phone part is just one of those applications, so why should it get top billing?

Already iOS phones and tablets have garnered over 1% of net user market share, competing with both Windows and Mac operating systems.

iPhones and Androids are quickly evolving into what I dreamed of having, an auxiliary brain.  Cellphones are about as close as we’ll ever get to telepathy.  Their GPS features give us a homing-pigeon-like directional sense.  Adding still and video cameras broadens their versatility, creating new concrete forms of memory.  The device is obviously more than a phone.

In the 1980s it was all the rage for schools to offer computer literacy courses to help the public understand the impact of the microcomputer on society.   Nanocomputers are bought and used without any training and no one talks about computer literacy anymore.  But do we understand the true impact of the nanocomputer?

Take this one example.  Public opinion pollsters are worried that telephone polls are now skewed because only certain types of people still have a landline phone, which is the only kind they can poll.  Now, I don’t ever want pollsters to be able to call cell phone numbers, but what if nanocomputer users could elect to have a polling app, so whenever they felt like it, they could respond to various kinds of polls?

What if nanocomputers became so uniquely customized to their owners that they could be used to verify the identity of the user?  Nanocomputers could then be used as voting booths, and that would lead to their use for referendums.  By this thinking we should see these devices as extensions of our body.  We can already network the ear with a Bluetooth headset.  What if we connected nanocomputers to sensors inside our body?  As we integrate nanocomputers with our bodies, when do they become part of us?

And more importantly, how do we become part of them?  I now spend more time in front of a computer than I do sleeping.  Computers dominate my life, and the same goes for most people.  When do we start thinking of them as a prosthesis?  Aren’t they becoming enhancements for our brains – prosthetic minds?  We should think of nanocomputers as body enhancements that are leading us towards group minds.

The idea of wearable computers has been around for decades.  Most people thought such a concept was dorky, but now most people carry around one or more computers with them all the time.  Even a normal dumb cell phone is a computer, and so are MP3 players, game units, tablets, calculators, GPSes, digital cameras, ebook readers, etc.  How long before it becomes obvious that the most convenient way to carry a nanocomputer is by wearing it?  Many people wear their Bluetooth headsets all the time now.  When will glasses and hearing aids be networked with the nanocomputer?

We need to get away from thinking of nanocomputers as phones and start thinking of them as cybernetic enhancements to our bodies and minds.  So when did the Borg assimilate us?  When you think about it, Bluetooth headsets look like the first sprouting of Borgware.

[Image: the Borg]

JWH – 10/28/10