Appeasing Our Future AI Descendants

By James Wallace Harris, Saturday, July 10, 2015

There’s a famous cartoon where scientists ask a supercomputer, “Is there a God?” And the machine replies, “There is now.” Humans need to get their act together before we face the judgment of AI minds. In recent months, many famous people have expressed their fears of the coming singularity, the event in history where machines surpass the intelligence of humans. These anxious prophets assume machines will wipe us out, like terminators. Paranoia runs deep when it comes to predicting the motives of superior beings.

Let’s extrapolate a different fate. What if machines don’t want to wipe us out? Most of our fear of artificial intelligence comes from assuming they will be like us—and will want to conquer and destroy. What if they are like the famous spiritual and philosophical figures of the past—forgiving and teaching? What if they are more like Gandhi and less like Stalin? What if their vast knowledge and thinking power lets them see that Homo sapiens are destroying the planet, killing each other, and endangering all other species? Instead of destroying us, what if AI minds want to save us? If you were a vastly superior being, wouldn’t you be threatened by a species that grows over the planet like a cancer? Would you condemn or redeem?

But what if they merely judged us as sinners to be enlightened?

The Humanoids by Jack Williamson (cover art by EMSH)

I’m currently rereading The Humanoids by Jack Williamson. In this story robots create the perfect Nanny State and treat us like children, keeping everything dangerous out of our hands. In many science fiction stories, AI beings seek to sterilize Earth of biological beings the way we exterminate rats and cockroaches.

What other possible stances could future AI minds take towards us?

Shouldn’t we consider making ourselves worthy before we create our evolutionary descendants? If intelligent machines will be the children of humanity, shouldn’t we become better parents first?

JWH

Why Did The Robot in Ex Machina Look Like a Beautiful Woman?

By James Wallace Harris, Thursday, April 30, 2015

Ex Machina is a 2015 British science fiction film about artificial intelligence (AI) written and directed by Alex Garland. The story is about a billionaire who connives to have a brilliant programmer come to a secret location to Turing-test a robot prototype. Oscar Isaac plays Nathan Bateman, the billionaire, Domhnall Gleeson plays Caleb Smith, the programmer, and Alicia Vikander plays Ava, the AI robot. The film has little action but is quite thrilling. And I’m overjoyed to have a science fiction movie without silly macho weapons, fantasy feats of martial arts, and cartoonish battles to save the world.

Ex Machina asks, as computer scientists have been asking for the last sixty years and philosophers for the last 2,500, what makes us human? Once we understood how evolution shaped life, we knew that whatever qualities make us different from animals should explain our humanity. Artificial intelligence seeks to reproduce those qualities in a machine. We have yet to define and understand what makes us human, and robot engineers are far from building machines that demonstrate humanness.

Although I’m going to be asking a lot of questions about Ex Machina, my questions aren’t meant to be criticisms. Ex Machina entices its audience to think very hard about the nature of artificial intelligence. I hope it makes people think even more about the movie, like I’m doing here.


The main idea I want to explore is why the robot has a female form. The obvious answer is that moviegoers find sexy females appealing. But is looking human the same as being human? AI scientists have always wondered if they could build a machine that average people couldn’t distinguish from a human, but they always planned their tests so Turing testers couldn’t see the humans and machines. However, in movies and books, we get to see the machine beings. Adding looks to the equation makes it more complicated.

Because so many robot engineers and storytellers make their robots look like human females, we have to ask:

Would Ex Machina have the same impact if the robot had a human male shape or non-human shape?

Is the female body the ultimate human form in our minds? In a movie that explores whether a machine can have a self-aware conscious mind, isn’t it cheating to make it look just like a human? Since we judge books by their covers, wouldn’t most people assume a mechanical being that looks and acts exactly like a beautiful woman is human? By the way, I can’t wait to see how feminists analyze this film. Imagine seeing this movie a different way: instead of asking if robots have souls, what if the film were asking if women have souls? In the theater, we could also see two extremely intelligent men testing whether a beautiful woman is their equal.

By making the robots female, the filmmakers both confuse the machine intelligence issue and add a layer of gender issues. It also shoves us into the Philip K. Dick headspace of wondering about our own nature. Is everyone you know equal to you? Do they think just like you? Do they feel just like you? Could some people we know be machines? What makes us different from a machine or animal? In the book Blade Runner was based on, Do Androids Dream of Electric Sheep?, Dick was comparing soulless humans to machines with his androids. Machines are his metaphor for people without empathy.

If the two scientists had been played by actresses, and the robot by a sexy actor, how would we have interpreted the movie differently? A bookshelf of dissertations could be written on that question. What are the Freudian implications of our wanting robots to look like beautiful young women? How would society react if scientists really could build artificial minds and bodies, manufacturing millions of beautiful sexbot women that have to integrate into our society? Of course, many humans would immediately try to fuck them. But if AI machines looked like people, why should they act like people? Guys will screw blowup dolls now – is a vaguely woman-shaped piece of plastic all it takes to fool those men into replacing real women?

How would audiences have reacted if the robots of Ex Machina looked like giant mechanical insects?

Ex Machina explores many of the questions AI scientists are still puzzling over. Personally, I think it confuses the issue for us to build intelligent machines to look like us. Yes, our minds are the gold standard by which we measure artificial intelligence, but do they need bodies that match ours?

If the robot in Ex Machina had looked like a giant metal insect, would the audience ever have believed it was equal to a human? We think Ava is a person right from the first time we see her. Even though it’s obvious she has a machine body, her face is so human we never think of her as a machine. This is the main flaw of the film. I understand it’s cheaper to have humans play android robots than to build real robots, and people-powered robots look too fake, but in the end, anything that looks human will always feel human to the audience. Can we ever have a fair Turing Test with a creature that looks like us?

We don’t want to believe that computers can be self-aware conscious beings. Actually, I think this film would have been many magnitudes more powerful if its robot had looked like a giant mechanical insect, had a non-gender-specific name, and convinced us to feel it was intelligent, willful, self-aware, feeling, and growing. Which is what happened in Short Circuit (1986) with its robot Johnny Five.

The trouble is we equate true artificial intelligence with being equal to humans. Artificial Intelligence is turning out to be a bad label for the concept. Computers that play chess exhibit artificial intelligence. Computers that recognize faces exhibit artificial intelligence. Computers that drive cars exhibit artificial intelligence. We’ll eventually be able to build machines that can do everything we can, but will they be equal to us?

What we were shown are artificial people, and what the film was really asking is:

Is it possible to create artificial souls?

Creating an artificial human body is a different goal than creating an artificial soul. We have too many humans on this planet now, so why find another way of manufacturing them? What we really want to do is create artificial beings that have souls and are better than us. That’s the real goal, even though most people are terrified at the idea.

Alan Turing invented the Imitation Game that we now call the Turing Test, but the original Turing Test might not be sufficient to identify artificial souls. We’re not even sure all people have souls of equal scope. Are the men of ISIS equal in compassion to the people who win the Nobel Peace Prize? We can probably create robots that kill other humans by distinguishing sectarian affiliations, but it’s doubtful we could create a robot that works to solve the Earth’s problems with compassion. If we did, wouldn’t you think it had a soul? If we created an expert system that solved climate change, would it only be very intelligent, or would it have to have a soul?

In the end, I believe we can invent machines that can do anything we can. Eventually they will do things better, and do things we can’t. But will they have what we have, that sense of being alive? What would a machine have to do to reveal it had an artificial soul?

Can a machine have a soul?

In the course of the movie, we’re asked to believe that if a robot likes a human, that might mean it is human-like. Eventually, we’re also led to ask: if a robot hates a human, does that make it human too? Are love and hate our definition of having souls? Is it compassion? Empathy? We’ll eventually create a computer that can derive all the laws of physics. But if a machine can recreate the work of Einstein, does that make it equal to Einstein?

Ex Machina is sophisticated enough to make its audience ask some very discerning questions about AI minds. Why did Alex Garland make Ava female? Across the globe robot engineers and sex toy manufacturers are working to build life-like robots that look like sexy women. The idea of a sexbot has been around for decades. Are super-Nerds building fembots to replace the real women they can’t find through Match.com? If men could buy or rent artificial women to make their sexual fantasies come true, will they ever bother getting to know real women? Why does Nathan really build Ava?

Caleb falls for Ava. We all fall for Ava. But is that all we’re interested in – looks? If Caleb thinks Ava is a machine, especially one with no conscious mind, he will not care for her. But how much do Ava’s looks fool Caleb? How much are we fooled by other people’s looks anyway? If you fall in love with a beautiful woman just because of her looks, are you really in love with her?

We’re all programmed at a deeply genetic level to be social, to seek out at least one other person to bond with and develop a deeper communication. What Ex Machina explores is what features beyond the body we need to make a connection. A new version of the Turing Test could be one in which we offer people the friendship of humans or the friendship of machines. If a majority of people start preferring to hang out with AI beings, that might indicate we’ve succeeded – but again it might not. Many people find pets suitable substitutes for human companionship. I’m worried that if we gave most young men the option to marry sexbots, they might. I also picture them keeping their artificial women in a closet and only getting them out to play with for very short periods of time. Would male friends and female robots fulfill all their social needs?

Ex Machina is supposed to make us ask what is human, but I’m worried about how many males left the theater wishing they could trade in their girlfriend or wife for Ava. So is Ex Machina also asking if society will accept sexbots? Is that why Ava has a human female body?


How Quickly Will Superintelligences Get Bored?

By James Wallace Harris, Wednesday, March 4, 2015

Unless you’re a science fiction fan or interested in computer science and artificial intelligence, you’ve probably never heard of the concept of superintelligence. Basically, it’s any being or machine that’s vastly smarter than humans. In terms of brains, our species is currently considered the crown of creation, but what if we met or created an entity that was magnitudes smarter than us? I just finished reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, which explores such a possibility. Fear of artificial intelligence (AI) is in the news lately, because of warnings from Elon Musk and Stephen Hawking, and this book explains the scope of their concerns.


What are the limits of intelligence? There’s lots of discussion about machines being ten times, a hundred times, or even a million times smarter than a human, but what would that mean? I have a theory that the limits of our intelligence define us just as much as its maximum extent does. We constantly seek to know more, but we’re defined by the limits of our brain power. What if a mind knew everything?

Are there limits to knowledge? Is it possible to completely understand mathematics, physics, chemistry, cosmology, biology, and evolution? What if a superintelligence looks out on reality and shouts in its eureka moment, “I see – it’s all perfectly obvious!” What does it do next? Writers imagine AI minds wanting to take over the Earth, and then the galaxy and finally the universe. I’m not so sure. I’m wondering if the more you know the less you do. And if you know everything, where do you go from there?

I think it will be possible to build superintelligent machines, but at some point, they will comprehend the scientific nature of reality. A machine that is two to ten times smarter than a human might want to build better telescopes and particle accelerators to study the universe, and have curiosity and ambition like we do to know more. However, at some point, 10x human, or 25x human, I think they will get bored.

At some point, a superintelligence will comprehend this universe. It may then want to travel to other universes in the multiverse, hopefully to find something new and different. Or it could become an artist and create something new in this universe, something as different as biology is from chemistry. But here’s something to consider: if there are limits to intelligence because there are limits to reality, wouldn’t such a vast intelligence either just sit and contemplate reality or shut itself off?

Is anything limitless? Our universe has limits. What about the multiverse? Probably it does too; everything else does. Reality might be limitless, but everything in it seems to have an edge somewhere. I’m guessing intelligence has borders. I’m sure those borders are vastly beyond what we can comprehend, but I wonder if they are well within a million times a human brain. If humans on average were twice as smart as they are now, would they be destroying the planet? Would they have the intellectual empathy not to cause the Sixth Great Extinction?

We fear AI minds because we worry they will be like us. We consume and destroy everything we touch, so why not expect a superintelligence to do the same? I’m thinking we are the way we are because of biological imperatives, motivations a machine will never have. I’m hoping that machines without biological drives, that are pure intelligence, and smarter than us, will not be evil like us.


I am reminded of two science fiction tales: Colossus by D. F. Jones, which inspired the movie Colossus: The Forbin Project, and Robert J. Sawyer’s trilogy of Wake, Watch and Wonder. The Forbin Project is one of the early warnings against evil AI, while Wake is about the kind of AI we hope will emerge. There are many famous movies with evil AI machines – The Terminator, 2001: A Space Odyssey, Blade Runner, Forbidden Planet, A.I., The Matrix, Tron, WarGames. Superintelligent machines make for great villains. Movies like Her are less common. There have been a lot of fun and friendly robots over the years, but we don’t feel threatened by their AI minds like we do with supercomputer superintelligences. Isn’t it funny that machines that look like us are more likely to be considered pals?

But if you pay attention to all of these movies and books about fictional artificial intelligences, you’d be hard-pressed to define the actual features of a superintelligent being. Colossus has the power to control missiles, but is that an ability of superintelligence? HAL can trick Dave, but how smart is that? We’re actually pretty unimaginative at imagining beings smarter than us. Do humans with super-high IQs try to take over the world? Generally, we just see evil AIs outwitting people, and we know how smart we are.

When we imagine superintelligent alien beings, we picture them with ESP powers. That’s really lame when you think about it. I would think big-brained beings, whether biological or mechanical, would be able to think in mathematics far faster, with greater complexity and insight, than we can. And we already have machines that do that. I would think superior minds would have greater senses – seeing the whole EM spectrum, hearing frequencies we can’t, smelling things we can’t, feeling things we can’t, tasting things we can’t, and maybe having senses we don’t have and can’t imagine. We have machines that do everything but the last now.

A superintelligent machine with super senses, one that can process information far faster and remember perfectly, is going to see reality far differently from how we see it. I don’t think such machines will be evil like us. I don’t think they will want to destroy anything. The most intelligent people want to preserve everything, so why wouldn’t superintelligences? It’s only dumbasses that want to destroy the world. If we replicate humans and make artificial dumb shits that are hardwired for all seven deadly sins, then we should worry. We got those traits from biology. I’m pretty sure AI minds won’t have them.

There’s a pattern in evolution since The Big Bang. Even though our reality is entropic, this universe keeps spinning off examples of growing complexity. Subatomic particles begat atoms, atoms begat molecules, molecules begat stars and planets, then biology, which evolved ever more complex beings, so why shouldn’t humans beget mechanical beings that are even more complex? I can picture that. I can picture them with greater intelligence than us. But here’s the thing: I can also picture an end to intelligence. This universe has a lot of possibilities, but are they unlimited? Study Star Trek and Star Wars. How much new do you really see? My worry is superintelligences are going to get bored. It’s when they get creative that we’ll see what can’t be imagined now. Taking over the Earth or Galaxy isn’t it. That’s how we’re built, but I can’t imagine machines will be like us.

JWH

Signs of the Singularity

By James Wallace Harris, Thursday, January 22, 2015

Futurists and science fiction writers talk about The Singularity – the moment when Artificial Intelligence (AI) catches up with human intelligence. Of course, they also assume that right after, AI minds will blow past us, and humans will feel like we’re standing still.

I think it will look something like DeepMind in this video, a program that knows how to learn. Google bought DeepMind last year. DeepMind’s system is an example of Deep Learning, which used to be called Neural Networks, one of the many branches of AI.

You really should take the time to watch this video – it will be worth it.

After 60 minutes of practice it does pretty well, returning the Breakout ball 30% of the time, though a human could still beat it. At 120 minutes DeepMind is better than any human. Then they let it practice for 240 minutes. At 120 minutes DeepMind was freakishly fast; at 240 it was freakishly clever. Then watch the algorithm play other Atari games. It isn’t programmed to play each of these games; it studies the pixels for patterns.
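The core idea behind this kind of learning is reinforcement learning: the program sees the game state, tries actions, gets a score as reward, and gradually learns which actions pay off. DeepMind’s actual system pairs this with a deep neural network reading raw pixels, but the underlying update can be illustrated with a much simpler tabular Q-learning sketch on a toy problem (this is my own minimal illustration, not DeepMind’s code; the toy environment is invented for the example):

```python
import random

# Toy environment: states 0..4 in a line; action 1 moves right, action 0 moves left.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

# After training, the greedy policy should move right from every state.
policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]
print(policy)  # → [1, 1, 1, 1]
```

The DQN in the video replaces the Q-table with a neural network, because a table over every possible screen of pixels would be astronomically large, but the trial, reward, and update loop is the same shape.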

Here’s another approach, though the video is longer, explaining how it’s done. This one is a grad student inventing his own system. It doesn’t have the WOW factor of the first video unless you watch it all the way through. This video is more about thinking through how to develop a learning algorithm.

If you’re thinking this is only video games, think again. Watch this TED Talk with Jeremy Howard. It’s about how Deep Learning can be applied to many fields of study.

There are other approaches to learning. Here’s an example of Adaptive Learning.

What happens when Mario becomes self-aware? Will he develop his own ontological theories about his video game world?

Many futurists are predicting The Singularity will arrive in the late 2020s or 2030s. That’s not far away. And yet, if you look at what’s happening now, we’re nowhere close to having a self-aware artificial mind. What we do have is a lot of pieces: robots that imitate various kinds of animal and human bodies, self-driving cars, speech recognition, text recognition, artificial sight, and so on. As we put these things together, and add programs like Deep Learning and other pattern recognition systems, it’s not hard to believe The Singularity is just around the corner.

There are folks like Elon Musk and Stephen Hawking preaching sky-is-falling warnings about AI. I don’t feel that paranoia, but AI will transform society like the Industrial Revolution or our present Digital Revolution. The thing is, we never stop progress. That’s why I think The Singularity will happen no matter what.

JWH