Would You Nap in a Self-Driving Car?

By James Wallace Harris, Tuesday, July 14, 2015

It won’t be long before self-driving cars are common. First tested in California, they’re now being let loose in Texas. It’s doubtful you’ll see one anytime soon, but maybe by 2020 or 2025. Since I easily remember a time before smartphones, that will be fast enough. We’ll go through a phase where regular cars get more and more auto-pilot features, but sooner or later we’ll have cars without steering wheels.


But I wonder how many people will feel safe in such cars. It sounds a bit creepy to me. But what if they turn out to be perfectly safe? Would you feel comfortable enough to take a nap while zooming down the expressway? Would you send your kids off to school without going with them? In another twenty years I’ll be reaching an age where I should give up my keys, so self-driving cars might extend my years of autonomy.

Will we reach a time when a human driving a car will scare us?

How will you feel seeing cars tooling down the highway with no people in them? It might be practical to go to work and tell the car to go home so another family member could use it. It might be possible to have taxis, Uber and Lyft vehicles roaming the roads without drivers.

I can remember a time before cellphones, personal computers, the internet, and a bunch of other technological marvels. I’m not that old at 63, but I’m reaching an age where so much change is wearisome. I remember talking to my grandmother, who was born in 1881, about her life before cars, planes, radios and televisions. I’m sure she met old folks who remembered times before telegraphs and steam engines. Before these sped-up centuries, our species often went hundreds or thousands of years without much change. Neanderthals went tens of thousands, even hundreds of thousands, of years that way.

I wonder why everything has sped up so much lately. Will things ever slow down again?

We need to expect other kinds of changes, beyond the constant churn of gadgets. Imagine the economic and social changes. If cars are smart enough to drive themselves, why should we own them? Why not let them seek their own most efficient utilization? If we combined ride sharing with robotic cars, we’d drastically change the whole economy, and maybe help the environment.

Yet, that will put a lot of people out of work. Are we really sure we want the future we’re rushing into?


Should Robots Be A Major Political Issue in 2016?

By James Wallace Harris, Sunday, July 12, 2015

We need to decide if we really want robots. Why are we working so diligently to build our own replacements? We need to decide before it’s too late.


As Democrats and Republicans declare themselves candidates for president in 2016, they each scope out issues they hope will define their electability. Donald Trump has gotten massive free PR by making very ugly statements about immigration. Bernie Sanders is staking claims around fair income and wealth inequality. None of the candidates have focused on what I consider the defining issue for the next president: climate change. However, I’m also discovering a growing number of reports about automation, robots and artificial intelligence that make me wonder if robots shouldn’t be second to climate change on the 2016 party platforms.

Climate change, automation and wealth inequality are all interrelated. Illegal immigration is a minor issue in comparison. In fact, most of what the current crop of candidates focus on are old, moldy issues that are far from vital to our country. The 2016 election will define our focus until 2020, or even 2024. We’re well into the 21st century, so it’s past time to forget about 20th century issues.

If you doubt me, read “A World Without Work” from the latest issue of The Atlantic. Derek Thompson does a precise job of stating his case, so I won’t repeat it. Let’s just say that between automation and wealth inequality, there are going to be a lot of people without jobs, and the middle class will continue to shrink at an even faster rate. Bernie Sanders’ political sniffer is following the trail that will impact the most voters. Reporters should trail Sanders and not go panting after Trump. Follow smart people, not fools.

Another way to grasp the impact of the robot revolution is to sign up for News360.com and follow the topics robots, manufacturing automation, machine learning, natural language processing and artificial intelligence. Over time you’ll get my point. Our society is racing to create intelligent machines. I’m all for it, but I’m a science fiction geek. If we don’t want to make ourselves into Neanderthals, we should think seriously about evolving homo roboticus. Being #2 in the IQ rankings will suck. But then, if we embrace plutocracy and xenophobia, maybe we deserve to be replaced by AI machines.

If all of this is too much trouble, and you just want to learn through the emotional catharsis of fiction, watch the new TV show Humans on AMC. The show covers all the major robot issues, sometimes in subtle ways, so spend some time thinking about the individual scenes. Humans is very creative. Then start flipping the channels and pay attention to how often robots and AI come up in other shows. It’s like all the water rushing away from the shorelines; we need to worry about when the tsunami will hit us.


Appeasing Our Future AI Descendants

By James Wallace Harris, Saturday, July 10, 2015

There’s a famous cartoon where scientists ask a supercomputer, “Is there a God?” And the machine replies, “There is now.” Humans need to get their act together before we face the judgment of AI minds. In recent months, many famous people have expressed their fears of the coming singularity, the event in history when machines surpass the intelligence of humans. These anxious prophets assume machines will wipe us out, like terminators. Paranoia runs deep when it comes to predicting the motives of superior beings.

Let’s extrapolate a different fate. What if machines don’t want to wipe us out? Most of our fear of artificial intelligence comes from assuming they will be like us, wanting to conquer and destroy. What if they are like the famous spiritual and philosophical figures of the past, forgiving and teaching? What if they are more like Gandhi and less like Stalin? What if their vast knowledge and thinking power lets them see that homo sapiens are destroying the planet, killing each other, and endangering all other species? Instead of destroying us, what if AI minds want to save us? If you were a vastly superior being, wouldn’t you be threatened by a species that grows over the planet like a cancer? Would you condemn or redeem?

But what if they merely judged us as sinners to be enlightened?

The Humanoids by Jack Williamson (cover art by EMSH)

I’m currently rereading The Humanoids by Jack Williamson. In this story robots create the perfect Nanny State and treat us like children, keeping everything dangerous out of our hands. In many science fiction stories, AI beings seek to sterilize Earth of biological beings the way we exterminate rats and cockroaches.

What other possible stances could future AI minds take towards us?

Shouldn’t we consider making ourselves worthy before we create our evolutionary descendants? If intelligent machines will be the children of humanity, shouldn’t we become better parents first?


Why Did The Robot in Ex Machina Look Like a Beautiful Woman?

By James Wallace Harris, Thursday, April 30, 2015

Ex Machina is a 2015 British science fiction film about artificial intelligence (AI) written and directed by Alex Garland. The story is about a billionaire  who connives to have a brilliant programmer come to a secret location to Turing Test a robot prototype. Oscar Isaac plays Nathan Bateman, the billionaire, Domhnall Gleeson plays Caleb Smith, the programmer, and Alicia Vikander plays Ava, the AI robot.  The film has little action but is quite thrilling. And I’m overjoyed to have a science fiction movie without silly macho weapons, fantasy feats of martial arts, and cartoonish battles to save the world.

Ex Machina asks, as computer scientists have been asking for the last sixty years, and philosophers for the last 2,500, what makes us human? Once we understood how evolution shaped life, we knew that whatever qualities make us different from animals should explain our humanity. Artificial intelligence seeks to reproduce those qualities in a machine. We have yet to define and understand what makes us human, and robot engineers are far from building machines that demonstrate humanness.

Although I’m going to be asking a lot of questions about Ex Machina, my questions aren’t meant to be criticisms. Ex Machina entices its audience to think very hard about the nature of artificial intelligence. I hope it makes people think even more about the movie, as I’m doing here.


The main idea I want to explore is why the robot had a female form. The obvious answer is that moviegoers find sexy females appealing. But is looking human the same as being human? AI scientists have always wondered if they could build a machine that average people couldn’t distinguish from a human, but they always planned the tests so Turing testers couldn’t see the humans and machines. However, in movies and books, we get to see the machine beings. Adding looks to the equation makes things more complicated.

Because so many robot engineers and storytellers make their robots look like human females, we have to ask:

Would Ex Machina have the same impact if the robot had a human male shape or non-human shape?

Is the female body the ultimate human form in our minds? In a movie that explores whether a machine can have a self-aware conscious mind, isn’t it cheating to make it look just like a human? Since we judge books by their covers, wouldn’t most people assume a mechanical being that looks and acts exactly like a beautiful woman is human? By the way, I can’t wait to see how feminists analyze this film. Imagine seeing this movie a different way: instead of asking if robots have souls, the film could be asking if women have souls. In the theater, we could also see two extremely intelligent men testing to see if a beautiful woman is their equal.

By making the robots female, the filmmakers both confuse the machine intelligence issue, and add a layer of gender issues. It also shoves us into the Philip K. Dick headspace of wondering about our own nature. Is everyone you know equal to you? Do they think just like you? Do they feel just like you? Could some people we know be machines? What makes us different from a machine or animal? In the book Blade Runner was based on, Do Androids Dream of Electric Sheep?, Dick was comparing soulless humans to machines with his androids. Machines are his metaphor for people without empathy.

If the two scientists had been played by actresses, and the robot was a sexy actor, how would we have interpreted the movie differently? A bookshelf of dissertations could be written on that question. What are the Freudian implications of us wanting the robots to look like beautiful young women? How would society react if scientists really could build artificial minds and bodies, manufacturing millions of beautiful female sexbots that have to integrate into our society? Of course, many humans will immediately try to fuck them. But if AI machines looked like people, why should they act like people? Guys will screw blowup dolls now. Is a vaguely woman-shaped piece of plastic all it takes to fool those men into replacing real women?

How would audiences have reacted if the robots of Ex Machina looked like giant mechanical insects?

Ex Machina explores many of the questions AI scientists are still puzzling over. Personally, I think it confuses the issue for us to build intelligent machines to look like us. Yes, our minds are the gold standard by which we measure artificial intelligence, but do they need bodies that match ours?

If the robot in Ex Machina had looked like a giant metal insect, would the audience ever have believed it was equal to a human? We think Ava is a person right from the first time we see her. Even though it’s obvious she has a machine body, her face is so human we never think of her as a machine. This is the main flaw of the film. I understand it’s cheaper to have humans play android robots than to build real robots, and human-powered robots look too fake, but in the end, anything that looks human will always feel human to the audience. Can we ever have a fair Turing Test with a creature that looks like us?

We don’t want to believe that computers can be self-aware conscious beings. Actually, I think this film would have been many magnitudes more powerful if its robot had looked like a giant mechanical insect, had a non-gender-specific name, and convinced us to feel it was intelligent, willful, self-aware, feeling, and growing. Which is what happened in Short Circuit (1986) with its robot Johnny Five.

The trouble is we equate true artificial intelligence with being equal to humans. Artificial Intelligence is turning out to be a bad label for the concept. Computers that play chess exhibit artificial intelligence. Computers that recognize faces exhibit artificial intelligence. Computers that drive cars exhibit artificial intelligence. We’ll eventually be able to build machines that can do everything we can, but will they be equal to us?

What we were shown were artificial people, and what the film was really asking is:

Is it possible to create artificial souls?

Creating an artificial human body is a different goal than creating an artificial soul. We have too many humans on this planet now, so why find another way of manufacturing them? What we really want to do is create artificial beings that have souls and are better than us. That’s the real goal, even though most people are terrified at the idea.

Alan Turing invented the Imitation Game that we now call the Turing Test, but the original Turing Test might not be sufficient to identify artificial souls. We’re not even sure all people have souls of equal scope. Are the men of ISIS equal in compassion to the people who win a Nobel Peace Prize? We can probably create robots that kill other humans by distinguishing sectarian affiliations, but it’s doubtful we could create a robot that works to solve the Earth’s problems with compassion. If we did, wouldn’t you think it had a soul? If we created an expert system that solved climate change, would it only be very intelligent, or would it have to have a soul?

In the end, I believe we can invent machines that can do anything we can. Eventually they will do things better, and do things we can’t. But will they have what we have, that sense of being alive? What would a machine have to do to reveal it had an artificial soul?

Can a machine have a soul?

In the course of the movie, we’re asked to believe that if a robot likes a human, that might mean it is human-like. Eventually, we’re also led to ask: if a robot hates a human, does that make it human too? Are love and hate our definition of having souls? Is it compassion? Empathy? We’ll eventually create a computer that can derive all the laws of physics. But if a machine can recreate the work of Einstein, does that make it equal to Einstein?

Ex Machina is sophisticated enough to make its audience ask some very discerning questions about AI minds. Why did Alex Garland make Ava female? Across the globe, robot engineers and sex toy manufacturers are working to build lifelike robots that look like sexy women. The idea of a sexbot has been around for decades. Are super-nerds building fembots to replace the real women they can’t find through Match.com? If men could buy or rent artificial women to make their sexual fantasies come true, would they ever bother getting to know real women? Why does Nathan really build Ava?

Caleb falls for Ava. We all fall for Ava. But is that all we’re interested in – looks? If Caleb thinks Ava is a machine, especially one with no conscious mind, he will not care for her. But how much do Ava’s looks fool Caleb? How much are we fooled by other people’s looks anyway? If you fall in love with a beautiful woman just because of her looks, does that justify thinking you’re in love with her?

We’re all programmed at a deep genetic level to be social, to seek out at least one other person to bond with and develop a deeper communication. What Ex Machina explores is what features beyond the body we need to make a connection. A new version of the Turing Test could be one in which we offer people the friendship of humans or the friendship of machines. If a majority of people started preferring to hang out with AI beings, that might indicate we’ve succeeded – but again, it might not. Many people find pets suitable substitutes for human companionship. I worry that if we gave most young men the option to marry sexbots, they might. I also picture them keeping their artificial women in a closet and only getting them out to play with for very short periods of time. Would male friends and female robots fulfill all their social needs?

Ex Machina is supposed to make us ask what is human, but I worry how many males left the theater wishing they could trade in their girlfriend or wife for Ava. So is Ex Machina also asking if society will accept sexbots? Is that why Ava had a human female body?


By 2020 Robots Will Be Able to Do Most People’s Jobs

By James Wallace Harris, Wednesday, December 17, 2014

People commonly accept that robots are replacing humans at manual labor, but think they will never replace us at mental labor, believing that our brain power and creativity are exclusive to biological beings. Think again. Watch this video from Jeremy Howard; it will be worth the twenty minutes it costs you. It’s one of the most impactful TED Talks I’ve seen.

What Howard is reporting on is machine learning, especially Deep Learning. Humans could never program machines to think, but what if machines learn to think through interaction with reality – like we do?

But just before I watched that TED Talk, I came across this article, “It’s Happening: Robots May Be The Creative Artists of the Future,” over at MakeUseOf. Brad Merrill reviews robots that write essays, compose music, paint pictures and learn to see. Here’s the thing: up till now, we’ve thought of robots as doing physical tasks programmed by humans. We picture human minds analyzing all the possible steps in a task, and then creating algorithms in a computer language to get computers to do jobs we don’t want to do. But could we ever tell a computer to “Compose me a melody!” without defining all the steps?

The example Jeremy Howard gives of machine learning is Arthur Samuel teaching a computer to play checkers. Instead of programming all the possible moves and game strategy, Samuel programmed the computer to play checkers against itself and learn the game through experience – he programmed a learning method. That was a long time ago. We’re now teaching computers to see by giving them millions of photographs to analyze, and then helping them learn the common names for the distinctive objects they detect. Sort of like what we do with kids when they point to a dog.
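
To make the learning-method idea concrete, here’s a minimal sketch in Python. It uses tic-tac-toe instead of checkers to keep it short, and nothing in it comes from Samuel’s actual program; it just shows a program that is given a way to learn (a value table nudged toward game outcomes) rather than a strategy.

```python
# Minimal self-play learner, in the spirit of Samuel's checkers player.
# We never tell it how to play; we only give it the rules and a way to
# update its value estimates from the games it plays against itself.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    """All empty squares."""
    return [i for i, cell in enumerate(board) if cell == ' ']

values = {}  # board string -> estimated chance that X eventually wins

def value(board):
    return values.get(''.join(board), 0.5)  # unseen positions start neutral

def choose(board, player, epsilon=0.1):
    """Greedy move by current value estimates, with occasional exploration."""
    if random.random() < epsilon:
        return random.choice(moves(board))
    best_move, best_v = None, -1.0
    for m in moves(board):
        board[m] = player
        v = value(board) if player == 'X' else 1.0 - value(board)
        board[m] = ' '
        if v > best_v:
            best_move, best_v = m, v
    return best_move

def self_play_game(alpha=0.2):
    """Play one game against ourselves and back up the result."""
    board, player, history = [' '] * 9, 'X', []
    while moves(board) and not winner(board):
        board[choose(board, player)] = player
        history.append(''.join(board))
        player = 'O' if player == 'X' else 'X'
    target = {'X': 1.0, 'O': 0.0, None: 0.5}[winner(board)]
    for state in reversed(history):  # nudge earlier positions toward the outcome
        v = values.get(state, 0.5)
        values[state] = v + alpha * (target - v)
        target = values[state]

for _ in range(20000):
    self_play_game()
print(f"learned value estimates for {len(values)} positions")
```

The only game-specific knowledge in the sketch is the rules. The playing strength comes entirely from the self-play experience, which is Samuel’s point.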

What has kept robots in factories doing grunt work is that they can’t see and hear like we do, or understand language and talk like people. What’s happening in computer science right now is that researchers can get computers to do each of these things separately, and are close to combining all these human-like abilities into one system. How many humans will McDonald’s hire to take orders when it has a machine that listens and talks to customers and works 24x7x365 with no breaks? As Howard points out, 80% of the workforce in most industrialized countries are service workers. What happens when machines can do service work cheaper than humans?
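
As a thought experiment, here’s a toy sketch of what combining those separate abilities into one order-taking system might look like. Every function is a deliberately trivial stand-in I made up for illustration; a real system would plug in actual speech recognition, language understanding and speech synthesis models.

```python
# Toy stand-in for the order-taking system the paragraph imagines:
# separately solved abilities (hearing, understanding, speaking)
# chained into one worker. Each stage is trivial on purpose.

MENU = {"burger": 3.99, "fries": 1.99, "shake": 2.49}

def hear(audio: str) -> str:
    """Stand-in for speech recognition; we pretend audio is already text."""
    return audio.lower()

def understand(text: str) -> list[str]:
    """Stand-in for language understanding: spot menu items in the order."""
    return [item for item in MENU if item in text]

def speak(items: list[str]) -> str:
    """Stand-in for speech synthesis: compose the reply to the customer."""
    if not items:
        return "Sorry, I didn't catch that. What would you like?"
    total = sum(MENU[i] for i in items)
    return f"One {', one '.join(items)}. That's ${total:.2f}. Drive through, please."

def take_order(audio: str) -> str:
    """The whole 'worker': hearing, understanding and speaking in one loop."""
    return speak(understand(hear(audio)))

print(take_order("I'd like a burger and fries"))
# -> One burger, one fries. That's $5.98. Drive through, please.
```

The architecture is the point: once each stage works well enough on its own, chaining them turns three research problems into one tireless employee.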

Corporations are out to make money. If they can find any way to do something cheaper, they will, and one of the biggest ways to eliminate overhead is to get rid of humans. Greed is the driving force of our economy and politics. We will not stop or outlaw automation. Over at io9, they offer “12 Reasons Robots Will Always Have An Advantage Over Humans.”

Now, I’m not even saying we should stop all of this. I doubt we could anyway. I’m saying we need to learn to adapt to living with machines. A good example is chess. Machines can already beat humans, so why keep playing? But what if you combined humans and chess machines to play as teams against other teams – who would win? Read “The Chess Master and the Computer” by Garry Kasparov over at The New York Review of Books. In a 2005 free-for-all match, it wasn’t Grand Masters with supercomputers that won, but two so-so amateur players using three regular computers. As Howard points out, humans without medical experience are using Deep Learning programs to analyze medical scans and diagnose cancers as well as or better than experienced doctors.


When Jeremy Howard talks about Deep Learning algorithms, I wish I had a machine that could read the internet for me and process thousands of articles to help me write essays. Then I could say to my computer, “Find me 12 computer programs that paint artistically, with links to their artwork,” and wouldn’t have to do all the grunt work with Google myself. For example, it should find Harold Cohen’s AI artist, AARON. I found that with a little effort, but who else is working in this area around the world? Finding that out would take a good bit of work, which I’d like to offload.

Imagine the science fiction novel I could write with the aid of an intelligent machine. I think we’re getting close to when computers can be research assistants, but in five or ten years they might not need us at all, and could write their own science fiction novels. Will a computer program win the Hugo Award for best novel someday? And after that, might human and machine co-authors write an even more thrilling novel of wonder?


Could You Love A Robotic Puppy?

If you could buy a robotic puppy that was indistinguishable from a real puppy except that it didn’t eat, drink or go to the bathroom, would it be as satisfying as having a real puppy?  This is probably theoretical, because I don’t know if they could ever invent a robot puppy that smelled like a real puppy, but let’s imagine they could.  One that felt, smelled, sounded and looked just like a real puppy.  I’m assuming people don’t taste their pups, but roboticists could add that feature too if needed.  If this imaginary puppy was a bundle of energy and friskiness that squirmed and played, licked and nuzzled, just like a real little doggie, would you want one?


Recently I read that a growing statistical segment of young women are choosing to buy small dogs rather than have babies.  Small dogs and puppies trigger our gotta-love-the-baby response, so many people find puppies a good substitute to love.  What if what we like about puppies is having this baby-love button pushed, and what if a robotic puppy pushed the button as well as a real puppy?  If a robot puppy triggered your need to love something cute, and it felt like it loved you unconditionally, the way we want dogs and small children to love us, would it fulfill your needs so you no longer needed a real baby or real puppy?

Recently in Great Britain they conducted a survey asking people if they’d have sex with a robot.  Imagine being able to buy an android that looked exactly like the movie star you find most sexually attractive.  Would you bother dating if such a robot took care of all your emotional, sexual and conversational needs?

The Japanese are working on robots to be caretakers for the elderly.  If you were old, and living alone and your children didn’t visit much, or not at all, and you couldn’t get out much, would you find a robot good company?


In all these cases I have to ask:  What is real?  If we can trick our brains into satisfying their need for cuteness, receiving and giving love, sex and companionship, will we feel it’s real enough?  I’ve been thinking about getting a puppy, but when I think about how many times I’m going to have to watch it poop & pee, or take it for a walk, I tell myself I’m crazy.  But on the other hand, I feel that if it didn’t poop & pee I’d be missing out on the real experience.

There’s a vast difference between current robotic puppies and real pups.

But what if a robot puppy was as cute and cuddly, warm and fuzzy, wiggly and smelly, as a real puppy?  Wouldn’t my brain urge me to pick it up and play with it?  What if it was as intelligent as a dog, could learn, and was as self-aware as a dog?  Would it be easier to love a robot if we thought the robot could feel our love?

Little children will play with dolls and stuffed toys for hours as substitutes for babies and animals.  Adults will read books, watch television shows and movies, and play video games that simulate reality no better than current robotic dogs.  Drug addicts will seek out their drug of choice to replicate sensations, moods, emotions and feelings they can’t find in real life.   Many people eat junk food rather than real food. We’re already quite used to fooling ourselves. 

Obviously, we’re creatures with urges, appetites, impulses and desires that can be fooled by substitutes for what evolution originally programmed us to seek out.  And what explains our desire for work that has no relationship with nature?  Why will some people spend hours doing mathematics?  In other words, we have created new, novel, and unnatural forms of stimulus to occupy our brains.

I write this essay to contemplate why I desire certain inputs and stimuli for my brain.  If you love puppies, have you ever wondered why?


JWH – 6/30/14

What Happens When Humans Aren’t the Smartest Beings on Earth?

What if people weren’t the crown of creation?  What if we had to play second banana to Humans 2.0, AI machines, visiting aliens, cyborgs or other potentially smarter beings?  I think our fear is they would treat us like we have treated chimpanzees.  What if intelligent machines emerge, homo sapiens superior evolve and we make SETI contact, and suddenly we’re number four on the totem pole of intelligence?


Unless we destroy the planet and make ourselves extinct, sooner or later we’re going to be replaced at the top of the smart chart.  How will that affect us personally, our society, and how we think about our future?  Most primitive cultures haven’t fared well when contacted by modern humans.  Science fiction has been preparing us for centuries, but I’m not sure it has done a good enough job covering all the possibilities.

Possible Replacements

It doesn’t take a lot of time to think up possible replacements who could claim our throne as being the smartest beings on the planet.

  • Genetically enhanced humans
  • Naturally evolved humans
  • Artificial beings
  • Cyborgs
  • Uploaded humans
  • AI super computers
  • Robots
  • Androids
  • Alien visitors
  • SETI contact

I’m not sure we aren’t already seeing natural selection at work in our species.  Our severely polarized society, divided between liberals and conservatives, between the scientific and the religious, between the secular and the sacred, might already be moving us toward separate species.  The conservative faction that clings to the past is becoming anti-intellectual and anti-education.  If the scientifically minded only breed with the scientifically minded, won’t they produce a line of smarter humans?  Of course, natural selection doesn’t always produce successful adaptations.  Some people have suggested the rise of autism comes from overly smart people mating with other overly smart people.  It might turn out that intelligence isn’t an important trait, or one vital for survival.

Then there is genetic engineering.  Think of the movie Gattaca, the old classic Brave New World, or Beggars in Spain by Nancy Kress.  We’re getting very close to making customized homo sapiens sapiens.  In just a few generations we could have a new species that makes us look outdated.  Gattaca was a salute to the natural human, but was it realistic?  We loved Vincent for competing and winning, but can humans really compete with super humans?  Again, we’re assuming that intelligence is a trait that wants to win out.

We might even be doing something now that will lead to more naturally evolved humans.  As more women select Caesarean sections for childbirth, we’re changing an important evolutionary constraint.  Our brain size has always been limited by the size of the birth canal – now it’s not.  Over time we might see new adaptations.  Read Greg Bear’s Darwin’s Radio.

Work with our genome has shown that DNA is an erector set for building biological machines.  How soon before we start creating new recipes?  Whole new artificial beings could be created, or animals could be uplifted to human intelligence and beyond.  Think of the science fiction of Cordwainer Smith, or H. G. Wells’ The Island of Dr. Moreau.

Google Glass might be our first step toward becoming cyborgs with auxiliary brain power.  Wearable computers, artificial limbs and senses could lead to supercharged brains and all those science fiction scenarios where people jacked into machines.

I’m not a big believer in uploading brains into computers, but a lot of people are.  Now that my body is getting old and failing, the idea is becoming more appealing.  People like Ray Kurzweil hope to find immortality this way, and such ideas have been the theme of many SF stories.  Sometimes those stories are wished for fantasies, and sometimes they are feared nightmares.

What I’m waiting for is the technological singularity.  AI super computers should be just around the corner if I can live long enough.  Many people fear AI minds with stories ranging from “Press Enter _” by John Varley to the latest movie Transcendence, but I’m hoping machine minds will be benign, or even indifferent to humans and animal life.

Who Do You Want To Do Your Brain Surgery?

After we get bumped down the intelligence list, how is that going to change society?  If you needed brain surgery, would you want a human or a post-human holding the scalpel?  Or would you prefer an AI mind that is 16 times smarter than a person?  If a human and a robot were running for President, who would you vote for?  Liberals like smart dudes, but conservatives don’t.  They like old friendly duffers like Ronald Reagan.  But what if the robot had the combined intelligence of all of Congress, the Supreme Court and every CEO in America?

We’re already designing smart cars to drive us because it will be safer, and we already have planes with automatic pilots.  How long before we have machines doing everything else for us?  Will we just sit around and eat bon-bons?

If we share Earth with beings more intelligent than us, won’t we ultimately let them run things?  What if they were smart enough to tell us how to handle global warming so we suffered the least, paid the least, but got the maximum benefits from changing our lives, thus making the Earth’s biosphere more stable?  What if they gave us wealth and security, and protected all the other species on the planet as well?  Would we say, hell no!  Would we say we prefer to take our chances with failure just so we could make our own decisions?

Democracy v. Plutocracy v. Oligarchy v. Cyberocracy

We like to think we currently rule ourselves through collective decision making, but more than likely we’re already an oligarchy or plutocracy, ruled by a limited number of rich people.  What if we could create powerful super computers that ruled us politically and ran the economy?  Would you prefer to be ruled by a handful of rich people, or a handful of smart machines?  Remember who flies your 787 now.  This idea scares the hell out of most people, but just how smart was George Bush at running the country, and how much better is Barack Obama, who most people would say is brainier?  What if decisions about taxes weren’t made by people filled with emotions?  What if we told the machines to maximize freedom, minimize taxation, maximize security, health and wealth, minimize pollution and environmental impact, and so on, and then just let them figure out the best way?

What If Post-Humans and Robots Are Atheists?

How will ordinary humans feel if their replacements reject God?  What if massive AI brains see nothing in reality to validate religion?  What if SETI aliens say, “What is a God?”  One common effect of western civilization on newly discovered primitive peoples is the demoralization of losing their gods.  Look what Europeans did to the Native Americans.  How are we going to feel when we’re invaded by post-humans and intelligent machines?  Will they make us move onto reservations?

The Art and Science We Can’t Imagine

What if our minds cannot feel the art or understand the science of our intellectual descendants?  We can look back over thousands of years, to what our ancestors imagined, built and perfected, and understand what they created.  We know them because we’re an extension of who they were.  When greater minds come after us, they will understand us, but will we know them?  At what point will we no longer be able to follow in their footsteps?  Whether we like it or not, our brains have limits.  We’ve always been used to exploring at the edge of reality, so what happens when we become aborigines to beings who see us as the first ones, and themselves as the later ones?  The ones who leave us behind.

Getting a PhD

Of course, being a scientist might not be as much fun if you had to compete with Human 2.0 folk, or AI minds.  Vincent in Gattaca pushed himself to inhuman efforts to compete against gene enhanced humans, but I’m not sure most people would do that.  AI minds could do a literature search for a PhD and distill the results in no time.  They would probably inherently know how to create and test a hypothesis, set up the experiments and research, and since they’d have math coprocessors in their brains, instantly do all the statistics.  Could any Human 2.0 or 3.0 individual compete with AI minds that are 16 or 64 times as smart as a Human 1.0 is now?


Life as a Lap Dog

If we couldn’t be the top dog, would we want to be a lap dog?  Or would we want to live like the Amish and exclude ourselves from the modern world?  Can you imagine a mixed society of Humans 1.0, Humans 2.0, AI minds, robots, cyborgs, androids, uplifted animals and artificial beings all coexisting happily, or even roughly so?  We don’t get along well with each other now, and we haven’t been too kind to our fellow animal citizens on this planet.  But then, maybe we’re the problem.

I already know I’m not the smartest geek in the group now.  I know I’m well down on the list of GRE scores.  I’m not a boss or a leader.  I’m not on the cutting edge of anything.  And most people are like me.  I putter around in my small land, ignoring most of the world.  Maybe that’s why I’m not scared of being replaced at the top, because I’m nowhere near the top.

You know, here’s a funny thing.  If an AI robot walked up to you at a party, one that has the brain power of 64 humans, what would you ask it?  What are you dying to know?  Is there anything the robot could tell you that would drastically change your life?  I’d probably say to it, “You read any good books lately?”

JWH – 4/23/14