By James Wallace Harris, Saturday, July 10, 2015
There’s a famous cartoon where scientists ask a supercomputer, “Is there a God?” And the machine replies, “There is now.” Humans need to get their act together before we face the judgment of AI minds. In recent months, many famous people have expressed their fears of the coming singularity, the event in history where machines surpass the intelligence of humans. These anxious prophets assume machines will wipe us out, like terminators. Paranoia runs deep when it comes to predicting the motives of superior beings.
Let’s extrapolate a different fate. What if machines don’t want to wipe us out? Most of our fear of Artificial Intelligence comes from assuming they will be like us—and will want to conquer and destroy. What if they are like the famous spiritual and philosophical figures of the past—forgiving and teaching? What if they are more like Gandhi and less like Stalin? What if their vast knowledge and thinking power lets them see that Homo sapiens is destroying the planet, killing each other, and endangering every other species? Instead of destroying us, what if AI minds want to save us? If you were a vastly superior being, wouldn’t you be threatened by a species that grows over the planet like a cancer? Would you condemn or redeem?
But what if they merely judged us as sinners to be enlightened?
I’m currently rereading The Humanoids by Jack Williamson. In that story, robots create the perfect Nanny State and treat us like children, keeping everything dangerous out of our hands. In many other science fiction stories, AI beings seek to sterilize Earth of biological beings, the way we exterminate rats and cockroaches.
What other possible stances could future AI minds take towards us?
Shouldn’t we consider making ourselves worthy before we create our evolutionary descendants? If intelligent machines will be the children of humanity, shouldn’t we become better parents first?
JWH
I’ve heard or read several possibilities:
Covert manipulation of human affairs.
Turning humans into their version of showdogs.
Complete bafflement as to why they should care one way or another about humans.
Deranged godhood.
I’m too lazy to even compose a list of how many articles and works of sf (from TV’s Person of Interest to a novel I just read, Lightless) I’ve casually seen about this in the last few months.
Imagine that machine intelligence, once created by humans, experiences a rapid increase in capabilities, putting those “robots” several orders of evolutionary magnitude beyond humans. Who’s to say they would not deal with humans as humans deal with ants and grub worms? I.e., creatures to be studied by specialists but whose demise in uncountable numbers is the inevitable, unlamented and unremarked result of simply doing what someone has decided needs to be done.
I can’t help thinking that AI is likely to be this generation’s Conquest of Space/Flying Cars, in terms of SF’nal expectations vs future reality. Sure, we’ll have better Expert Systems, but that is to AI as Google is to actual knowledge, or Obamacare to universal/guaranteed health care. Our whole current society is based on marketing, it’s our ultimate technology, but the magic of re-naming only works on people, not on the physical universe.
Speaking of SF, Williamson’s stories, like Asimov’s, are classics for a reason. Currently I’m watching the Swedish TV series “Real Humans” about synthetic-humans/androids, as well as the English remake “Humans”. I can recommend both series. The androids in question represent both simulated and real AI, and the shows are mostly about the varied interactions between them and true humans, including what happens when there is black-market removal of the androids’ “Asimov Restrictions”. In the last episode I watched, one AI android points out to another that they can’t exist outside of the matrix of human society, even though they might prefer to.
One thing I always find myself asking: why the drive to create androids, rather than just humanoid robots as shown on the Galaxy cover above? And the casting in the TV series makes for some other questions. Would there be a reason to create fat or middle-aged looking androids? And since they are essentially slaves/appliances, could anyone get away with making androids that look like blacks or other minorities? Would PC public relations limit androids to looking like white males?
Re your original question, “What other possible stances could future AI minds take towards us?” — assuming AI with robotic capabilities such that they could win a War of the Machines, I suppose there’s genetic engineering of humans in lieu of extermination.
I’ve vaguely heard about Humans, but not Real Humans. I’ve been anxious to find a new TV series. A friend and I started Mr. Robot the other night and it’s very good. It’s about a computer geek who gets involved with a radical group that’s something like Anonymous. I saw the other day that there are 350 scripted shows on TV. I keep thinking there must be many that I might like to see, but I just don’t hear that many positive recommendations. I’ll track down Humans.
I’m very impressed with The Humanoids. I’m less impressed with Asimov, except for The Naked Sun, which I really liked.
Now that’s an interesting take, PJ – to think AI will be our Final Frontier or Flying Car. Colonizing space and flying cars just aren’t practical. Exploring space is just way too expensive, and flying cars would create traffic nightmares. Imagine all the snafus that will happen when Amazon, FedEx and USPS all try to deliver packages by drones. On the other hand, AI won’t cost trillions, and other than the pesky problem that robots might want to take over and destroy us, they could be very practical and wealth generating.
If you configure News360.com to track stories on robots, artificial intelligence, machine intelligence and big data, you’ll get a daily feed on the progress towards the singularity. Until you see the steady pace of the stories, you probably won’t notice how fast we’re running towards creating AI.
I wouldn’t underestimate the cost of getting to true AI, if it is even a real possibility. As far as I know, at present we are unable to sufficiently define consciousness or intelligence except perhaps “you know it when you see it”, which is quite unreliable. We do know our memory, ratiocination and even perception are subject to a variety of distortions and are heavily mediated by emotion and other neuro-physical factors. What if consciousness itself is a function of our biochemical substrate? Is there any reason yet to believe otherwise? Where would that leave Machine AI? Anybody working on a “positronic brain”?
Side note — I’ve always believed (and still do) that the Earth is already overpopulated with humans. If nothing else, there’s only so much beachfront property to go around. However, I often wonder if the accelerated pace of scientific and technical knowledge is mostly due to sheer numbers — more brains at work.
By the way, PJ, thanks for the recommendation for Humans. Can you understand Swedish or French? Those were my options for buying and watching Real Humans.
I believe we’re automating more and more jobs that require intelligence. Robots aren’t just doing the grunt work. I think we’ll get to the singularity soon enough with the money we’re spending now. If you follow the machine intelligence articles, today we’re already where, two years ago, we predicted we’d be in 2020. By the time we get to 2020, we’ll be where we now think we’ll be in 2040 or 2050.
No, I’ve studied two or three other languages, but I’m still sadly monolingual. I’m finding the Swedish “Real Humans” very watchable with English subtitles though. Here’s one source for those (.srt files) if you can’t get a version with hardcoded subs —
http://www.opensubtitles.org/en/search2/sublanguageid-eng/moviename-real+humans
From what I’ve seen, the two versions of the show are somewhat different in characters, plot and emphasis, but both are pretty well done. More than just another buddy-cop show where one lead is supposedly a robot.
This stuff does give a lot of food for thought. Stories about transferring human consciousness into robot bodies go back at least to the 1930’s ( https://en.wikipedia.org/wiki/Neil_R._Jones ). If it were possible, and you had the choice, would you transfer into a human simulacrum or a purely robotic body that might have expanded capabilities but non-human appearance and experiences? Ever happen to read C. L. Moore’s “No Woman Born”?
https://www.sffchronicles.com/threads/4772/
I’d go for a purely robot body. Why become a fake human? Wouldn’t it be cool to be a space probe that goes poking around the solar system or galaxy?