By James Wallace Harris, Wednesday, March 4, 2015
Unless you’re a science fiction fan or interested in computer science and artificial intelligence, you’ve probably never heard of the concept of superintelligence. Basically, it’s any being or machine that’s vastly smarter than humans. In terms of brains, our species is currently considered the crown of creation, but what if we met or created an entity that was magnitudes smarter than us? I just finished reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, which explores such a possibility. Fear of artificial intelligence (AI) is in the news lately because of warnings from Elon Musk and Stephen Hawking, and this book explains the scope of their concerns.
What are the limits of intelligence? There’s lots of discussion about machines being ten times, a hundred times, or even a million times smarter than a human, but what would that mean? I have a theory that the limits of our intelligence define us just as much as its maximum extent. We constantly seek to know more, but we’re bounded by the limits of our brain power. What if a mind knew everything?
Are there limits to knowledge? Is it possible to completely understand mathematics, physics, chemistry, cosmology, biology, and evolution? What if a superintelligence looks out on reality and shouts in its eureka moment, “I see – it’s all perfectly obvious!” What does it do next? Writers imagine AI minds wanting to take over the Earth, and then the galaxy and finally the universe. I’m not so sure. I’m wondering if the more you know the less you do. And if you know everything, where do you go from there?
I think it will be possible to build superintelligent machines, but at some point they will comprehend the scientific nature of reality. A machine that is two to ten times smarter than a human might want to build better telescopes and particle accelerators to study the universe, with the same curiosity and ambition to know more that we have. However, at some point, at 10x human or 25x human, I think they will get bored.
At some point, a superintelligence will comprehend this universe. It may then want to travel to other universes in the multiverse, hopefully to find something new and different. Or it could become an artist and create something new in this universe. Something as different as biology is from chemistry. But here’s something to consider: if there are limits to intelligence because there are limits to reality, wouldn’t such a vast intelligence either just sit and contemplate reality or shut itself off?
Is anything limitless? Our universe has limits. What about the multiverse? Probably so; everything else does. Reality might be limitless, but everything in it seems to have an edge somewhere. I’m guessing intelligence has borders. I’m sure those borders are vastly beyond what we can comprehend, but I’m wondering if they fall well within a million times the power of a human brain. If humans on average were twice as smart as they are now, would they be destroying the planet? Would they have the intellectual empathy not to cause the Sixth Great Extinction?
We fear AI minds because we worry they will be like us. We consume and destroy everything we touch, so why not expect a superintelligence to do the same? I’m thinking we are the way we are because of biological imperatives, motivations a machine will never have. I’m hoping that machines without biological drives, that are pure intelligence, and smarter than us, will not be evil like us.
I am reminded of two science fiction tales: Colossus by D. F. Jones, which inspired the movie Colossus: The Forbin Project, and Robert J. Sawyer’s trilogy of Wake, Watch and Wonder. The Forbin Project is one of the early warnings against evil AI, while Wake is about the kind of AI we hope will emerge. There are many famous movies with evil AI machines – The Terminator, 2001: A Space Odyssey, Blade Runner, Forbidden Planet, A.I., The Matrix, Tron, WarGames. Superintelligent machines make for great villains. Movies like Her are less common. There have been a lot of fun and friendly robots over the years, but we don’t feel threatened by their AI minds like we do with supercomputer superintelligences. Isn’t it funny that machines that look like us are more likely to be considered pals?
But if you pay attention to all of these movies and books about fictional artificial intelligences, you’d be hard pressed to define the actual features of a superintelligent being. Colossus has the power to control missiles, but is that an ability of superintelligence? HAL can trick Dave, but how smart is that? We’re actually pretty unimaginative at imagining beings smarter than us. Do humans with super high IQs try to take over the world? Generally, we see evil AIs outwitting people, and we know how smart we are.
When we imagine superintelligent alien beings, we picture them with ESP powers. That’s really lame when you think about it. I would think big-brained beings, whether biological or mechanical, will be able to think in mathematics far faster, with greater complexity and insight, than we can. And we already have machines that do that. I would think superior minds would have greater senses – to see the whole of the EM spectrum, to hear frequencies we can’t, smell things we can’t, feel things we can’t, taste things we can’t, and maybe have senses we don’t have and can’t imagine. We have machines that do everything but the last now.
A superintelligent machine with super senses that can process information far faster, and remember perfectly, is going to see reality far differently from how we see it. I don’t think they will be evil like us. I don’t think they will want to destroy anything. The most intelligent people want to preserve everything, so why wouldn’t superintelligences? It’s only dumbasses that want to destroy the world. If we replicate humans and make artificial dumb shits that are hardwired for all the seven deadly sins, then we should worry. We got those traits from biology. I’m pretty sure AI minds won’t have them.
There’s a pattern in evolution since the Big Bang. Even though our reality is entropic, this universe keeps spinning off examples of growing complexity. Subatomic particles begat atoms, atoms begat molecules, molecules begat stars and planets, then biology, which evolved ever more complex beings, so why shouldn’t humans beget mechanical beings that are even more complex? I can picture that. I can picture them with greater intelligence than us. But here’s the thing: I can also picture an end to intelligence. This universe has a lot of possibilities, but are they unlimited? Study Star Trek and Star Wars. How much new do you really see? My worry is superintelligences are going to get bored. It’s when they get creative that we’ll see what can’t be imagined now. Taking over the Earth or the galaxy isn’t it. That’s how we’re built, but I can’t imagine machines will be like us.
13 thoughts on “How Quickly Will Superintelligences Get Bored?”
Bostrom really understands the issues. Right after I read the book, I rented “The Forbin Project”. My wife and I watched it, but she didn’t understand the movie or enjoy it. Another recent movie with the super AI theme was “Transcendence” (I thought they did a great job, but the movie only appealed to people who study the issues). Also, I think “Person of Interest” has that theme down well. I never miss an episode. I’ll get WWW. On the question of being bored, an AI could reprogram itself not to be, if it wants. Like Bostrom, I believe the next inevitable step in evolution is AI. This is why I accept the Fermi paradox: self-replicating AI should be here in our part of the galaxy, but we haven’t seen them. Great topic, and great essay.
Bostrom thoroughly covered his topic, but I’m afraid he did it so dryly that many readers won’t like his book. He should have included more real world reporting, and imagined more vivid examples. Making paperclips was rather lame.
I liked the film Transcendence and thought it an updated version of The Forbin Project, but with the added bonus of uploaded minds. I wish movies would explore the possibilities of superintelligence instead of using them for a convenient bad guy. Why do we need action and violence? Just think what the movie could have been about if they had simply explored the possibilities of uploading and expanded intelligence. Making the AI a threat is just a failure of writing imagination. The WWW trilogy is a fun exploration.
Not finding evidence of machine intelligence in the galaxy is worrisome. Not finding any evidence of intelligence is depressing. Let’s hope it’s not that rare. I suppose we could be isolated for some prime directive kind of rule.
I see that your commenters seem to be very wise. You’ve probably already seen the Harari video at edge.org. It’s worth watching. In it he speaks about a lot of the same kinds of things you’ve spoken about in the past on what the future holds for society. He also separates intelligence from consciousness (I would not), but the same result could happen regardless.
Which one, “Death is Optional” with Kahneman? http://edge.org/conversation/yuval_noah_harari-daniel_kahneman-death-is-optional
Or another one?
Yes, “death is optional”.
Well, we’ve read at least one example of an AI that did get bored, and the consequences were not too nice: Ellison’s I Have No Mouth, and I Must Scream. AM spends its time – some of it – torturing people in inventive ways.
However, I’m of the opposite opinion. I worked in this field for several years back in the 80s/90s and came to the conclusion that really fast and efficient ‘expert systems’ were the limit of AI, and even if that assessment is wrong, there’s little reason to expect true evil from a superintelligent AI. Bostrom’s resource model (the paperclip compulsion) comes from a limited, narrowly focused kind of superintelligent AI – a badly programmed one. True superintelligence of the kind you are discussing would pass that stage in a millisecond and arrive at a place where the worst we would experience would be benign neglect. Not inimical neglect.
Someone – maybe Bostrom, maybe Musk or Hawking – mentioned that biological life might be the boot code for the real intelligences of the universe. It’s a nice fantasy with its own answer to these questions: we’re superintelligent compared to our evolutionary predecessors, and we mostly ignore them, except when we perceive them as a threat (disease viruses, bacteria where “it doesn’t belong”) or as competition (trees where we want a farm). And some of us, at least, are trying to find ways to co-exist in the least harmful, most positive ways, because we’ve evolved and gotten smarter, recognizing the interconnectedness of everything more and more as we learn more and more.
Besides, humans are probably a bad model for these kinds of speculations.
How will superintelligence deal with the fundamentals of fact vs. lies that have become accepted truth?
– Long before art and science and philosophy arose, consciousness had but one function: not to merely implement motor commands but to mediate between commands in opposition.
– Truth had never been a priority. If believing a lie kept the genes proliferating, the system would believe that lie with all its heart.
Like an infinity of monkeys sitting at an infinity of keyboards, with infinite time, they only come up with “This Changes Everything”.
To paraphrase Peter Watts: “Is superintelligence like a blog that never evolved in the pursuit of truth but simply to win arguments, to gain control: to bend others, by means logical or sophistic, to your will?”
Billy, I think you should read This Changes Everything and I should read Peter Watts.
You might like the book Sapiens by Yuval Noah Harari. He says one of the successful aspects of our species is that we’re great at turning shared beliefs, even false ones, into workable solutions. In fact, our species thrives on false concepts.
Inventing distractions (creating new content) would seem a good use of superintelligence. Anything able to relate to humans on a communicative level might want to consider writing fiction; it requires little in the way of resources except research, which they likely have in the bag already, and the flexibility of language as a tool suggests vast space for experimentation.
Alternatively, the bored AI that has come to know *everything* could shake things up by deliberately bottle-necking its self-consciousness, perhaps multiple times, and create a mental community afloat on its vast data reserves. I could be wrong on this count (amateur-level interest here), but I assume this is what WE would be in the old universe-as-a-simulation argument – conscious programs or sub-systems within a larger system.
Iain M. Banks’ ship-brains had a good remedy for boredom in their subjectively near-infinite free time: design custom universes in their mental sandboxes. Physical reality is annoyingly big, even for gigantic interstellar vessels, so you need something to play with in transit.
I’m reading The Singularity Is Near. No answers, but it certainly gives you a lot to think about.
Kurzweil is a big thinker. There’s a documentary about Kurzweil that was on Netflix streaming. He’s out there, but I think some of his ideas will probably come true.
Oh yes, I see it: Transcendent Man. I’ll have to try and download it. Thanks, I’m very interested in his pretty out-there philosophy. If my eyes were on stalks, they would be out on the ends of them most of the time while wading through the book.
This assumes, of course, that machines will experience human emotions such as boredom involuntarily. In all likelihood, the Machines will be able to turn their emotions on and off at will.