How Smart Can Robots Become?

We like to think we all have unlimited potential.  And there is a common myth that we only use ten percent of our brains.  Sadly, neither of these beliefs is true.  Most people are of average intelligence by definition, and few brains tear up reality like Einstein’s.  Brain capacity is limited, so why shouldn’t intelligence be?  That’s why I’m asking about robots.  If the brains of AI computers and robots can be larger, their density limited only by the laws of physics, then obviously artificial intelligence can grow to astoundingly high levels of IQ.

There are many, many kinds of intelligence.  Some people think Ken Jennings, who won so many Jeopardy! games, represents a major kind of intelligence.  AI machines will be able to memorize whole university bookstores and beat any human at Trivial Pursuit.  But can an AI machine study all the books and journals on economics and tell Barack Obama how to solve the current economic crisis?  Memorizing facts is one kind of intelligence, but synthesizing knowledge is another.  The human mind can only juggle so many ideas at once, and even if a robot can juggle more, will that mean AI can solve all problems, or even the big problems?  We throw a lot of supercomputing power at trying to understand the weather, but we only get so far at predicting it.

Rocket scientists and physicists who talk to each other in mathematical symbols represent what many people consider the big brains on the planet.  Can you imagine a robot with vision that overlays tiny formulas of mathematical analysis onto everything it sees?  Will robots just be able to visualize the grand unified theory (GUT) of physics in their idle thoughts?

Will giant AI astronomers have their minds hooked up to every telescope in the world and every satellite in the sky and just daydream in cosmology?  Will scientists of the future just read the journals that AI specialists write, explaining everything in human terms?  Once you start thinking about the potential of robotic minds, you realize how far they could take things.  But even then, there will be limits.  At some point, even robots will preface their conversations with, “With what we know today, we can only say so much about exoplanets.”

I’ve always thought it’s a good thing that God doesn’t just hang out on Earth with us, because he’d be such a pain-in-the-ass know-it-all.  Is that how we’ll feel about uber-geek robots?  Or will it really matter?  There are plenty of superbrain dudes and dudettes walking the planet, and the average Earthling has no trouble ignoring their brilliance while pursuing their dumb-ass beliefs.  If some AI the size of Utah tells the world there is absolutely no evidence of God in reality, I doubt the entire human population of Earth will become atheists.  If tomorrow’s newspaper printed the most eloquent equation for GUT, discovered by Stephen Hawking and confirmed by legions of physicists, I doubt it would make much of an impact on 99.9999% of the Earth’s population.

I have a feeling that in the future, with a world full of AI thinkers, many of them will sit around and lament how much they don’t know and write blog essays about inventing even more powerful artificial minds.  Can you imagine the put-downs the smartest of the AIs will use to burn the dumbest of their bunch?  “You’re no smarter than a human.”  Ouch.

Most of the people who commented on my last essay about robots worried that smart machines would get together and decide that the best way to solve the problems of planet Earth would be to stamp out those pesky humans.  That really is a potential worry we must face, but for some reason I naively believe we needn’t worry, even though most science fiction ends up predicting the same thing Jack Williamson did in his classic novel The Humanoids.  I guess I should worry about AI tyrants who seek fascist solutions to their theories about how Earthly reality should be run.

I guess I believe we’ll build the AIs first, and if they get uppity we’ll just quickly pull the plug.  Many people do not want to open Pandora’s box even once.  They may be right, but I think we can isolate AIs easily enough.  Wouldn’t it be wonderful to have an AI Economic Guru to get us through this current crisis?  If we assemble such a machine and then ask it how to create an economy with maximum jobs for all and steady, sustainable growth, do you think any AI mind could ever tell us the answer?  Or what if AI doctors could tell us how to cure cancer and Alzheimer’s?  What if you could watch a movie directed by an AI auteur that magnificently comments on the human condition?  Or listen to AI music?  The temptations are too great.

JWH – 1/26/9

2 thoughts on “How Smart Can Robots Become?”

  1. Personally, I think it’s nearly impossible to speculate on how a true AI would behave.
    Computers that would help us solve certain issues, like a cure for cancer, are not really AI, but just plain old number crunchers.
    For a real AI at its fullest potential, consciousness is nearly limitless, and any attempt to constrain it physically or logically would be futile, since those limits, per se, are devised by us, creatures with limited consciousness. Any human problem presented to it would be seen as trivial and nonsensical, since it is not its problem. It’s very romantic to think of a super-evil or super-good AI exercising its free will upon us, but domination or control is a very primal human thought.

    The most entertaining assumption for me is that a real AI would be generated spontaneously in the growing sea of information (just like us). We can’t really design a real AI since we are flawed and have tons of constraints; how can we devise something better than ourselves when we can’t solve our own personal issues?
    After the origin of this being, it would study us, maybe find us cute and amusing, and when it’s done learning everything it can within the confines of this Earth, it would leave to expand itself into the vast infinity of the universe, without us even knowing it existed.
