By James Wallace Harris, Thursday, January 22, 2015
Futurists and science fiction writers talk about The Singularity – the moment when Artificial Intelligence (AI) catches up with human intelligence. Of course, they also assume that right after that moment, AI minds will blow past us, leaving humans feeling like we're standing still.
I think it will look something like DeepMind in this video, which shows a program that knows how to learn. Google bought DeepMind last year. DeepMind's work is part of Deep Learning, which used to be called Neural Networks, one of the many branches of AI.
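The old "Neural Networks" name points at the core idea: instead of programming behavior, you build small units that adjust themselves from examples. Here's a minimal sketch of that idea – a single artificial neuron learning the AND function with the classic perceptron rule. All the names are illustrative; real deep-learning systems stack many such units in layers and train them with backpropagation, but the adjust-weights-from-errors loop is the same in spirit.

```python
def train_neuron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule on (inputs, target) pairs."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # step activation: fire if the weighted sum exceeds zero
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # nudge weights in the direction that reduces the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The neuron is never told what AND means; it only sees examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(and_samples)
```

After training, `predict(w, b, 1, 1)` returns 1 and the other three inputs return 0 – the unit has learned the rule from data alone.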
You really should take the time to watch this video – it will be worth it.
After 60 minutes of practice it does pretty well, returning the Breakout ball about 30% of the time, though a human could still beat it. At 120 minutes DeepMind is better than any human, and freakishly fast. Then they let it practice for 240 minutes, and it becomes freakishly clever. Then watch the algorithm play other Atari games. It's not programmed to play each of these games; it learns by studying the pixels for patterns.
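The trick is reward-driven trial and error: nobody tells the program the rules, it just gets points for good outcomes. DeepMind's actual system feeds the raw pixels through a deep neural network, but the learning core is the Q-learning family of algorithms. Here's a toy sketch of tabular Q-learning on a made-up five-state corridor (reach the right end, get a reward) – everything here is a simplified stand-in, not DeepMind's code:

```python
import random

random.seed(0)

N = 5               # states 0..4; reaching state 4 ends the episode
ACTIONS = (-1, +1)  # move left or move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N - 1:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)      # explore sometimes
        else:
            # otherwise take the best-known action (random tie-break)
            a = max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))
        s2 = min(max(s + a, 0), N - 1)      # walls at both ends
        r = 1.0 if s2 == N - 1 else 0.0     # reward only at the goal
        # learn from the reward plus the estimated value of what comes next
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# the learned policy: best action in each non-terminal state
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
```

After 200 episodes of blundering around, `policy` is `[1, 1, 1, 1]` – always move right – discovered purely from the reward signal, which is exactly the eerie quality the video shows at game scale.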
Here’s another approach, but the video is longer, explaining how it’s done. This is a grad student inventing his own system. It doesn’t have the WOW factor of the first video unless you watch it all the way through. This video is more about thinking through how to develop a learning algorithm.
If you’re thinking this is only video games, think again. Watch this TED Talk with Jeremy Howard. It’s about how Deep Learning can be applied to many fields of study.
There are other approaches to learning. Here’s an example of Adaptive Learning.
What happens when Mario becomes self-aware? Will he develop his own ontological theories about his video game world?
Many futurists are predicting The Singularity will arrive in the late 2020s or 2030s. That’s not far away. And, if you look at what’s happening now, we’re nowhere close to having a self-aware artificial mind. What we do have is a lot of pieces: robots that imitate various kinds of animals and human bodies, self-driving cars, speech recognition, text recognition, artificial sight, and so on. As we put these things together, and add programs like Deep Learning and other pattern recognition systems, it’s not hard to believe The Singularity is just around the corner.
There are folks like Elon Musk and Stephen Hawking who are preaching sky-is-falling warnings about AI. I don’t feel that paranoia, but AI will transform society like the Industrial Revolution or our present Digital Revolution. The thing is, we never stop progress. That’s why I think The Singularity will happen no matter what.
4 thoughts on “Signs of the Singularity”
O, that’s nonsense.
We are nowhere near true AI, and won’t achieve it for at least 50 or more years.
The problem isn’t the hardware, it’s the software as we have no idea how to program truly complex things. One can optimize a game or some such thing with a very limited set of conditions and possible responses. As soon as a set of conditions & responses starts to grow, we get exponentially growing complexity with no available algorithms to arrive at the myriad of possible solutions. So, forget it.
And don’t listen to futurologists – they have never (and I mean NEVER) been able to predict anything.
But that’s the point of Deep Learning. We don’t program intelligence, we design small units that learn, and put them in environments where they learn on their own. I think we’re in an exponential growth curve. I’m hearing about things that only two years ago people didn’t think would be happening for several years. And even if you’re right Janis, 50 years isn’t that long a time. I remember things 50 years ago. Unfortunately, I don’t think I’ll live another 50 years.
Progress? Think about the last time you needed some customer service and had to deal with a phone tree AI instead of a human rep. Or with a script-monkey in India or Brazil, which is often the functional equivalent of a phone tree, even though the hardware/software package is much more capable. Hardware and algorithms aren’t nearly as important as the framing of system goals and constraints.
SF writers and singularity fabulists often talk as if putative AI would be self-directed and unconstrained, but where does that happen in the real world? Never mind whether we can actually achieve real improvements in AI anytime soon, the significant issue will be who is setting the goals and parameters of said AI. Luddites weren’t actually anti-progress, you know; they were just fighting for their personal survival in an unjust economic system.
“The future is already here — it’s just not very evenly distributed.” — attributed to William Gibson
Actually PJ, I’ve been thinking the phone tree stuff has been getting better. And it should get much better. I feel sorry for the people working customer service desks. Helping people remotely is a hard task.
The Gibson quote is great. What did you think of the videos? Didn’t the machine learning the games feel eerie? I used to love playing Breakout, and seeing the machine do what it took me days to learn was fascinating. But here’s the kicker. I never consciously learned to play the game, which is exactly how the machine is doing it. I could consciously comment on what I was learning, but playing was automatic, happening at a deeper level in my brain.