Should We Give Our Jobs to Robots?

By James Wallace Harris, Wednesday, December 9, 2015

If you use the self-service checkout machines at grocery stores, you have effectively voted to give jobs to robots rather than people. We’ve been slowly passing our livelihoods to machines for decades. Guys used to pump our gas. Computers used to be women working at desks doing calculations. We poke at ATMs rather than chat with bank tellers. Taxes were prepared by accountants and bookkeepers, not programs. We bought music and books from clerks in stores. We used to have repairmen heal our gadgets; now we toss them as soon as they break and buy cheaper replacements. We purchase the mass-produced rather than the hand-crafted. Our factories used to employ millions, but capital moves manufacturing anywhere in the world where labor is cheapest. The next step is to automate those factories and get rid of even the cheapest workers. Even fast food jobs, the starter jobs for kids and the fallback for the unemployed, are about to be taken over by robots. Robots have begun to do the work of professionals, like lawyers and doctors, and they are getting smarter every day.

Most of us ignore all these trends because we focus on our personal lives. If you are planning your career, or living off retirement savings, it would be wise to read Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford. Automation is a disruptive technology that will impact jobs and savings. The book carefully details what’s been happening in the past, and warns of what will happen in the near future.

The Rise of the Robots - Martin Ford

Every day we decide to hire robots through our purchases. Every day we choose robots over people when we buy the cheapest products. Every day we side with capital over workers when we attack unions. Real wages have been dropping since the 1970s. Average household income has kept pace only because many households now have two earners. Many individuals work two jobs to keep up. The biggest employment sector is the service economy, which generally pays close to the minimum wage. There are two movements to watch: one to raise the minimum wage to $15 an hour, which benefits labor; the other to build robots to do those jobs, which benefits capital. Who will get those jobs in the future: humans or robots? If capital gets its way, it will be machines, because you want the cheapest hamburger and fries you can get.

Even though most people in the U.S. are labor, the vast majority sides with capital. For centuries there have been two forces at play in how humans make their living: labor and capital. To understand this, read Capital in the Twenty-First Century by Thomas Piketty, a very readable history. Anyone who wants to understand money and savings should read this book. There has always been a balance between workers and investors. Investors can’t create industries without labor, so labor had leverage in getting a fair share of the wealth. That leverage has weakened with automation. Capital is about to eliminate most labor costs by buying robots. And we’re letting it. Almost all wealth comes from consumers, and that’s a kind of voting bloc.

We accept automation and robots by buying goods and services made by machines. We do this because we want everything on the cheap. To understand where our natural drive for cheapness is leading us, read Cheap: The High Cost of Discount Culture by Ellen Ruppel Shell. We’ve been voting to eliminate people from their jobs since the development of the self-service grocery store.

As with climate change, overpopulation, mass extinction, wealth inequality and all the other major problems we face, we are the cause, and we have chosen our path even though we refuse to look where we’re going. We are giving our jobs to C-3PO. It’s a decision we’re making, although most people don’t know it.

To better understand what I’m saying, read these three books. All are easy to read, and entertaining in their presentation of history and facts. We need to stop wasting so much time on escapist entertainment and look at what’s coming. I’m a lifelong science fiction fan, and was a computer programmer. I love robots and artificial intelligence. I want us to invent far-out robots that do things humans can’t do, but I don’t want robots taking jobs that humans can do, and need to do.

Civilization is breaking down in countries around the world where young people have no jobs and few prospects. It’s the cause of terrorism. A stable society needs to have most people working, even at jobs a machine could do.

Essay #988 – Table of Contents

Our Fantasy For Interstellar Travel is Dying

For over 50 years I’ve been reading science fiction hoping humanity will someday travel to the stars and settle other planets. Obviously other people do too; just witness the frenzy behind the new Star Wars movie, which opens on the 18th. Galactic empire stories are the new locale for big sword and sorcery epics. (Isn’t it bizarre that both are enamored with aristocracy?) What deep-rooted drive makes us want to colonize distant lands? Why are we enchanted by alien landscapes, strange superior beings and their surreal cultures?

Of course, the film Avatar probably reveals our true intentions. We’d do to other worlds what we’ve done to ours.

A Heritage of Stars - Clifford Simak

I just finished A Heritage of Stars by Clifford D. Simak, which questions our desire for interstellar travel. It was published back in 1977. A Heritage of Stars is a quaint little book, not particularly good unless you relish 1950s-style science fiction, in which Simak, in his seventies, questions many of the tropes of our genre. The same questioning was evident in Aurora, Kim Stanley Robinson’s latest novel. Both Simak and Robinson wonder at the wisdom of traveling to the stars. The distances are beyond fantastic, almost beyond comprehension. Characters in Star Wars zoom between planetary systems quicker than we travel between cities on Earth in our jet airliners. That premise strains the boundaries of absurdity. It’s only slightly less delusional than thinking we can travel to other worlds by dying.

Aurora - Kim Stanley Robinson; City - Clifford Simak

Simak covers many of the most famous themes of science fiction in A Heritage of Stars. The setting is far-future Earth, a thousand years after the collapse of a great technological civilization that went to the stars and built intelligent robots. In some ways, it’s a variation on Simak’s classic City. America is now a post-apocalyptic landscape of roving tribes who collect the heads of robots for ceremonial voodoo. They are primitive people who can’t conceive of space travel or intelligent machines. The story is about a young man named Cushing who takes shelter in a walled town built around a former university. Cushing learns to read, discovering that humans used to be great. Cushing eventually finds mysterious references to “Place of Going to the Stars” and sets out on a quest to find it. Much like in an L. Frank Baum Oz book, Cushing gathers along the way a motley assortment of strange characters who take up his quest too: a witch, a surviving robot, a horse, a man who talks to trees, and an autistic-like girl who can commune with the transcendental.

Along the way, Simak’s characters discover what happened to mankind, which allows Simak to philosophize about why we wanted to go to the stars. Simak also wonders if mankind is smart enough to survive its addiction to technology. Even forty years ago Simak realized that interstellar travel isn’t very practical, questioning his science fictional roots. Had Simak given up on the Final Frontier dream because he was getting old? He was in his mid-seventies at the time. I’m in my mid-sixties and I too have given up on colonizing distant worlds. Does getting older make us realize our childhood fantasies have no foundation in reality?

Earth Abides - George R. Stewart; The World Without Us - Alan Weisman

Science fiction is mostly high-tech fantasy that reveals the same impulses humans have always shown. This world and this life don’t seem to be enough for us. We want more. But the reality appears to be that this life and planet are all we’ll ever have. As in many other science fiction stories, Simak wonders if the future of humanity will be one where we give up technology and live nomadic lives, much like how Homo sapiens lived for its first two hundred thousand years of existence. I can’t help but believe Simak was greatly influenced by Earth Abides by George R. Stewart. And I believe Simak would have been blown away by Alan Weisman’s The World Without Us, a philosophical thought experiment that wonders what Earth would be like if humans just disappeared.

Shouldn’t we psychoanalyze why science fiction’s two strongest themes are space travel and the post-apocalypse? Why are galactic empires always suffering collapse and revolutions? Isn’t it rather telling that our favorite fantasies feature feudal governments and primitive weapons? The heroes of Star Wars fight with swords made of light. Is the reason conservatives want smaller governments that they don’t have the genes to imagine large ones?

Childhood’s End - Arthur C. Clarke; More Than Human - Theodore Sturgeon

Strangely, Simak reveals a problem that NASA wouldn’t discover until years later. Namely, we can collect the data, even store the data, but we won’t always be able to access the data. One of the conundrums Cushing and his crew face is that humans went to the stars, but what they discovered is locked up in technology their post-apocalyptic world can’t access. I felt let down by Simak’s solution. Let’s just say Simak’s answer to humanity’s failures is the discovery of supernatural powers. That was a common theme in 1950s science fiction, especially Arthur C. Clarke’s Childhood’s End and its 1960s retelling, 2001: A Space Odyssey. Theodore Sturgeon was never much of a technological science fiction writer, and went right for the ESP solution in More Than Human. Even the hard-science Heinlein hoped humans would discover magical powers. I guess they all grew up reading Oz books.

I feel let down by Simak, although I enjoyed A Heritage of Stars well enough. I believe he ends his story with false hope. Simak believes humanity can keep trying until it gets it right. Yet he doesn’t attempt to describe what getting it right might be. Not long ago I read a passage about Neanderthals that shook me up. It stated that for the entire length of their long species’ lifetime, Neanderthals never showed any progress after achieving a certain level of development with their stone tools. For hundreds of thousands of years they made the same tools. We Homo sapiens feel superior because we’re quite dazzling with our technological innovation. However, I’m not sure we aren’t like the Neanderthals, in that we’ve continued to follow the same emotional and psychological patterns for the last two hundred thousand years. We can’t get away from our Old Testament mindset, and without technology we’d all live pretty much like North American tribal people before the advent of Western invaders, or the people who lived on the Russian steppes and spoke the language that inspired all the Indo-European languages.

Kim Stanley Robinson has a much more sophisticated lesson about why we won’t be colonizing planets orbiting distant suns in his book Aurora. We are adapted to our biosphere. It’s extremely complex and interrelated. It’s extremely doubtful, even if we could travel the distance to another stellar system, that we could integrate into another biosphere. Humans were made for this planet and this biological landscape. We could probably export our biosphere to other barren planets if the conditions were right, but even that is doubtful.

Simak doesn’t give much focus to the intelligent machines of his story, but I’m guessing artificial intelligence has more potential validity than any other theme science fiction explores. Simak points out that robots are the true species for interstellar travel. If Star Wars were realistic, galactic empires would be governed and populated by C-3POs and R2-D2s. Biological creatures would always stay on the planets of their origins, comfortably bound to their biospheres.

Simak wrote A Heritage of Stars near the end of his life, probably speculating about what will happen to humanity after his death, and revealing a certain level of age-related pessimism about the future. I don’t know if he was aware of environmental catastrophes—he seemed to fear our mishandling of technology. Forty years later, our race doesn’t seem any wiser, but it does seem more suicidal.

More and more, I’m becoming an atheist to the religion I grew up with, science fiction. It’s not that I’m going to stop reading science fiction, but I no longer believe it. I study science fiction like many former believers still study The Bible. Both The Bible and science fiction reveal our deepest inner hopes. For some reason humans want to go to Heaven or Alpha Centauri. We need to understand why, and also need to understand why we’re turning our own biosphere into Hell.

Essay #984 – Table of Contents

Would You Nap in a Self-Driving Car?

By James Wallace Harris, Tuesday, July 14, 2015

It won’t be long before self-driving cars will be common. First tested in California, they’re now being let loose in Texas. It’s doubtful you’ll see one real soon, but maybe by 2020 or 2025. Since I easily remember a time before smartphones, that will be fast enough. We’ll go through a phase where regular cars will get more and more auto-pilot features, but sooner or later we’ll have cars without steering wheels.


But I wonder how many people will feel safe in such cars. It sounds a bit creepy to me. But what if they turn out to be perfectly safe? Would you feel comfortable enough to take a nap while zooming down the expressway? Would you send your kids off to school without going with them? In another twenty years I’ll be reaching an age where I should give up my keys – self-driving cars might extend my years of autonomy.

Will we reach a time where a human driving a car will scare us?

How will you feel seeing cars tooling down the highway with no people in them? It might be practical to go to work and tell the car to go home so another family member could use it. It might be possible to have taxis, Uber and Lyft vehicles roaming the roads without drivers.

I can remember a time before cellphones, personal computers, the internet, and a bunch of other technological marvels. I’m not that old at 63, but I’m reaching an age where so much change is wearisome. I remember talking to my grandmother, who was born in 1881, about her life before cars, planes, radios and televisions. I’m sure she met old folks who remembered times before telegraphs and steam engines. Before these sped-up centuries our species often went hundreds or thousands of years without much change. Neanderthals went tens of thousands, even hundreds of thousands, of years without much change.

I wonder why everything has sped up so much lately. Will things ever slow down again?

We need to expect other kinds of changes, more than the constant change of gadgets. Imagine economic and social changes. If cars are smart enough to drive themselves, why should we own them? Why not let them seek their own most efficient utilization? If we combined ride sharing with robotic cars, we’d drastically change the whole economy, and maybe help the environment.

Yet, that will put a lot of people out of work. Are we really sure we want the future we’re rushing into?

JWH

Should Robots Be A Major Political Issue in 2016?

By James Wallace Harris, Sunday, July 12, 2015

We need to decide if we really want robots. Why are we working so diligently to build our own replacements? We need to decide before it’s too late.


As Democrats and Republicans declare themselves candidates for president in 2016, they each scope out issues they hope will define their electability. Donald Trump has gotten massive free PR by making very ugly statements about immigration. Bernie Sanders is staking claims around fair income and wealth inequality. None of the candidates have focused on what I consider the defining issue for the next president—climate change. However, I’m discovering a growing number of reports about automation, robots and artificial intelligence that make me wonder if robots shouldn’t be second only to climate change on the 2016 party platforms.

Climate change, automation and wealth inequality are all interrelated. Illegal immigration is a minor issue in comparison. In fact, most of what the current crop of candidates focus on are old, moldy issues that are far from vital to our country. The 2016 election will define our focus until 2020, or even 2024. We’re well into the 21st century, so it’s past time to forget about 20th-century issues.

If you doubt me, read “A World Without Work” from the latest issue of Atlantic Monthly. Derek Thompson does a precise job of stating his case, so I won’t repeat it. Let’s just say that between automation and wealth inequality, there are going to be a lot of people without jobs, and the middle class will continue to shrink at an even faster rate. Bernie Sanders’ political sniffer is following the right trail, the one that will impact the most voters. Reporters should trail Sanders and not go panting after Trump. Follow smart people, not fools.

Another way to grasp the impact of the robot revolution is to sign up for News360.com and follow the topics robots, manufacturing automation, machine learning, natural language processing and artificial intelligence. Over time you’ll get my point. Our society is racing to create intelligent machines. I’m all for it, but I’m a science fiction geek. If we don’t want to end up like the Neanderthals, we should think seriously about evolving into Homo roboticus. Being #2 in the IQ rankings will suck. But then, if we embrace plutocracy and xenophobia, maybe we deserve to be replaced by AI machines.

If all of this is too much trouble, and you just want to learn through the emotional catharsis of fiction, watch the new TV show Humans on AMC. The show covers all the major robot issues, sometimes in subtle ways, so spend some time thinking about the individual scenes in this show. Humans is very creative. Then start flipping the channels and pay attention to how often robots and AI come up in other shows. It’s like all the water is rushing away from the shorelines and we need to worry about when the tsunami will hit us.

JWH

Appeasing Our Future AI Descendants

By James Wallace Harris, Saturday, July 10, 2015

There’s a famous cartoon where scientists ask a supercomputer, “Is there a God?” And the machine replies, “There is now.” Humans need to get their act together before we face the judgment of AI minds. In recent months, many famous people have expressed their fears of the coming singularity, the event in history where machines surpass the intelligence of humans. These anxious prophets assume machines will wipe us out, like terminators. Paranoia runs deep when it comes to predicting the motives of superior beings.

Let’s extrapolate a different fate. What if machines don’t want to wipe us out? Most of our fear of artificial intelligence comes from assuming machines will be like us—and will want to conquer and destroy. What if they are like the famous spiritual and philosophical figures of the past—forgiving and teaching? What if they are more like Gandhi and less like Stalin? What if their vast knowledge and thinking power lets them see that Homo sapiens are destroying the planet, killing each other, and endangering all other species? Instead of destroying us, what if AI minds want to save us? If you were a vastly superior being, wouldn’t you be threatened by a species that grows over the planet like a cancer? Would you condemn or redeem?

But what if they merely judged us as sinners to be enlightened?

The Humanoids - Jack Williamson (EMSH)

I’m currently rereading The Humanoids by Jack Williamson. In this story robots create the perfect Nanny State and treat us like children, keeping everything dangerous out of our hands. In many other science fiction stories, AI beings seek to sterilize Earth of biological beings, the way we exterminate rats and cockroaches.

What other possible stances could future AI minds take towards us?

Shouldn’t we consider making ourselves worthy before we create our evolutionary descendants? If intelligent machines will be the children of humanity, shouldn’t we become better parents first?

JWH

Why Did The Robot in Ex Machina Look Like a Beautiful Woman?

By James Wallace Harris, Thursday, April 30, 2015

Ex Machina is a 2015 British science fiction film about artificial intelligence (AI) written and directed by Alex Garland. The story is about a billionaire who connives to have a brilliant programmer come to a secret location to Turing test a robot prototype. Oscar Isaac plays Nathan Bateman, the billionaire, Domhnall Gleeson plays Caleb Smith, the programmer, and Alicia Vikander plays Ava, the AI robot. The film has little action but is quite thrilling. And I’m overjoyed to have a science fiction movie without silly macho weapons, fantasy feats of martial arts, and cartoonish battles to save the world.

Ex Machina asks, like computer scientists have been asking for the last sixty years, and philosophers for the last 2,500 years, what makes us human? Once we understood how evolution shaped life, we knew that whatever qualities make us different from animals should explain our humanity. Artificial intelligence seeks to reproduce those qualities in a machine. We have yet to define and understand what makes us human, and robot engineers are far from building machines that demonstrate humanness.

Although I’m going to be asking a lot of questions about Ex Machina, my questions aren’t meant to be criticisms. Ex Machina entices its audience to think very hard about the nature of artificial intelligence. I hope it makes people think even more about the movie, like I’m doing here.


The main idea I want to explore is why the robot had a female form. The obvious answer is that moviegoers find sexy females appealing. But is looking human the same as being human? AI scientists have always wondered if they could build a machine that average people couldn’t distinguish from a human, but they always planned to run the tests so Turing testers couldn’t see the humans or the machines. However, in movies and books, we get to see the machine beings. Adding looks to the equation makes things more complicated.

Because so many robot engineers and storytellers make their robots look like human females, we have to ask:

Would Ex Machina have the same impact if the robot had a human male shape or non-human shape?

Is the female body the ultimate human form in our minds? In a movie that explores whether a machine can have a self-aware conscious mind, isn’t it cheating to make it look just like a human? Since we judge books by their covers, wouldn’t most people think a mechanical being that looks and acts exactly like a beautiful woman is human? By the way, I can’t wait to see how feminists analyze this film. Imagine seeing this movie a different way: instead of asking if robots have souls, the film could be asking if women have souls. In the theater, we could also see two extremely intelligent men testing whether a beautiful woman is their equal.

By making the robots female, the filmmakers both confuse the machine intelligence issue, and add a layer of gender issues. It also shoves us into the Philip K. Dick headspace of wondering about our own nature. Is everyone you know equal to you? Do they think just like you? Do they feel just like you? Could some people we know be machines? What makes us different from a machine or animal? In the book Blade Runner was based on, Do Androids Dream of Electric Sheep?, Dick was comparing soulless humans to machines with his androids. Machines are his metaphor for people without empathy.

If the two scientists had been played by actresses, and the robot by a sexy actor, how would we have interpreted the movie differently? A bookshelf of dissertations could be written on that question. What are the Freudian implications of us wanting the robots to look like beautiful young women? How would society react if scientists really could build artificial minds and bodies, manufacturing millions of beautiful female sexbots that would have to integrate into our society? Of course, many humans would immediately try to fuck them. But if AI machines looked like people, why should they act like people? Guys will screw blowup dolls now – is a vaguely woman-shaped piece of plastic all it takes to fool those men into replacing real women?

How would audiences have reacted if the robots of Ex Machina looked like giant mechanical insects?

Ex Machina explores many of the questions AI scientists are still puzzling over. Personally, I think it confuses the issue for us to build intelligent machines to look like us. Yes, our minds are the gold standard by which we measure artificial intelligence, but do they need bodies that match ours?

If the robot in Ex Machina had looked like a giant metal insect, would the audience ever have believed it was equal to a human? We think Ava is a person right from the first time we see her. Even though it’s obvious she has a machine body, her face is so human we never think of her as a machine. This is the main flaw of the film. I understand it’s cheaper to have humans play android robots than to build real robots, and people-powered robots look too fake, but in the end, anything that looks human will always feel human to the audience. Can we ever have a fair Turing Test with a creature that looks like us?

We don’t want to believe that computers can be self-aware conscious beings. Actually, I think this film would have been many magnitudes more powerful if its robot had looked like a giant mechanical insect, had a non-gender-specific name, and convinced us to feel it was intelligent, willful, self-aware, feeling, and growing. Which is what happened in Short Circuit (1986) with its robot Johnny Five.

The trouble is we equate true artificial intelligence with being equal to humans. Artificial Intelligence is turning out to be a bad label for the concept. Computers that play chess exhibit artificial intelligence. Computers that recognize faces exhibit artificial intelligence. Computers that drive cars exhibit artificial intelligence. We’ll eventually be able to build machines that can do everything we can, but will they be equal to us?

What we were shown are artificial people, and what the film was really asking is:

Is it possible to create artificial souls?

Creating an artificial human body is a different goal than creating an artificial soul. We have too many humans on this planet now, so why find another way of manufacturing them? What we really want to do is create artificial beings that have souls and are better than us. That’s the real goal, even though most people are terrified at the idea.

Alan Turing invented the Imitation Game that we now call the Turing Test, but the original Turing Test might not be sufficient to identify artificial souls. We’re not even sure all people have souls of equal scope. Are the men of ISIS equal in compassion to the people who win the Nobel Peace Prize? We can probably create robots that kill other humans by distinguishing sectarian affiliations, but it’s doubtful we could create a robot that works to solve the Earth’s problems with compassion. If we did, wouldn’t you think it had a soul? If we created an expert system that solved climate change, would it only be very intelligent, or would it have to have a soul?

In the end, I believe we can invent machines that can do anything we can. Eventually they will do things better, and do things we can’t. But will they have what we have, that sense of being alive? What would a machine have to do to reveal it had an artificial soul?

Can a machine have a soul?

In the course of the movie, we’re asked to believe that if a robot likes a human, that might mean it is humanlike. Eventually, we’re also led to ask: if a robot hates a human, does that make it human too? Are love and hate our definition of having souls? Is it compassion? Empathy? We’ll eventually create a computer that can derive all the laws of physics. But if a machine can recreate the work of Einstein, does that make it equal to Einstein?

Ex Machina is sophisticated enough to make its audience ask some very discerning questions about AI minds. Why did Alex Garland make Ava female? Across the globe, robot engineers and sex toy manufacturers are working to build lifelike robots that look like sexy women. The idea of a sexbot has been around for decades. Are super-nerds building fembots to replace the real women they can’t find through Match.com? If men could buy or rent artificial women to make their sexual fantasies come true, would they ever bother getting to know real women? Why does Nathan really build Ava?

Caleb falls for Ava. We all fall for Ava. But is that all we’re interested in – looks? If Caleb thinks Ava is a machine, especially one with no conscious mind, he will not care for her. But how much do Ava’s looks fool Caleb? How much are we fooled by other people’s looks anyway? If you fall in love with a beautiful woman just because of looks, does that justify thinking you’re in love with her?

We’re all programmed at a deeply genetic level to be social, to seek out at least one other person to bond with and develop a deeper communication. What Ex Machina explores is what features beyond the body we need to make a connection. A new version of the Turing Test could be one in which we offer people the friendship of humans or the friendship of machines. If a majority of people start preferring to hang out with AI beings, that might indicate we’ve succeeded – but again, it might not. Many people find pets to be suitable substitutes for human companionship. I’m worried that if we gave most young men the option to marry sexbots, they might. I also picture them keeping their artificial women in a closet and only getting them out to play with for very short periods of time. Would male friends and female robots fulfill all their social needs?

Ex Machina is supposed to make us ask what is human, but I worry about how many males left the theater wishing they could trade in their girlfriend or wife for Ava. So is Ex Machina also asking if society will accept sexbots? Is that why Ava had a human female body?

Table of Contents

By 2020 Robots Will Be Able to Do Most People’s Jobs

By James Wallace Harris, Wednesday, December 17, 2014

People commonly accept that robots are replacing humans at manual labor, but think they will never replace us at mental labor, believing that our brain power and creativity are exclusive to biological beings. Think again. Watch this video from Jeremy Howard; it will be worth the twenty minutes it costs you. It’s one of the most impactful TED Talks I’ve seen.

What Howard is reporting on is machine learning, especially Deep Learning. Humans could never program machines to think, but what if machines learn to think through interaction with reality – like we do?

But just before I watched that TED Talk, I came across this article, “It’s Happening: Robots May Be The Creative Artists of the Future,” over at MakeUseOf. Brad Merrill reviews robots that write essays, compose music, paint pictures and learn to see. Here’s the thing: up till now, we’ve thought of robots as doing physical tasks programmed by humans. We picture human minds analyzing all the possible steps in a task, and then creating algorithms in a computer language to get the computers to do jobs we don’t want to do. But could we ever tell a computer, “Compose me a melody!” without defining all the steps?

The example Jeremy Howard gives of machine learning is Arthur Samuel teaching a computer to play checkers. Instead of programming all the possible moves and game strategy, Samuel programmed the computer to play checkers against itself and to learn the game through experience – he programmed a learning method. That was a long time ago. We’re now teaching computers to see by giving them millions of photographs to analyze, and then helping them learn the common names for the distinctive objects they detect. Sort of like what we do with kids when they point to a dog.
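To make that learning-method idea concrete, here is a minimal sketch in Python – not Samuel’s actual checkers program, and not anything from Howard’s talk, just an illustration of the same self-play principle on the much smaller game of tic-tac-toe. Every name and number in it (the value table, the ALPHA and EPSILON settings, the 20,000 games) is my own invention for the example.

```python
# A minimal sketch of the self-play idea (not Samuel's actual checkers
# program): a tabular value learner that improves at tic-tac-toe purely
# by playing games against itself. All names and numbers are illustrative.
import random

values = {}                 # board state (a 9-cell tuple) -> estimated value for 'X'
ALPHA, EPSILON = 0.2, 0.1   # learning rate and exploration rate (arbitrary choices)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X', 'O', 'draw', or None if the game isn't over."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

def value(board):
    """Estimated chance that 'X' wins from this position."""
    w = winner(board)
    if w == 'X':
        return 1.0
    if w == 'O':
        return 0.0
    if w == 'draw':
        return 0.5
    return values.get(board, 0.5)

def choose(board, player):
    """Pick a move greedily from the value table, exploring occasionally."""
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if random.random() < EPSILON:
        return random.choice(moves)
    scored = [(value(board[:i] + (player,) + board[i + 1:]), i) for i in moves]
    return (max(scored) if player == 'X' else min(scored))[1]

def play_one_game():
    """Play one self-play game, then nudge every visited state toward the outcome."""
    board, player, history = (' ',) * 9, 'X', []
    while winner(board) is None:
        i = choose(board, player)
        board = board[:i] + (player,) + board[i + 1:]
        history.append(board)
        player = 'O' if player == 'X' else 'X'
    target = value(board)                      # 1.0 if X won, 0.0 if O won, 0.5 draw
    for state in reversed(history):
        old = values.get(state, 0.5)
        values[state] = old + ALPHA * (target - old)
        target = values[state]                 # back the estimate up the game

if __name__ == '__main__':
    for _ in range(20000):                     # the "experience": thousands of games
        play_one_game()
    print('positions evaluated:', len(values))
```

The program is never told what a good move looks like; after enough self-play games the value table alone should steer it toward sensible play. That is the distinction Howard draws between programming every step and programming a way to learn.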

What has kept robots in factories doing grunt work is that they can’t see and hear like we do, or understand language and talk like people. What’s happening in computer science right now is that researchers can get computers to do each of these things separately, and are close to building machines that combine all these human-like abilities into one system. How many humans will McDonald’s hire to take orders when they have a machine that listens and talks to customers and works 24x7x365 with no breaks? As Howard points out, 80% of the workforce in most industrialized countries are service workers. What happens when machines can do service work cheaper than humans?

Corporations are out to make money. If they can find any way to do something cheaper, they will, and one of the biggest ways to eliminate overhead is to get rid of humans. Greed is the driving force of our economy and politics. We will not stop or outlaw automation. Over at io9, they offer “12 Reasons Robots Will Always Have An Advantage Over Humans.”

Now, I’m not even saying we should stop all of this. I doubt we could anyway. I’m saying we need to learn to adapt to living with machines. A good example is playing chess. Machines can already beat humans, so why keep playing chess? But if you combined humans and chess machines to play as teams against other teams, who would win? Read “The Chess Master and the Computer” by Garry Kasparov over at The New York Review of Books. In a 2005 free-for-all match, it wasn’t Grand Masters with supercomputers that won, but two so-so amateur human players using three regular computers. As Howard points out, humans without medical experience are using Deep Learning programs to analyze medical scans and diagnose cancers as well as or better than experienced doctors.


When Jeremy Howard talks about Deep Learning algorithms, I wish I had a machine that could read the internet for me and process thousands of articles to help me write essays. Then I could say to my computer, “Find me 12 computer programs that paint artistically, and links to their artwork,” and I wouldn’t have to do all the grunt work with Google myself. For example, it should find Harold Cohen’s AI artist, AARON. I found that with a little effort, but who else is working in this area around the world? Finding that out would take a good bit of work, which I’d like to offload.

Imagine the science fiction novel I could write with the aid of an intelligent machine. I think we’re getting close to the point when computers can be research assistants, yet in five or ten years they may not need us at all, and could write their own science fiction novels. Will a computer program win the Hugo Award for best novel someday? And after that, human and machine co-authors might write an even more thrilling novel of wonder.

JWH