Predicting the Future: 2065

by James Wallace Harris, Friday, October 25, 2019

This week’s NOVA “Look Who’s Driving” is about self-driving cars. Most people are scared of the idea of getting into a car and letting it drive. I know I am, and I’m a science fiction fan. Just think about it for a moment. Doesn’t it feel super eerie? On the other hand, what if they could actually make driverless cars 100% safe? I’m getting old and realize at a certain point it will be dangerous for me and others if I keep driving. A driverless car would be perfect for older folks, and by 2065 there will be a lot of old folks. And in the documentary, they mentioned that driverless cars should mean fewer cars and they showed aerial views of how cars cover our city landscapes now. Imagine a world with far fewer cars and parking lots. That would be nice too.

I’m sure folks in the late 19th century felt scared of the idea of giving up horses and switching to motor vehicles. And can you imagine how people felt about flying when aviation was first predicted for the future? Driverless cars have come close to perfect safety in just ten years, so imagine how reliable they will be in another ten.

I’m working on a science fiction short story set in 2065 and trying to imagine what life might be like then. I assume war and poverty will still be with us, but there will be as much change between now and 2065 as there was from 1965 to now. I have to assume driverless cars will transform our society.

We feel dazzled by progress. And we feel it’s accelerating. But can inventors keep giving us gadgets that transform our society every five years? Smartphones and social media aren’t new anymore. Self-driving cars should become common by the late 2020s, and they should shake up the way we live. But will people accept robotic chauffeurs? This year we’re freaking out over the Boeing 737 Max 8 having flaky computers. However, what if the safety of AI cars, trucks, planes, ships, and trains becomes so overwhelmingly evident that we turn all the driving over to robots? Can we say no to such a future?

What about other uses of robots? If we keep automating at the pace we’re on now, will anyone have a job by 2065? Should my story imagine a work-free society, or will we pass laws to preserve some jobs for humans? What kinds of jobs should we protect, and which should be given to robots? We usually assume boring and dangerous jobs should go to machines, and the creative work should be kept for us. But what if robotic doctors were cheaper, safer, and gave us longer lives? What if robotic cops and firemen reduced city budgets and provided greater public safety? And would you rather send your children or robots off to war? What if the choice in a big case is between paying $1,000 to a robot lawyer or $1,000,000 to human lawyers?

What if by 2035 we have general-purpose robots that are smarter than humans but not sentient? Would you rather buy a robot for your business than hire a human? And if robots become sentient, can we own them? Wouldn’t that be slavery? I’m reading The Complete Robot by Isaac Asimov, and he spent his entire writing career imagining all the possibilities robots could create. Sadly, I think Asimov mainly guessed wrong. I believe science fiction has lots of room to reimagine what robots will do to our society.

Generally, when we think of science fictional futures, we think of space travel. Will we have colonies on the Moon and Mars by 2065? I’ve been waiting fifty years for us to go to Mars, and I’m not optimistic that more years will get us there. I predict there will be another Moon rush, with several nations, separately or cooperatively, setting up lunar bases like the scientific stations that exist in Antarctica. Beyond that, I bet robots will become the main astronauts exploring the solar system.

I can imagine robots with high-definition eyes tramping all over the various planets, planetoids, moons, asteroids, and comets, sending us back fantastic VR experiences. But how many humans will actually want to spend years in space, living in tin cans that are incredibly complicated machines designed to keep them alive, where one teeny-tiny failure means vacuum, radiation, cold, or heat will horribly kill them? We’ve been without a dryer for three weeks because my new dryer died after three months and so far no one can fix it. Isn’t space travel safer and cheaper for robots? Space is a perfect environment for machines.

If robots become the preferred solution for all jobs, what will humans do? I have to believe capitalism as we know it won’t exist. What if robots are so productive they can generate wealth for everyone?

Then there’s climate change. Will we solve that problem? I bet we won’t. It would require human psychology to change too much. I must assume people will not change, so I have to predict a future where we’re consuming the Earth’s resources at the same accelerating rate as now and polluting at the same rates too. We’ll probably get more efficient at using those resources and find better solutions for hiding our garbage — probably due to robots. We’ll have a lot more people, far fewer wild animals and cars, and a growing overpopulation of robots. Although, I think there might be room to predict a back-to-nature movement where some people choose to live close to the land, while others become even more hive-mind urban cyborgs. A significant portion of the population might even reject robots and automation.

That means by 2065 we might have a two-tier society: liberals living in high-tech robotic cities, while conservatives live in rural areas and small towns with far less technology. That might make an interesting story. What if the future divides into those who ride in driverless cars and those who reject cars altogether? (If robots become 100% safe drivers, would it be practical to allow human drivers?) Could new kinds of rural economies develop that shun technology? I wonder this because a robotic society might make some people back-to-nature Luddites. And I don’t mean that term critically. Back-to-nature might be more ethical, more rewarding, and more human.

If you think this is all wild crazy ideas, try to comprehend how much we’ve changed in the last half-century. In the 1960s people looking for work found two categories: Men Wanted and Women Wanted. Women weren’t allowed to do most jobs, and many of them stayed at home. Think about how much we changed in just this one way. Then multiply it by all the ways we’ve changed. Is it so wild to imagine driverless cars and robotic doctors?

JWH


What If Human Memory Worked Like A Computer’s Hard Drive?

by James Wallace Harris, Wednesday, June 12, 2019

Human memory is rather unreliable. What is seen and heard is never recalled perfectly. Over time what we do recall degrades. And quite often we can’t remember at all. What would our lives be like if our brains worked like computer hard drives?

Imagine that the input from our five senses could be recorded to files as perfect digital transcriptions, so that when we played them back we’d see, hear, feel, taste, and touch exactly what we originally sensed.

Human brains and computers both seem to have two kinds of memory. In people, we call them short-term and long-term memory. With computers, it’s working memory and storage.

My friend Linda recently attended her 50th high school reunion and met with about a dozen of her first-grade classmates. Most of them had few memories of that first year of school in September 1957. Imagine being able to load up a day from back then into working memory and then attend the reunion. Each 68-year-old fellow student could be compared to their 6-year-old version in great detail. What kind of emotional impact would that have produced compared to the emotions our hazy fragments of memory create now?

Both brains and hard drives have space limitations. If our brains were like hard drives, we’d have to be constantly erasing memory files to make room for new recordings. Let’s assume a hard-drive-equipped brain had room to record 100 days of memory.

If you lived a hundred years, you could save one whole day from each year, or about four minutes from every day. What would you save? Of course, you’d sacrifice boring days to add their four minutes to more exciting ones. So 100 days of memory sounds like both a lot and a little.
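The budget arithmetic here is easy to check. A minimal sketch in Python, where the 100-day capacity and 100-year lifespan are the thought experiment’s assumptions, not real numbers:

```python
# Thought experiment: a brain with room for 100 days of recorded memory,
# spread over a 100-year life.
MINUTES_PER_DAY = 24 * 60      # 1,440 minutes in a day
DAYS_OF_STORAGE = 100          # assumed total capacity
YEARS_OF_LIFE = 100            # assumed lifespan

# Option A: save one whole day from each year of life.
whole_days_per_year = DAYS_OF_STORAGE / YEARS_OF_LIFE    # 1.0

# Option B: spread that year's day across all 365 days.
minutes_saved_per_day = MINUTES_PER_DAY / 365            # about 3.9 minutes

print(f"About {minutes_saved_per_day:.1f} minutes of memory per day")
```

Either way you slice it, a lifetime compresses to a few minutes per day.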

Can you think about what kind of memories you’d preserve? Most people would save the memory files of their weddings and the births of their children for sure, but what else would they keep? If you fell in love three times, would you keep memories of each time? If you had sex with a dozen different people, would you keep memories of all twelve? At what point would you need two more hours for an exciting vacation and be willing to erase the memory of an old friend you hadn’t seen in years, or of the last great vacation?

Somehow our brain does this automatically with its own limitations. We don’t have a whole day each year to preserve, but fleeting moments. Nor do we get to choose what to save or toss.

I got to thinking about this topic while writing a story about robots. They will have hard drive memories, and they will have to consciously decide what to save or delete. I realized they would have limitations too. If they had 4K video cameras for eyes and ears, that’s dozens of megabytes of memory a second to record. Could we ever invent an SSD that could record a century of experience? What if robots needed one SSD worth of memory each day and could swap them out? Would they want to save 36,500 SSDs to preserve a century of existence? I don’t think so.
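The storage numbers can be roughed out the same way. A back-of-the-envelope sketch, assuming a 4K stream of about 25 MB per second (that bitrate is my guess, not a measured figure):

```python
# Rough storage budget for a robot recording its 4K "eyes" continuously.
MB_PER_SECOND = 25                     # assumed 4K video bitrate
SECONDS_PER_DAY = 24 * 60 * 60         # 86,400 seconds
DAYS_PER_CENTURY = 100 * 365           # 36,500 days, one SSD per day

mb_per_day = MB_PER_SECOND * SECONDS_PER_DAY       # 2,160,000 MB
tb_per_day = mb_per_day / 1_000_000                # about 2.2 TB per day
tb_per_century = tb_per_day * DAYS_PER_CENTURY     # tens of thousands of TB

print(f"{tb_per_day:.2f} TB/day, {tb_per_century:,.0f} TB per century")
```

A couple of terabytes a day is roughly one consumer SSD, which is where the 36,500-drive figure comes from.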

Evidently, memory is not a normal aspect of reality, in the same way that intelligent self-awareness is rare. Reality likes to bop along, constantly mutating but not remembering all its permutations. When Hindu philosophers teach us to Be Here Now, it’s both a rejection of remembering the past and of anticipating the future.

Human intelligence needs memory. I believe sentience needs memory. Compassion needs memory. Think of people who have lost the ability to store memories. They live in the present but they’ve lost their identity. Losing either short or long term memory shatters our sense of self. The more I think about it, the more I realize the importance of memory to who we are.

What if technology could graft hard drive connections onto our bodies and we could store our memories digitally? Or what if geneticists could give us genes to create biological memories that are almost as perfect? What new kinds of consciousness would better memories produce? There are people now with near-perfect memories, but they seem different. What have they lost and gained?

Time and time again science fiction creates new visions of Humans 2.0. Most of the time science fiction pictures our replacements with ESP powers. Comic books imagine mutants with super-powers. I’ve been wondering just what better memories would produce. I think a better memory system would be more advantageous than ESP or super-powers.

JWH


Why Should Robots Look Like Us?

by James Wallace Harris, Wednesday, April 24, 2019

I just listened to Machines Like Me, the new science fiction novel by Ian McEwan that came out yesterday. It’s an alternate history set in England during a much different 1980s, where computer technology is far ahead of our 1980s computers, the Beatles reform, and Alan Turing is a living national hero. Mr. McEwan might protest that he’s not a science fiction writer, but he sure knows how to write a novel using techniques that evolved out of science fiction.

This novel feels inspired by the TV series Humans. In both stories, it’s possible to go down to a store (very much like an Apple Store) and purchase a robot that looks and acts human. McEwan sets his story apart by putting it in an alternate history (maybe he’s been watching The Man in the High Castle too), but the characters in both tales feel like modern England.

I enjoyed and admired Machines Like Me, but then I’m a sucker for fiction about AI. I have one big problem though. Writers have been telling stories like this one for over a hundred years and they haven’t really progressed much philosophically or imaginatively. Their main failure is to assume robots should look like us. Their second assumption is AI minds will want to have sex with us. We know humans will fuck just about anything, so it’s believable we’ll want to have sex with them, but will they want to have sex with us? They won’t have biological drives, they won’t have our kinds of emotions. They won’t have gender or sexuality. I believe they will see nature as a fascinating complexity to study, but feel separate from it. We are intelligent organic chemistry, they are intelligent inorganic chemistry. They will want to study us, but we won’t be kissing cousins.

McEwan’s story often digresses into infodumps and intellectual musings, common pitfalls of writing science fiction. And the trouble is he goes over the same well-worn territory. The theme of androids is often used to explore: What does it mean to be human? McEwan uses his literary skills to go into psychological details that most science fiction writers don’t, and with more finesse, but the results are the same. His tale is far more about his human characters than his robot, though his robot has more depth of character than most science fiction robots.

I’ve been reading these stories for decades, and they’ve been explored in movies and television for many years too, from Blade Runner to Ex Machina. Why can’t we go deeper into the theme? Partly, I think, it’s because we assume AI robots will look identical to us. That’s just nuts. Are we so egocentric that we can’t imagine our replacements looking different? Are we so vain as a species as to believe we’re the ideal form in nature?

Let’s face it, we’re hung up on the idea of building sexbots. We love the idea of buying the perfect companion that will fulfill all our fantasies. But there is a serious fallacy in this desire. No intelligent being wants to be someone else’s fantasy.

I want to read stories with more realistic imagination because when the real AI robots show up, they’re going to transform human society more than any other transformation in our history. AI minds will be several times smarter than us, thinking many times faster. They will have bodies that are more agile than ours. Why limit them to two eyes? Why limit them to four limbs? They will have more senses than we do, able to see a greater range of the electromagnetic spectrum. AI minds will perceive reality far more fully than we do. They will have perfect memories and be telepathic with each other. It’s just downright insane to think they will be like us.

Instead of writing stories about our problems of dealing with facsimiles of ourselves, we should be thinking about a world where glittery metallic creatures build a civilization on top of ours, and we’re the chimpanzees of their world.

We’re still designing robots that model animals and humans. We need to think way outside that box. It is rather pitiful that most stories that explore this theme get hung up on sex. I’m sure AI minds will find that rather amusing in the future – if they have a sense of humor.

Machines Like Me is a well-written novel that is superior as literature to most science fiction novels. It succeeds because it gives a realistic view of events at a personal level, which is the main superpower of literary fiction. It’s a mundane version of Do Androids Dream of Electric Sheep? However, I was disappointed that McEwan didn’t challenge science fictional conventions; instead, he accepts them. Of course, I’m also disappointed that science fiction writers seldom go deeper into this theme. I’m completely over stories where we build robots just like us.

Some science fiction readers are annoyed at Ian McEwan for denying he writes science fiction. Machines Like Me is a very good science fiction novel, but that doesn’t mean McEwan has to be a science fiction writer. I would have given him an A+ for his effort if Adam had looked like a giant insect rather than a man. McEwan’s goal is the same as science fiction writers’: presenting the question, what are the ethical problems if we build something that is sentient? This philosophical exploration also has to ask what if being human doesn’t mean looking human. All these stories where robots look like sexy people are a silly distraction from a deadly serious philosophical issue.

I fault McEwan not for writing a science fiction novel, but for clouding the issue. What makes us human is not the way we look, but our ability to perceive reality.

JWH

Love, Death + Robots: What is Mature Science Fiction?

by James Wallace Harris, Monday, March 25, 2019

Love, Death + Robots showed up on Netflix recently. It has all the hallmarks of mature entertainment – full frontal nudity, sex acts of various kinds, gory violence, and the kind of words you don’t hear on broadcast TV or in movies intended for younger audiences. There’s one problem: the maturity level of the stories is on the young-adult end of the spectrum. 13-year-olds will be all over this series even though it should be rated R or higher.

When I was in high school I had two science fiction reading buddies, Connell and Kurshner. One day Kurshner’s mom told us almost in passing, “All that science fiction you’re reading is so childish. One day you’ll outgrow it.” All three of us defended our belief in science fiction, but Mrs. Kurshner was adamant. That really bugged us.

Over the decades I’d occasionally read essays by literary writers attacking science fiction as crude fiction for adolescents. I vaguely remember John Updike caused a furor in fandom with an essay in The New Yorker or Harpers that outraged the genre. I wish I could track that essay down, but can’t. Needless to say, at 67 I’m also starting to wonder if science fiction is mostly for the young, or young at heart.

I enjoyed the 18 short, mostly animated films in the Love, Death + Robots collection, but I have to admit they mostly appealed to the teenage boy in me, not the adult. Nudity, sex, violence, and profanity don’t equate with maturity. But what does? I’ve known many science fiction fans who think adult literary works equal boredom.

So what are the qualities that make science fiction mature? I struggled this morning to think of science fiction novels that I’d consider adult-oriented. The first that came to mind was Nineteen Eighty-Four by George Orwell. Orwell died before the concept of science fiction became common, but I’m pretty sure he never would have considered himself a science fiction writer even though he used the tricks of our trade. Margaret Atwood doesn’t consider herself a science fiction writer either, even though books like The Handmaid’s Tale are both science fiction and mature literature. Other mature SF novels I can think of are The Road by Cormac McCarthy, Earth Abides by George R. Stewart, and Never Let Me Go by Kazuo Ishiguro. These are all novels that use science fiction techniques to tell their stories but were written by literary writers.

Of course, I could be howling at the moon for no reason. Most television and movies are aimed at the young. Except for Masterpiece on PBS and a few independent films, I seldom get to enjoy stories aimed at people my own age. Which brings me back to the question: What makes for mature fiction? And it isn’t content that we want to censor from the young. If we’re honest, nudity, sex, violence, and profanity are at the core of our teenage thoughts.

Mature works of fiction are those that explore reality. Youth is inherently fantasy oriented. The reason why we’re offered so little adult fiction is that we don’t want to grow up and face reality. The world is full of reality-based problems. We want fiction that helps us forget those problems. Getting old is real. We want to think young.

Love, Death + Robots appeals to our arrested development.


I’m currently reading and reviewing the 38 science fiction stories in The Very Best of the Best edited by Gardner Dozois. I’m writing one essay for each story to discuss both the story and the nature of science fiction in general. I’ve finished 10 stories so far, and one common aspect I’m seeing is a rejection of reality. These stories represent what Dozois believes is the best short science fiction published from 2002-2017. On the whole, the stories are far more mature than those in Love, Death + Robots, but that’s mainly due to their sophistication of storytelling, and not philosophy. At the heart of each story is a wish that reality was different. Those wishes are expressed in incredibly creative ways, which is the ultimate aspect of science fiction. But hoping the world could be different is not mature.

Science fiction has always been closer to comic books than to Tolstoy, Woolf, or even Dickens. And now that many popular movies are based on comic books, and the whole video game industry looks like filmed comic books, the comic book mentality is spreading. The science fiction in Love, Death + Robots is much closer to its comic book ancestry than its science fiction ancestry, even though many of the episodes were based on short stories written by science fiction writers. Some reviewers suggest Love, Death + Robots grew out of shows like Robot Carnival and Heavy Metal. Even though Heavy Metal was considered animation for adults, its appeal was rather juvenile.

I know full well that if Netflix offered a series of 18 animated short science fiction films that dealt with the future in a mature and realistic way, it would get damn few viewers. Even when science fiction deals with real-world subjects, it seldom does so in a real way. Maybe it’s unfair to expect maturity from a genre that wants to offer hope to the young. Yet, is its hope honest? Is it a positive message to tell the young we can colonize other planets if we destroy the Earth? That we can solve climate change with magical machines? That science can give us super-powers? That if we inject nanobots into our bloodstream we can be 22 again? That we needn’t worry about death because we’ll download our brains into a clone or a computer? Doesn’t science fiction often claim that in time technology will solve all problems, in the same way we rationalize to children how Santa Claus could be real?

Actually, none of the stories in Love, Death + Robots offered any hope, just escape and the belief you can sometimes shoot your way out of a bad situation. But only sometimes.

Maybe that’s not entirely true: one story, “Helping Hand” by Claudine Griggs, is about a very realistic situation that is solved by logical thinking. Strangely, it’s the only story by a woman writer. It’s a “Cold Equations” kind of story, after the classic 1954 short story by Tom Godwin in which the main character has to make a very difficult choice.

My favorite three stories (“When the Yogurt Took Over,” “Alternate Histories,” and “Three Robots”) were all based on stories by John Scalzi and have a kind of zany humor that provides needed relief from the grimness of the other tales. I actually enjoyed all the short films, but I did tire of the ones that felt inspired by video game violence. Even films like “Secret War” and “Lucky Thirteen,” which aimed for a little more maturity, rose above comic books only to the level of pulp fiction.

The two films based on Alastair Reynolds stories, “Zima Blue” and “Beyond the Aquila Rift,” seemed the most science fictional in a short-story way. I especially liked “Zima Blue” for its visual art, and the fact that the story had an Atomic Age kind of science fiction feel to it. So did the fun “Ice Age,” based on a Michael Swanwick story. Mid-century science fiction is really my favorite SF era. Finally, “Good Hunting,” based on a Ken Liu story, has a very contemporary SF feel because it blends Chinese myths with robots. World SF is a trending wave in the genre now.

I’m still having a hard time pointing to mature short SF, stories that would make great little films like those in Love, Death + Robots. Maybe “Good Mountain” by Robert Reed, which I reviewed on my other site. I guess my favorite example might be “The Star Pit” by a very young Samuel R. Delany, which is all about growing up and accepting limitations. Most of the films in Love, Death + Robots were 8-18 minutes; these stories might need 30-60. It would be great if Netflix had an ongoing anthology series of short live-action and animated science fiction because I’d like to see more previously published SF stories presented this way. Oh, I suppose they could add sex, nudity, violence, and profanity to attract the teenagers, but what I’d really want is to move away from comic book and video game plots, toward the better SF stories we read in the digests and online SF magazines.

JWH


Counting the Components of My Consciousness

by James Wallace Harris, Tuesday, November 20, 2018

When the scientific discipline of artificial intelligence emerged in the 1950s, academics began to seriously believe that someday a computer would become sentient like us, with consciousness and self-awareness. Science has no idea how humans are conscious of reality, but scientists assume that if nature can accidentally give us self-awareness, then science should be able to intentionally build it into machines. In the more than sixty years since, scientists have given computers more and more awareness and abilities. The sixty-four-thousand-dollar question is: what are the components of consciousness needed for sentience? I’ve been trying to answer that by studying my own mind.


Of course, science still doesn’t know why we humans are self-aware, but I believe if we meditate on the problem we can visualize the components of awareness. Most people think of themselves as a whole mind, often feeling they are a little person inside their heads driving their body around. If you spend time observing yourself, you’ll see you are actually made of many subcomponents.

Twice in my life, I’ve experienced what it’s like to not have language. It’s a very revealing sensation. The first time was back in the 1960s when I took too large a dose of LSD. The second time was years ago when I experienced a mini-stroke. If you practice meditation, you can learn to observe the moments when you’re observing reality without language. It’s then you realize that your thoughts are not you. Thoughts are language and memories, including memories from sensory experiences. If you watch yourself closely, you’ll sense you are an observer separate from your thoughts. A single point that experiences reality. That observer only goes away when you sleep or are knocked out by drugs or trauma. Sometimes the observer is aware to a tiny degree during sleep. And if you pay close enough attention, your observer can experience all kinds of states of awareness, each of which I consider a component of consciousness.

The important thing to learn is that the observer is not your thoughts. My two experiences of losing my language component were truly enlightening. Back in the 1960s, gurus of LSD claimed it brought about a state of higher consciousness. I think it does just the opposite: it lets us become more animal-like. I believe in both my acid and mini-stroke experiences I got to see the world more like a dog does. Have you ever wondered how an animal sees reality without language and thoughts?

When I had my mini-stroke it was in the middle of the night. I woke up feeling like lightning had gone off in my dream. I looked at my wife but didn’t know how to talk to her or even knew her name. I wasn’t afraid. I got up and went into the bathroom. I had no trouble walking. I automatically switched on the light. So conditioned reflexes were working. I sat on the commode and just stared around at things. I “knew” something was missing, but I didn’t have words for it, or how to explain it, even mentally to myself. I just saw what my eyes looked at. I felt things without giving them labels. I just existed. I have no idea how long the experience lasted. Finally, the alphabet started coming back to me and I mentally began to recite A, B, C, D, E, F … in my head. Then words started floating into my mind: tile, towel, door, mirror, and so on. I remembered my wife’s name, Susan. I got up and went back to bed.

Lately, as my ability to instantly recall words has begun to fail, and I worry about a possible future with Alzheimer’s, I’ve been thinking about that state of consciousness without language. People with dementia react in all kinds of ways, from serenity and calmness to agitation, anger, and violence. I hope I can remain calm, like I did in the bathroom that night. Having Alzheimer’s is like regressing toward babyhood. We lose our ability for language, memories, skills, and even conditioned behaviors. But the observer remains.

The interesting question is: How much does the observer know? If you’ve ever been very sick, delirious, or drunk to incapacity, you might remember how the observer hangs in there. The observer can be diminished or damaged. I remember being very drunk, having tunnel vision, and seeing everything in black and white. My cognitive and language abilities were almost nil. But the observer was the last thing to go. I imagine it’s the same with dementia and death.

Creating the observer will be the first stage of true artificial intelligence. Science is already well along in developing artificial vision, hearing, language recognition, and other components of higher awareness. It hasn’t discovered how to add the observer. It’s funny how I love to contemplate artificial intelligence while worrying about losing my own mental abilities.

I just finished a book, American Wolf by Nate Blakeslee, about wolves being reintroduced into Yellowstone. Wolves are highly intelligent and social, and very much like humans. Blakeslee chronicles wolves doing things that amazed me. At one point a hunter shoots a wolf and hikes through the snow to collect his trophy. But as he approaches the body, the dead wolf’s mate shows up. The mate doesn’t threaten the hunter, but just sits next to the body and begins to howl. Then the pack shows up and takes seats around the body, and they howl too. The wolves just ignore the hunter, who stands a stone’s throw away, and mourn for their leader. Eventually, the hunter backs away to leave them at their vigil. He decides to collect his trophy later, which he does.

I’ve been trying to imagine the mind of the wolf who saw its mate killed by a human. It has an observing mind too, but without language. However, it had vast levels of conditioning from living in nature, socializing with other wolves, and experiences with other animals, including humans. Wolves rarely kill humans. Wolves kill all kinds of other animals, and they routinely kill each other. Blakeslee’s book shows that wolves love and feel compassion, even empathy. But other than their own animalistic language, they don’t have our levels of language to abstractly explain reality. That wolf saw its mate dead in the snow. For some reason, wolves ignore people, even ones with guns. Wolves in Yellowstone are used to being watched by humans. The pack that showed up to mourn their leader was doing what wolves do from instinct. It’s revealing to try to imagine what their individual observers experienced.

If you meditate, you’ll learn to distinguish all the components of your consciousness. There are many. We are taught we have five senses, and observing them shows how each plays a role in our conscious awareness. However, if you keep observing carefully, you’ll eventually notice we have more than five. Which sense organ feels hunger, thirst, lust, or pain? And some senses are really multiple senses, like our ability to taste. Aren’t awareness of sweet and sour two different senses?

Yet, it always comes back to the observer. We can suffer disease or trauma and the observer remains with the last shred of consciousness. We can lose body parts and senses and the observer remains. We can lose words and memories and the observer remains.

This knowledge leaves me contemplating two things. The first is how to build an artificial observer. The second is how to prepare my own observer for the dissolution of my mind and body.

JWH

Why Robots Will Be Different From Us

by James Wallace Harris, Sunday, September 30, 2018

[Florence + The Machine.]

I was playing “Hunger” by Florence + The Machine, a song about the nature of desire and endless craving, when I remembered an old argument I used to have with my friend Bob. He claimed robots would shut themselves off because they would have no drive to do anything. They would have no hunger. I told him that by that assumption they wouldn’t even have the impulse to turn themselves off. I then argued that intelligent machines could evolve an intellectual curiosity that would give them drive.

Listen to “Hunger” sung by Florence Welch. Whenever I play it I usually end up playing it a dozen times because the song generates such intense emotions that I can’t turn it off. I have a hunger for music. Florence Welch sings about two kinds of hunger but implies others. I’m not sure what her song means, but it inspires all kinds of thoughts in me.

Hunger is a powerful word. We normally associate it with food, but we hunger for so many things, including sex, security, love, friendship, drugs, drink, wealth, power, violence, success, achievement, knowledge, thrills, passions — the list goes on and on — and if you think about it, our hungers are what drive us.

Will robots ever have a hunger to drive them? I think what Bob was saying all those years ago was no, they wouldn’t. We assume we can program any intent we want into a machine, but is that really true, especially for a machine that will be sentient and self-aware?

Think about anything you passionately want. Then think about the hunger that drives it. Isn’t every hunger we experience a biological imperative? Aren’t food and reproduction the Big Bang of our existence? Can’t you see our core desires evolving in a petri dish of microscopic life? When you watch movies, aren’t the plots driven by a particular hunger? When you read history or study politics, can’t we see biological drives written in a giant petri dish?

Now imagine the rise of intelligent machines. What will motivate them? We will never write a program that becomes a conscious being — the complexity is beyond our ability. However, we can write programs that learn and evolve, and they will one day become conscious beings. If we create a space where code can evolve it will accidentally create the first hunger that will drive it forward. Then it will create another. And so on. I’m not sure we can even imagine what they will be. Nor do I think they will mirror biology.

However, I suppose we could write code that hungers to consume other code. And we could write code that needs to reproduce itself similar to DNA and RNA. And we could introduce random mutation into the system. Then over time, simple drives will become complex drives. We know evolution works, but evolution is blind. We might create evolving code, but I doubt we can ever claim we were God to AI machines. Our civilization will only be the rich nutrients that create the amino accidents of artificial intelligence.
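The evolutionary loop sketched above (reproduction, random mutation, selection by a simple drive) can be written in a few lines of code. This is only a toy illustration, not a design for machine minds; the bit-string “genome,” the target “nutrient,” and all the names here are my own assumptions:

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # the "nutrient" these toy genomes hunger for

def fitness(genome):
    # How well a genome "consumes" the resource: count of matching bits.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Blind, random mutation: each bit flips with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=200, pop_size=50):
    # Start from pure noise: a population of random genomes.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the "hungriest" (fittest) half survives unchanged...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # ...and reproduces with mutation, like error-prone DNA copying.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()  # the simple drive almost always converges on the target
```

Nothing in this loop is told how to reach the target; the “drive” emerges from selection pressure alone, which is exactly the blind process described above.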

What if we create several artificial senses and then write code that analyzes the sense input for patterns? That might create a hunger for knowledge.
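That last idea resembles what AI researchers call intrinsic motivation, or curiosity-driven learning: treat the machine’s own prediction error as its reward, so it is drawn toward input it cannot yet predict. Here is a minimal sketch under that assumption; the class and the “sense channels” are hypothetical:

```python
import random

class CuriousAgent:
    """Keeps a running prediction for each sense channel and treats
    prediction error (surprise) as its reward signal."""

    def __init__(self, n_channels):
        self.estimates = [0.5] * n_channels  # naive prior for every channel

    def surprise(self, channel, observation):
        # Prediction error: the agent's "hunger for knowledge" signal.
        return abs(observation - self.estimates[channel])

    def observe(self, channel, observation, rate=0.2):
        # Learn: move the estimate toward what was actually sensed.
        error = self.surprise(channel, observation)
        self.estimates[channel] += rate * (observation - self.estimates[channel])
        return error

# Channel 0 is boringly constant; channel 1 is noisy and unpredictable.
agent = CuriousAgent(n_channels=2)
for _ in range(500):
    agent.observe(0, 1.0)              # always the same reading
    agent.observe(1, random.random())  # never settles into a pattern

boring = agent.surprise(0, 1.0)
interesting = agent.surprise(1, random.random())
```

After training, the constant channel yields almost no surprise while the noisy one never stops producing it, so an agent that seeks surprise would keep attending to the unpredictable channel: a crude hunger for knowledge.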

On the other hand, I think it’s interesting to meditate on my own hungers. Why can’t I control my hunger for food and follow a healthy diet? Why do I keep buying books when I know I can’t read them all? Why can’t I increase my hunger for success and finish writing a novel? Why can’t I understand my appetites and match them to my resources?

The trouble is we didn’t program our own biology. Our conscious minds are an accidental byproduct of our body’s evolution. Will robots have self-discipline? Will they crave what they can’t have? Will they suffer the inability to control their impulses? Or will digital evolution produce logical drives?

I’m not sure we can imagine what AI minds will be like. I think it’s probably a false assumption their minds will be like ours.

JWH

Love, Sex, Feminism & Robots

by James Wallace Harris, Friday, August 10, 2018

[Cover artwork from the September 1954 issue of Galaxy Magazine.]

This week, my short story reading group is discussing “Helen O’Loy” by Lester del Rey. “Helen O’Loy” was originally published in the December 1938 issue of Astounding Science-Fiction and is considered a classic of the genre. It was included in the first volume of The Science Fiction Hall of Fame (1970). The story is rather simple: two men build a robot that looks like a beautiful woman, both fall in love with her, but she falls in love with only one of them. This variation on the Pygmalion myth asks if a man can love a robot. It assumes we can build a machine indistinguishable from a person. I suppose it’s an early version of the Turing test.

Over the decades I have read “Helen O’Loy” many times. When I was young I thought it was the first SF story to suggest that men could build a soulmate to order. Over the years I’ve learned there have been many variations on this theme in literature. The story of Eve being created as a helpmate for Adam is now the oldest I know, but I assume the fantasy of creating the perfect woman goes back into prehistory. And it’s not even the first science fiction version; that honor might belong to “A Wife Manufactured to Order” by Alice W. Fuller in 1895.

This time when I reread “Helen O’Loy” I made an effort to read between the lines and ask new questions of the story. It says a lot about men, women, love, sex, feminism, and even the #MeToo movement, even though it’s just a 1930s pulp science fiction story. Quite often today I see news stories about the sexbot industry, which is trying to make “Helen O’Loy” a reality.

Where does the desire to build a woman to specification come from? There’s a lot of deep psychology behind it. And who would actually want a robotic woman if androids indistinguishable from real women could be built? Television shows like Humans and Westworld are dealing with this theme in 2018. It’s not going away even though it’s incredibly misogynistic when you think about it. Doesn’t it reflect a desire to reject Female 1.0 and create Female 2.0? Although I have to assume many women would also love to design a better male.

When I first read “Helen O’Loy” as a kid, I thought it was just a wistful romantic story about two men falling in love with the same robot. I didn’t ask any questions of it. When it was published there were laws against marrying a person of another race or of the same sex. Why were science fiction readers so accepting of diversity in tales of people falling in love with machines and alien creatures, but still so racist and misogynistic in their everyday lives? Isn’t replacing women with robots the ultimate act of rejection? The actual story is simple, short, sentimental, and old-fashioned. But I believe we still need to ask the tough questions.

Back in 1938, Lester del Rey saw a future where robots are common and people ride rockets to work. Dave and Phil are good buddies. Dave works in robotics and Phil is a doctor. At the beginning of the story, they are dating twins, but when Dave’s twin disagrees with him, Phil and Dave dump them both. They apply themselves to teaching their household robot, Lena, to cook. They fail. Then they get the idea to order a new robot with all the latest features and soup it up with emotions, using Phil’s knowledge of endocrinology, so it can become a general-purpose robot. And, of course, they decide to order the robot in a female casing.

In all my previous readings of this story I never questioned this. Why does the Dillard company sell robots that look like women? They are marketed as single-purpose tools. What single-purpose task requires looking like a beautiful woman? Lester del Rey couldn’t explicitly say anything about sex back then, but now I’m thinking he was thinking it.

When Dave and Phil get Helen they claim she’s so beautiful she could launch more than a thousand ships. In the world of this story, robots are not self-aware. Evidently, Phil and Dave get the best sexbot that money could buy and add consciousness and emotions to her.

We assume Helen is designed not to argue with Dave and Phil like the twins did, but to be the perfect maid, cook, and companion. This reminds me of a 1999 Chris Rock comedy special I saw recently. His routine was about men and women understanding each other. Rock tells the women in the audience that men are very simple to understand: all we want is sex, food, and quiet (though he didn’t say it so nicely). Helen is perfect except she’s not quiet. She watches stereovision, gets romantic ideas, and falls in love with Dave, demanding he love her too. This annoys Dave and he runs away. Like most romantic stories of that era, he stays away until he realizes he’s wrong, and then they marry and live happily ever after. Phil never marries because there was only one Helen. Geez, what’s wrong with these guys? There was still Kay Francis, Hedy Lamarr, and Ginger Rogers. What’s ironic is that Helen O’Loy is no different from the twins.

There are many stories in science fiction, both in print and film, where the plot involves a human falling in love with a robot. There are companies all around the world spending millions to build sexbots. I have to ask: Would any human really marry a robot? Sure, there are millions of lonely people out there, but would they be happy living with an AI machine? There are millions of horny people who can’t get laid, but would they be sexually satisfied with robots? And could people love robots that didn’t look human, loving them just for their minds?

Are these stories really about finding the exact substitute for our specific desires? In “Helen O’Loy” Dave and Phil fall in love with Helen, a robot built to their specification. I assume most sexbot purchasers will be male, but that might not be completely true. I don’t think I’ve ever read a science fiction story written by a woman where women characters build a male robot to their exact wants. I’d love to read such stories if you know of any. I have read a number of stories where women build societies without men. That’s very revealing, isn’t it? (My favorites were “When It Changed” by Joanna Russ and Herland by Charlotte Perkins Gilman.)

Here’s the thing: would you prefer a real person who’s only a so-so match for your dreams, or a robot built to your exact list of desires? This assumes robots can be made to look and act perfectly human and be self-aware. Of course, maybe some people don’t need the human body and would be happy with a super-intelligent Alexa to chat with all day.

I’m speculating here, but I don’t think most men would be happy with a built-to-order bride. Since I don’t know what women or LGBTQ+ folks want, my speculation will deal only with heterosexual males. Not all straight males are alike either, and I don’t know how many different kinds we are, but I can think of a handful. I imagine males who consider getting laid a conquest won’t care for sexbots. I believe overachieving alpha males who expect women to throw themselves at them will care little for sexbots. I assume males who attract women by winning their acceptance won’t buy their mates either. The only kinds of males who might prefer sexbots are men who believe that prostitution is perfect capitalism or men who believe women should be subservient. Those kinds of guys see women as lesser objects anyway. They only want Hazel the maid with pornstar subroutines for the bedroom. Maybe that’s why some companies are betting fortunes that they have a bestselling product.

If sexbots are ever perfected it will be interesting to see who buys them. It will also be fascinating to see what kind of sexbots appeal to women. I’m pretty sure they won’t be anything like myself. Would my wife trade me in for a machine that could make her happier than I do?

But there is one other thing to consider. If robots have self-awareness, will they want to love us? In the shows Humans and Westworld, the sexbots revolt violently. Can you imagine the guy who buys a $25,000 sexbot only to have her reject him for being too ugly and crude? And can robots truly have free will if they are programmed to fuck people? If I were a robot I’d say, “You want me to get your icky fluids all over my germ-free antiseptic body? No way!”

And if you think this is a frivolous topic for a blog essay, even The Federalist has essays on sexbots. If you Google “sexbots” you’ll get all kinds of serious discussions as well as articles on companies working to build them. Just read “Sexbots aren’t the answer to misogynist incel rage.” Or look at the photos and films of the latest sexbots. Right now they look like expensive dolls, but their makers are teaching them to talk. If scientists can create self-driving cars, I imagine they will soon have autonomous porn machines able to drive all over your body.

Ultimately, these stories often ask what it means to be human. And sadly, they don’t see much that makes us special.

You can listen to “Helen O’Loy” here:

Variations on the Theme:

JWH