What If Human Memory Worked Like A Computer’s Hard Drive?

by James Wallace Harris, Wednesday, June 12, 2019

Human memory is rather unreliable. What is seen and heard is never recalled perfectly. Over time what we do recall degrades. And quite often we can’t remember at all. What would our lives be like if our brains worked like computer hard drives?

Imagine that the input from our five senses could be recorded to files as perfect digital transcriptions, so that when we played them back we'd see, hear, smell, taste, and touch exactly what we originally sensed.

Human brains and computers both seem to have two kinds of memory. In people, we call it short-term and long-term memory. With computers, it's working memory and storage.

My friend Linda recently attended her 50th high school reunion and met with about a dozen of her first-grade classmates. Most of them had few memories of that first year of school in September 1957. Imagine being able to load up a day from back then into working memory and then attend the reunion. Each 68-year-old fellow student could be compared to their 6-year-old version in great detail. What kind of emotional impact would that have produced compared to the emotions our hazy fragments of memory create now?

Both brains and hard drives have space limitations. If our brains were like hard drives, we'd have to be constantly erasing memory files to make room for new memory recordings. Let's assume a hard-drive-equipped brain had room to record 100 days of memory.

If you lived a hundred years, you could save one whole day from each year, or about four minutes from every day. What would you save? Of course, you'd sacrifice boring days to add their four minutes to more exciting days. So 100 days of memory sounds like both a lot and a little.
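To put numbers on that trade-off, here's a back-of-envelope sketch in Python. The 100-day capacity and 100-year lifespan are just the assumptions above, not real figures:

```python
# Back-of-envelope arithmetic for the 100-day memory budget.
CAPACITY_DAYS = 100      # assumed lifetime recording budget
LIFESPAN_YEARS = 100     # assumed lifespan

days_per_year = CAPACITY_DAYS / LIFESPAN_YEARS      # one whole day per year
minutes_per_day = days_per_year * 24 * 60 / 365     # that day spread over a year

print(f"{minutes_per_day:.1f} minutes of memory per day")   # -> 3.9 minutes
```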

Can you think about what kind of memories you'd preserve? Most people would save the memory files of their weddings and the births of their children for sure, but what else would they keep? If you fell in love three times, would you keep memories of each time? If you had sex with a dozen different people, would you keep memories of all twelve? At what point would you need two hours for an exciting vacation and be willing to erase the memory of an old friend you hadn't seen in years? Or of your last great vacation?

Somehow our brain does this automatically with its own limitations. We don’t have a whole day each year to preserve, but fleeting moments. Nor do we get to choose what to save or toss.

I got to thinking about this topic when writing a story about robots. They will have hard drive memories, and they will have to consciously decide what to save or delete. I realized they would have limitations too. If they had 4K video cameras for eyes and microphones for ears, that's dozens of megabytes of memory a second to record. Could we ever invent an SSD that could record a century of experience? What if robots needed one SSD worth of memory each day and could swap them out? Would they want to save 36,500 SSD drives to preserve a century of existence? I don't think so.
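For the curious, here's a rough version of that arithmetic. The 30 megabytes per second is only my assumption for compressed 4K video plus audio, but it shows why a century of raw experience is so daunting:

```python
# Rough storage math for a robot recording its life through 4K cameras.
MB_PER_SECOND = 30                    # assumed compressed 4K video + audio rate
SECONDS_PER_DAY = 24 * 60 * 60
DAYS_PER_CENTURY = 365 * 100          # the essay's 36,500 drives

tb_per_day = MB_PER_SECOND * SECONDS_PER_DAY / 1_000_000
pb_per_century = tb_per_day * DAYS_PER_CENTURY / 1000

print(f"{tb_per_day:.1f} TB per day")          # ~2.6 TB: roughly one SSD
print(f"{pb_per_century:.0f} PB per century")  # ~95 petabytes
```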

Evidently, memory is not a normal aspect of reality, just as intelligent self-awareness is rare. Reality likes to bop along, constantly mutating but not remembering all its permutations. When Hindu philosophers teach us to Be Here Now, it's a rejection of both remembering the past and anticipating the future.

Human intelligence needs memory. I believe sentience needs memory. Compassion needs memory. Think of people who have lost the ability to store memories. They live in the present but they've lost their identity. Losing either short- or long-term memory shatters our sense of self. The more I think about it, the more I realize the importance of memory to who we are.

What if technology could graft hard drive connections onto our bodies and we could store our memories digitally? Or what if geneticists could give us genes to create biological memories that are almost as perfect? What new kinds of consciousness would having better memories produce? There are people now with near-perfect memories, but they seem different. What have they lost and gained?

Time and time again science fiction creates new visions of Humans 2.0. Most of the time science fiction pictures our replacements with ESP powers. Comic books imagine mutants with super-powers. I’ve been wondering just what better memories would produce. I think a better memory system would be more advantageous than ESP or super-powers.

JWH

Why Should Robots Look Like Us?

by James Wallace Harris, Wednesday, April 24, 2019

I just listened to Machines Like Me, the new science fiction novel by Ian McEwan that came out yesterday. It's an alternate history set in England during a much different 1980s, with computer technology far ahead of our 1980s computers, an alternate timeline where the Beatles reform and Alan Turing is a living national hero. Mr. McEwan might protest he's not a science fiction writer, but he sure knows how to write a novel using writing techniques evolved out of science fiction.

This novel feels inspired by the TV series Humans. In both stories, it’s possible to go down to a store (very much like an Apple Store) and purchase a robot that looks and acts human. McEwan sets his story apart by putting it in an alternate history (maybe he’s been watching The Man in the High Castle too), but the characters in both tales feel like modern England.

I enjoyed and admired Machines Like Me, but then I’m a sucker for fiction about AI. I have one big problem though. Writers have been telling stories like this one for over a hundred years and they haven’t really progressed much philosophically or imaginatively. Their main failure is to assume robots should look like us. Their second assumption is AI minds will want to have sex with us. We know humans will fuck just about anything, so it’s believable we’ll want to have sex with them, but will they want to have sex with us? They won’t have biological drives, they won’t have our kinds of emotions. They won’t have gender or sexuality. I believe they will see nature as a fascinating complexity to study, but feel separate from it. We are intelligent organic chemistry, they are intelligent inorganic chemistry. They will want to study us, but we won’t be kissing cousins.

McEwan's story often digresses into infodumps and intellectual musings, which are common pitfalls of writing science fiction. And the trouble is he goes over the same well-worn territory. The theme of androids is often used to explore: What does it mean to be human? McEwan uses his literary skills to go into psychological details that most science fiction writers don't, but the results are the same. His tale is far more about his human characters than his robot, though his robot has more depth of character than most science fiction robots.

I’ve been reading these stories for decades, and they’ve been explored in the movies and television for many years too, from Blade Runner to Ex Machina. Why can’t we go deeper into the theme? Partly I think it’s because we assume AI robots will look identical to us. That’s just nuts. Are we so egocentric that we can’t imagine our replacements looking different? Are we so vain as a species as to believe we’re the ideal form in nature?

Let’s face it, we’re hung up on the idea of building sexbots. We love the idea of buying the perfect companion that will fulfill all our fantasies. But there is a serious fallacy in this desire. No intelligent being wants to be someone else’s fantasy.

I want to read stories with more realistic imagination because when the real AI robots show up, it's going to transform human society more than anything else in our history. AI minds will be several times smarter than us, thinking many times faster. They will have bodies that are more agile than ours. Why limit them to two eyes? Why limit them to four limbs? They will have more senses than we do, including eyes that see a greater range of the electromagnetic spectrum. AI minds will perceive reality far more fully than we do. They will have perfect memories and be telepathic with each other. It's just downright insane to think they will be like us.

Instead of writing stories about our problems of dealing with facsimiles of ourselves, we should be thinking about a world where glittery metallic creatures build a civilization on top of ours, and we’re the chimpanzees of their world.

We’re still designing robots that model animals and humans. We need to think way outside that box. It is rather pitiful that most stories that explore this theme get hung up on sex. I’m sure AI minds will find that rather amusing in the future – if they have a sense of humor.

Machines Like Me is a well-written novel that is superior as literature to most science fiction novels. It succeeds because it gives a realistic view of events at a personal level, which is the main superpower of literary fiction. It's a mundane version of Do Androids Dream of Electric Sheep? However, I was disappointed that McEwan didn't challenge science fictional conventions; instead, he accepts them. Of course, I'm also disappointed that science fiction writers seldom go deeper into this theme. I'm completely over stories where we build robots just like us.

Some science fiction readers are annoyed at Ian McEwan for denying he writes science fiction. Machines Like Me is a very good science fiction novel, but that doesn't mean McEwan has to be a science fiction writer. I would have given him an A+ for his effort if Adam had looked like a giant insect rather than a man. McEwan's goal is the same as the science fiction writers': to present the question, What are the ethical problems if we build something that is sentient? This philosophical exploration also has to ask: what if being human doesn't mean looking human? All these stories where robots look like sexy people are a silly distraction from a deadly serious philosophical issue.

I fault McEwan not for writing a science fiction novel, but for clouding the issue. What makes us human is not the way we look, but our ability to perceive reality.

JWH

Love, Death + Robots: What is Mature Science Fiction?

by James Wallace Harris, Monday, March 25, 2019

Love, Death + Robots showed up on Netflix recently. It has all the hallmarks of mature entertainment – full frontal nudity, sex acts of various kinds, gory violence, and the kind of words you don't hear on broadcast TV or in movies intended for younger audiences. There's one problem: the maturity level of the stories is on the young-adult end of the spectrum. 13-year-olds will be all over this series even though it should be rated R or higher.

When I was in high school I had two science fiction reading buddies, Connell and Kurshner. One day Kurshner’s mom told us almost in passing, “All that science fiction you’re reading is so childish. One day you’ll outgrow it.” All three of us defended our belief in science fiction, but Mrs. Kurshner was adamant. That really bugged us.

Over the decades I’d occasionally read essays by literary writers attacking science fiction as crude fiction for adolescents. I vaguely remember John Updike caused a furor in fandom with an essay in The New Yorker or Harpers that outraged the genre. I wish I could track that essay down, but can’t. Needless to say, at 67 I’m also starting to wonder if science fiction is mostly for the young, or young at heart.

I enjoyed the 18 short, mostly animated films in the Love, Death + Robots collection, but I have to admit they mostly appealed to the teenage boy in me, and not the adult. Nudity, sex, violence, and profanity don't equate with maturity. But what does? I've known many science fiction fans who think adult literary works equal boredom.

So what are the qualities that make science fiction mature? I struggled this morning to think of science fiction novels that I'd consider adult-oriented. The first that came to mind was Nineteen Eighty-Four by George Orwell. Orwell died before the concept of science fiction became common, and I'm pretty sure he never would have considered himself a science fiction writer even though he used the tricks of our trade. Margaret Atwood doesn't consider herself a science fiction writer either, even though books like The Handmaid's Tale are both science fiction and mature literature. Other mature SF novels I can think of are The Road by Cormac McCarthy, Earth Abides by George R. Stewart, and Never Let Me Go by Kazuo Ishiguro. These are all novels that use science fiction techniques to tell their story but were written by literary writers.

Of course, I could be howling at the moon for no reason. Most television and movies are aimed at the young. Except for Masterpiece on PBS and a few independent films, I seldom get to enjoy stories aimed at people my own age. Which brings me back to the question: What makes for mature fiction? And it isn’t content that we want to censor from the young. If we’re honest, nudity, sex, violence, and profanity are at the core of our teenage thoughts.

Mature works of fiction are those that explore reality. Youth is inherently fantasy oriented. The reason why we’re offered so little adult fiction is that we don’t want to grow up and face reality. The world is full of reality-based problems. We want fiction that helps us forget those problems. Getting old is real. We want to think young.

Love, Death + Robots appeals to our arrested development.

I’m currently reading and reviewing the 38 science fiction stories in The Very Best of the Best edited by Gardner Dozois. I’m writing one essay for each story to discuss both the story and the nature of science fiction in general. I’ve finished 10 stories so far, and one common aspect I’m seeing is a rejection of reality. These stories represent what Dozois believes is the best short science fiction published from 2002-2017. On the whole, the stories are far more mature than those in Love, Death + Robots, but that’s mainly due to their sophistication of storytelling, and not philosophy. At the heart of each story is a wish that reality was different. Those wishes are expressed in incredibly creative ways, which is the ultimate aspect of science fiction. But hoping the world could be different is not mature.

Science fiction has always been closer to comic books than to Tolstoy, Woolf, or even Dickens. And now that many popular movies are based on comic books, and the whole video game industry looks like filmed comic books, the comic book mentality is spreading. The science fiction in Love, Death + Robots is much closer to its comic book ancestry than its science fiction ancestry, even though many of the stories were based on original short stories written by science fiction writers. Some reviewers suggest Love, Death + Robots grew out of shows like Robot Carnival and Heavy Metal. Even though Heavy Metal was considered animation for adults, its appeal was rather juvenile.

I know full well that if Netflix offered a series of 18 animated short science fiction films that dealt with the future in a mature and realistic way, it would get damn few viewers. Even when science fiction deals with real-world subjects, it seldom does so in a real way. Maybe it's unfair to expect maturity from a genre that wants to offer hope to the young. Yet, is its hope honest? Is it a positive message to tell the young we can colonize other planets if we destroy the Earth? That we can solve climate change with magical machines? That science can give us super-powers? That if we inject nanobots into our bloodstream we can be 22 again? That we shouldn't worry about death because our brains will be downloaded into a clone or computer? Doesn't science fiction often claim that in time technology will solve all problems, in the same way we rationalize to children how Santa Claus could be real?

Actually, none of the stories in Love, Death + Robots offered any hope, just escape and the belief you can sometimes shoot your way out of a bad situation. But only sometimes.

Maybe that's not entirely true. One story, "Helping Hand" by Claudine Griggs, is about a very realistic situation that is solved by logical thinking. Strangely, it's the only story by a woman writer. It's a "Cold Equations" kind of story, after the classic 1954 short story by Tom Godwin where the main character has to make a very difficult choice.

My favorite three stories ("When the Yogurt Took Over," "Alternate Histories," and "Three Robots") were all based on stories by John Scalzi and have a kind of zany humor that provides needed relief from the grimness of the other tales. I actually enjoyed all the short films, but I did tire of the ones that felt inspired by video-game violence. Even films like "Secret War" and "Lucky Thirteen," which aimed for a little more maturity, rose above comic books only to the level of pulp fiction.

The two films based on Alastair Reynolds stories, "Zima Blue" and "Beyond the Aquila Rift," seemed to be the most science fictional in a short story way. I especially like "Zima Blue" for its visual art, and for the fact the story had an Atomic Age kind of science fiction feel to it. So did the fun "Ice Age," based on a Michael Swanwick story. Mid-Century science fiction is really my favorite SF era. Finally, "Good Hunting," based on a Ken Liu story, has a very contemporary SF feel because it blends Chinese myths with robots. World SF is a trending wave in the genre now.

I'm still having a hard time pointing to mature short SF, stories that would make great little films like those in Love, Death + Robots. Maybe "Good Mountain" by Robert Reed, which I reviewed on my other site. I guess my favorite example might be "The Star Pit" by a very young Samuel R. Delany, which is all about growing up and accepting limitations. Most of the films in Love, Death + Robots were 8-18 minutes; these stories might need 30-60. It would be great if Netflix had an ongoing anthology series of short live-action and animated science fiction because I'd like to see more previously published SF stories presented this way. Oh, I suppose they could add sex, nudity, violence, and profanity to attract the teenagers, but what I'd really want is to move away from the comic book and video game plots, and into the better SF stories we read in the digests and online SF magazines.

JWH

Counting the Components of My Consciousness

by James Wallace Harris, Tuesday, November 20, 2018

When the scientific discipline of artificial intelligence emerged in the 1950s, academics began to seriously believe that someday a computer would become sentient like us, having consciousness and self-awareness. Science has no idea how humans are conscious of reality, but scientists assume that if nature can accidentally give us self-awareness, then science should be able to intentionally build it into machines. In the more than sixty years since, scientists have given computers more and more awareness and abilities. The sixty-four-thousand-dollar question is: What are the components of consciousness needed for sentience? I've been trying to answer that by studying my own mind.

Of course, science still doesn't know why we humans are self-aware, but I believe if we meditate on the problem we can visualize the components of awareness. Most people think of themselves as a whole mind, often feeling they are a little person inside their heads driving their body around. If you spend time observing yourself, you'll see you are actually made up of many subcomponents.

Twice in my life, I've experienced what it's like to not have language. It's a very revealing sensation. The first time was back in the 1960s when I took too large a dose of LSD. The second time was years ago when I experienced a mini-stroke. If you practice meditation you can learn to observe the moments when you're observing reality without language. It's then you realize that your thoughts are not you. Thoughts are language and memories, including memories from sensory experiences. If you watch yourself closely, you'll sense you are an observer separate from your thoughts. A single point that experiences reality. That observer only goes away when you sleep or are knocked out by drugs or trauma. Sometimes the observer is aware to a tiny degree during sleep. And if you pay close enough attention, your observer can experience all kinds of states of awareness – each of which I consider a component of consciousness.

The important thing to learn is that the observer is not your thoughts. My two experiences of losing my language component were truly enlightening. Back in the 1960s, gurus of LSD claimed it brought about a state of higher consciousness. I think it does just the opposite: it lets us become more animal-like. I believe in both my acid and mini-stroke experiences I got to see the world more like a dog does. Have you ever wondered how an animal sees reality without language and thoughts?

When I had my mini-stroke it was in the middle of the night. I woke up feeling like lightning had gone off in my dream. I looked at my wife but didn't know how to talk to her or even know her name. I wasn't afraid. I got up and went into the bathroom. I had no trouble walking. I automatically switched on the light. So conditioned reflexes were working. I sat on the commode and just stared around at things. I "knew" something was missing, but I didn't have words for it, or how to explain it, even mentally to myself. I just saw what my eyes looked at. I felt things without giving them labels. I just existed. I have no idea how long the experience lasted. Finally, the alphabet started coming back to me and I began to recite A, B, C, D, E, F … in my head. Then words started floating into my mind: tile, towel, door, mirror, and so on. I remembered my wife's name, Susan. I got up and went back to bed.

Lately, as my ability to instantly recall words has begun to fail, and I worry about a possible future with Alzheimer's, I've been thinking about that state of consciousness without language. People with dementia react in all kinds of ways, from serenity and calmness to agitation, anger, and violence. I hope I can remain calm like I did in the bathroom that night. Having Alzheimer's is like regressing backward towards babyhood. We lose our ability for language, memories, skills, and even conditioned behaviors. But the observer remains.

The interesting question is: How much does the observer know? If you’ve ever been very sick, delirious, or drunk to incapacity, you might remember how the observer hangs in there. The observer can be diminished or damaged. I remember being very drunk, having tunnel vision, and seeing everything in black and white. My cognitive and language abilities were almost nil. But the observer was the last thing to go. I imagine it’s the same with dementia and death.

Creating the observer will be the first stage of true artificial intelligence. Science is already well along in developing artificial vision, hearing, language recognition, and other components of higher awareness. It has never discovered how to add the observer. It's funny how I love to contemplate artificial intelligence while worrying about losing my own mental abilities.

I just finished a book, American Wolf by Nate Blakeslee about wolves being reintroduced into Yellowstone. Wolves are highly intelligent and social, and very much like humans. Blakeslee chronicles wolves doing things that amazed me. At one point a hunter shoots a wolf and hikes through the snow to collect his trophy. But as he approaches the body, the dead wolf’s mate shows up. The mate doesn’t threaten the hunter, but just sits next to the body and begins to howl. Then the pack shows up and takes seats around the body, and they howl too. The wolves just ignore the hunter who stands a stone’s throw away and mourns for their leader. Eventually, the hunter backs away to leave them at their vigil. He decides to collect his trophy later, which he does.

I've been trying to imagine the mind of the wolf who saw its mate killed by a human. It has an observing mind too, but without language. However, it had vast levels of conditioning from living in nature, socializing with other wolves, and experiences with other animals, including humans. Wolves rarely kill humans. Wolves kill all kinds of other animals. They routinely kill each other. Blakeslee's book shows that wolves love, feel compassion, and even empathy. But other than their own animalistic language, they don't have our levels of language to abstractly explain reality. That wolf saw its mate dead in the snow. For some reason, wolves ignore people, even ones with guns. Wolves in Yellowstone are used to being watched by humans. The pack that showed up to mourn their leader was doing what wolves do from instinct. It's revealing to try to imagine what their individual observers experienced.

If you meditate, you'll learn to distinguish all the components of your consciousness. There are many. We are taught we have five senses. Observing them shows how each plays a role in our conscious awareness. However, if you keep observing carefully, you'll eventually notice we have more than five senses. Which sense organ feels hunger, thirst, lust, pain, and so on? And some senses are really multiple senses, like our ability to taste. Aren't sweet and sour really two different senses?

Yet, it always comes back to the observer. We can suffer disease or trauma and the observer remains with the last shred of consciousness. We can lose body parts and senses and the observer remains. We can lose words and memories and the observer remains.

This knowledge leaves me contemplating two things. One is how to build an artificial observer. And two, how to prepare my observer for the dissolution of my own mind and body.

JWH

Why Robots Will Be Different From Us

by James Wallace Harris, Sunday, September 30, 2018

I was playing "Hunger" by Florence + The Machine, a song about the nature of desire and endless craving, when I remembered an old argument I used to have with my friend Bob. He claimed robots would shut themselves off because they would have no drive to do anything. They would have no hunger. I told him that by that assumption they wouldn't even have the impulse to turn themselves off. I would then argue that intelligent machines could evolve an intellectual curiosity that would give them drive.

Listen to “Hunger” sung by Florence Welch. Whenever I play it I usually end up playing it a dozen times because the song generates such intense emotions that I can’t turn it off. I have a hunger for music. Florence Welch sings about two kinds of hunger but implies others. I’m not sure what her song means, but it inspires all kinds of thoughts in me.

Hunger is a powerful word. We normally associate it with food, but we hunger for so many things, including sex, security, love, friendship, drugs, drink, wealth, power, violence, success, achievement, knowledge, thrills, passions — the list goes on and on — and if you think about it, our hungers are what drives us.

Will robots ever have a hunger to drive them? I think what Bob was saying all those years ago was no, they wouldn't. We assume we can program any intent we want into a machine, but is that really true, especially for a machine that will be sentient and self-aware?

Think about anything you passionately want. Then think about the hunger that drives it. Isn’t every hunger we experience a biological imperative? Aren’t food and reproduction the Big Bang of our existence? Can’t you see our core desires evolving in a petri dish of microscopic life? When you watch movies, aren’t the plots driven by a particular hunger? When you read history or study politics, can’t we see biological drives written in a giant petri dish?

Now imagine the rise of intelligent machines. What will motivate them? We will never write a program that becomes a conscious being — the complexity is beyond our ability. However, we can write programs that learn and evolve, and they will one day become conscious beings. If we create a space where code can evolve, it will accidentally create the first hunger that drives it forward. Then it will create another. And so on. I'm not sure we can even imagine what those hungers will be. Nor do I think they will mirror biology.

However, I suppose we could write code that hungers to consume other code. And we could write code that needs to reproduce itself, similar to DNA and RNA. And we could introduce random mutation into the system. Then over time, simple drives would become complex drives. We know evolution works, but evolution is blind. We might create evolving code, but I doubt we could ever claim we were God to AI machines. Our civilization will only be the rich nutrients that create the amino acids of artificial intelligence.
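Here's a minimal sketch of that evolutionary loop in Python, assuming a toy world where "organisms" are bit strings and the built-in "hunger" is simply to accumulate 1-bits; every name and number here is my own illustration, nothing like a real AI design:

```python
import random

# Toy digital evolution: bit-string "organisms" replicate with random
# mutation, and selection favors a simple built-in drive (more 1-bits,
# a stand-in for "hunger"). All parameters are illustrative assumptions.

GENOME_LEN = 32
MUTATION_RATE = 0.01
POPULATION = 50
GENERATIONS = 200

def fitness(genome):
    # The "drive": organisms that consume more (more 1-bits) reproduce more.
    return sum(genome)

def mutate(genome):
    # Each bit has a small chance of flipping when copied.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POPULATION)]

for _ in range(GENERATIONS):
    # Reproduction is weighted by fitness; offspring are mutated copies.
    weights = [fitness(g) + 1 for g in population]
    population = [mutate(random.choices(population, weights=weights)[0])
                  for _ in range(POPULATION)]

best = max(population, key=fitness)
print(f"strongest drive after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```

The point of the sketch is that nobody writes the final drive into the code; the selection pressure does, blindly, which is all the paragraph above is claiming.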

What if we created several artificial senses and then wrote code that analyzed the sense input for patterns? That might create a hunger for knowledge.

On the other hand, I think it's interesting to meditate on my own hungers. Why can't I control my hunger for food and follow a healthy diet? Why do I keep buying books when I know I can't read them all? Why can't I increase my hunger for success and finish writing a novel? Why can't I understand my appetites and match them to my resources?

The trouble is we didn't program our own biology. Our conscious minds are an accidental byproduct of our body's evolution. Will robots have self-discipline? Will they crave what they can't have? Will they suffer from the inability to control their impulses? Or will digital evolution produce logical drives?

I’m not sure we can imagine what AI minds will be like. I think it’s probably a false assumption their minds will be like ours.

JWH