What If Human Memory Worked Like A Computer’s Hard Drive?

by James Wallace Harris, Wednesday, June 12, 2019

Human memory is rather unreliable. What is seen and heard is never recalled perfectly. Over time what we do recall degrades. And quite often we can’t remember at all. What would our lives be like if our brains worked like computer hard drives?

Imagine that the input from our five senses could be recorded to files that are perfect digital transcriptions, so that when we play them back we’d see, hear, smell, taste, and feel exactly what we originally sensed.

Human brains and computers both seem to have two kinds of memory. In people, we call it short-term and long-term memory. With computers, it’s working memory and storage.

My friend Linda recently attended her 50th high school reunion and met with about a dozen of her first-grade classmates. Most of them had few memories of that first year of school in September 1957. Imagine being able to load up a day from back then into working memory and then attend the reunion. Each 68-year-old fellow student could be compared to their 6-year-old version in great detail. What kind of emotional impact would that have produced compared to the emotions our hazy fragments of memory create now?

Both brains and hard drives have space limitations. If our brains were like hard drives, we’d have to be constantly erasing memory files to make room for new memory recordings. Let’s assume a hard-drive-equipped brain had room to record 100 days of memory.

If you lived a hundred years, you could save one whole day from each year, or about four minutes from every day of each year. What would you save? Of course, you’d sacrifice boring days to add their four minutes to more exciting days. So 100 days of memory sounds like both a lot and a little.
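The trade-off above is simple enough to check with a few lines of Python (the 100-day budget and 100-year lifespan are the made-up numbers of this thought experiment):

```python
# Hypothetical memory budget: 100 recordable days over a 100-year life.
SECONDS_PER_DAY = 24 * 60 * 60

budget_days = 100
lifespan_years = 100

# Option 1: save one whole day per year.
days_per_year = budget_days / lifespan_years  # 1.0

# Option 2: spread each year's day evenly across its ~365 days.
minutes_per_day = (SECONDS_PER_DAY / 60) / 365  # ~3.9 minutes

print(f"{days_per_year:.0f} whole day per year, "
      f"or about {minutes_per_day:.1f} minutes from every day")
```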

Can you think about what kind of memories you’d preserve? Most people would save the memory files of their weddings and the births of their children for sure, but what else would they keep? If you fell in love three times, would you keep memories of each time? If you had sex with a dozen different people, would you keep memories of all twelve? At what point would you need two hours for an exciting vacation and be willing to erase the memory of an old friend you hadn’t seen in years, or of an earlier great vacation?

Somehow our brain does this automatically with its own limitations. We don’t have a whole day each year to preserve, but fleeting moments. Nor do we get to choose what to save or toss.

I got to thinking about this topic when writing a story about robots. They will have hard drive memories, and they will have to consciously decide what to save or delete. I realized they would have limitations too. If they had 4K video cameras for eyes and ears, that’s dozens of megabytes of memory a second to record. Could we ever invent an SSD drive that could record a century of experience? What if robots needed one SSD worth of memory each day and could swap them out? Would they want to save 36,500 SSD drives to preserve a century of existence? I don’t think so.
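Those numbers are easy to sanity-check. Assuming roughly 25 MB/s of sensory recording (one reading of “dozens of megabytes a second”), the totals come out like this:

```python
# Back-of-envelope robot memory requirements, assuming ~25 MB/s
# of continuous sensory recording.
MB_PER_SECOND = 25
SECONDS_PER_DAY = 24 * 60 * 60
DAYS_PER_CENTURY = 365 * 100  # 36,500

tb_per_day = MB_PER_SECOND * SECONDS_PER_DAY / 1_000_000  # ~2.2 TB
pb_per_century = tb_per_day * DAYS_PER_CENTURY / 1_000    # ~79 PB

print(f"~{tb_per_day:.1f} TB per day, ~{pb_per_century:.0f} PB per century")
```

So one SSD per day is roughly right for today’s multi-terabyte drives, and a century of existence lands in petabyte territory.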

Evidently, memory is not a normal aspect of reality, in the same way that intelligent self-awareness is rare. Reality likes to bop along constantly mutating but not remembering all its permutations. When Hindu philosophers teach us to Be Here Now, it’s a rejection of both remembering the past and anticipating the future.

Human intelligence needs memory. I believe sentience needs memory. Compassion needs memory. Think of people who have lost the ability to store memories. They live in the present, but they’ve lost their identity. Losing either short-term or long-term memory shatters our sense of self. The more I think about it, the more I realize the importance of memory to who we are.

What if technology could graft hard drive connections onto our bodies and we could store our memories digitally? Or what if geneticists could give us genes to create biological memories that are almost as perfect? What new kinds of consciousness would having better memories produce? There are people now with near-perfect memories, but they seem different. What have they lost and gained?

Time and time again science fiction creates new visions of Humans 2.0. Most of the time science fiction pictures our replacements with ESP powers. Comic books imagine mutants with super-powers. I’ve been wondering just what better memories would produce. I think a better memory system would be more advantageous than ESP or super-powers.

JWH


If I Was A Robot Would I Still Love to Read?

by James Wallace Harris, Wednesday, May 8, 2019

One of the trendy themes of science fiction is the idea of mind uploading. Many people believe it will one day be possible to record the contents of our brains and put our selves into a computer, artificial reality, robot, clone, or artificial being. Supposedly, that solves the pesky problem of dying and gives humans a shot at immortality. The odds of this working are about the same as dying and going to heaven, but it’s still a fun science fictional concept to contemplate.

I can think of many pluses to being a robot, especially now that I’m 67 and my body is wearing out into wimpiness. It would be wonderful not to worry about eating. Eating used to be a pleasure; now it’s a fickle roulette wheel of not knowing if I’m going to win or lose with each meal. And not having to pee or shit would be a top selling point of being a silicon being. And what a blessed relief it would be to never be tormented by horniness again.

Life would be simple, just make sure I always had electricity to charge up and spare parts for the components that break down. No worries about coronaries, cancers, viruses, fungus, bacteria, or degenerative diseases. Or flatulence.

I’d also expect to have superlative sight, hearing, taste, touch, and smell, along with a host of new senses. And I assume those senior moments would be gone forever.

But would I still love to do the things I love to do now – read books, watch television, and listen to music? What would reading be like if I was a robot? If I sucked down a book as fast as I can copy a file on my computer, I doubt reading would be much fun. For reading to be enchanting, I’d have to contemplate the words slowly. How would a robot perceive fiction? Are we even sure how humans experience the process of taking words from a book and putting them into our head?

Let’s say it takes me one minute to read a page of fiction. Somehow my mind is building a story while my eyes track the words. A novel takes hours to unfold. A robot could read a digital book in less than a second. Even for a robot brain, is that enough time to enjoy the story?
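For a rough sense of scale, here is the human-versus-robot gap in Python, using an assumed 300-page novel and a made-up half-second robot reading time:

```python
# Illustrative comparison of human and robot reading speeds.
pages = 300
human_minutes = pages * 1.0          # one minute per page
human_seconds = human_minutes * 60   # 18,000 seconds, or 5 hours

robot_seconds = 0.5                  # "less than a second" for the whole book

speedup = human_seconds / robot_seconds
print(f"Human: {human_minutes / 60:.0f} hours; robot: {robot_seconds} s "
      f"({speedup:,.0f}x faster)")
```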

Will robots have a sense of time different from ours? Dennis E. Taylor wrote a trilogy about the Bobiverse, where Bob’s mind is downloaded into a computer. In it, Taylor deals with the problem of how robots perceive time. He had some interesting ideas, but not conclusive ones.

In the WWW Trilogy, Robert J. Sawyer theorizes that consciousness needs a single focus for sentience. No multitasking self-awareness. I think that makes sense. If this is true, robot minds should have a sense of now. They say hummingbirds move so fast that humans appear like statues to them. Would humans appear like the slowest of sloths to robots? Does slow perception of reality allow us to turn fiction into virtual reality in our heads?

Could robots watch movies and listen to music in real time? Or would images of reality shown at 24fps feel like a series of photos spaced out over eons of robot time? Would the beat of Bonnie Raitt’s “Give It Up or Let It Go” create a sense of music in a robot’s circuitry, or just a series of periodic thuds?

It’s my guess that who we are – our personality, our sentient sense of reality, our soul – comes from our entire body, and not just data in our heads. Just remember all the recent articles about how the bacteria in our guts affect our state of being. Just remember how positive you feel about life when you have a hangover and are about to throw up.

I’ll never get to be a reading robot. That’s a shame. Wouldn’t it be great to read a thousand books a day? Maybe I could have finally read everything.

JWH

Why Should Robots Look Like Us?

by James Wallace Harris, Wednesday, April 24, 2019

I just listened to Machines Like Me, the new science fiction novel by Ian McEwan that came out yesterday. It’s an alternate history set in England during a much different 1980s, with computer technology far ahead of our 1980s computers, an alternate timeline where the Beatles reform and Alan Turing is a living national hero. Mr. McEwan might protest that he’s not a science fiction writer, but he sure knows how to write a novel using techniques that evolved out of science fiction.

This novel feels inspired by the TV series Humans. In both stories, it’s possible to go down to a store (very much like an Apple Store) and purchase a robot that looks and acts human. McEwan sets his story apart by putting it in an alternate history (maybe he’s been watching The Man in the High Castle too), but the characters in both tales feel like modern England.

I enjoyed and admired Machines Like Me, but then I’m a sucker for fiction about AI. I have one big problem though. Writers have been telling stories like this one for over a hundred years and they haven’t really progressed much philosophically or imaginatively. Their main failure is to assume robots should look like us. Their second assumption is AI minds will want to have sex with us. We know humans will fuck just about anything, so it’s believable we’ll want to have sex with them, but will they want to have sex with us? They won’t have biological drives, they won’t have our kinds of emotions. They won’t have gender or sexuality. I believe they will see nature as a fascinating complexity to study, but feel separate from it. We are intelligent organic chemistry, they are intelligent inorganic chemistry. They will want to study us, but we won’t be kissing cousins.

McEwan’s story often digresses into infodumps and intellectual musings, which are common pitfalls of writing science fiction. And the trouble is he goes over well-worn territory. The theme of androids is often used to explore one question: What does it mean to be human? McEwan uses his literary skills to go into psychological details that most science fiction writers don’t, but the results are the same. His tale is far more about his human characters than his robot, though his robot has more depth of character than most science fiction robots, and he handles all this with more finesse than most science fiction writers.

I’ve been reading these stories for decades, and they’ve been explored in the movies and television for many years too, from Blade Runner to Ex Machina. Why can’t we go deeper into the theme? Partly I think it’s because we assume AI robots will look identical to us. That’s just nuts. Are we so egocentric that we can’t imagine our replacements looking different? Are we so vain as a species as to believe we’re the ideal form in nature?

Let’s face it, we’re hung up on the idea of building sexbots. We love the idea of buying the perfect companion that will fulfill all our fantasies. But there is a serious fallacy in this desire. No intelligent being wants to be someone else’s fantasy.

I want to read stories with more realistic imagination, because when real AI robots show up, it’s going to transform human society more than any other transformation in our history. AI minds will be several times smarter than us, thinking many times faster. They will have bodies that are more agile than ours. Why limit them to two eyes? Why limit them to four limbs? They will have more senses than we do, including ones that can see a greater range of the electromagnetic spectrum. AI minds will perceive reality far more fully than we do. They will have perfect memories and be telepathic with each other. It’s just downright insane to think they will be like us.

Instead of writing stories about our problems of dealing with facsimiles of ourselves, we should be thinking about a world where glittery metallic creatures build a civilization on top of ours, and we’re the chimpanzees of their world.

We’re still designing robots that model animals and humans. We need to think way outside that box. It is rather pitiful that most stories that explore this theme get hung up on sex. I’m sure AI minds will find that rather amusing in the future – if they have a sense of humor.

Machines Like Me is a well-written novel that is superior, in literary terms, to most science fiction novels. It succeeds because it gives a realistic view of events at a personal level, which is the main superpower of literary fiction. It’s a mundane version of Do Androids Dream of Electric Sheep? However, I was disappointed that McEwan didn’t challenge science fictional conventions; instead, he accepts them. Of course, I’m also disappointed that science fiction writers seldom go deeper into this theme. I’m completely over stories where we build robots just like us.

Some science fiction readers are annoyed at Ian McEwan for denying he writes science fiction. Machines Like Me is a very good science fiction novel, but that doesn’t mean McEwan has to be a science fiction writer. I would have given him an A+ for his effort if Adam had looked like a giant insect rather than a man. McEwan’s goal is the same as science fiction writers’: presenting the question, what are the ethical problems if we build something that is sentient? This philosophical exploration also has to ask: what if being human doesn’t mean looking human? All these stories where robots look like sexy people are a silly distraction from a deadly serious philosophical issue.

I fault McEwan not for writing a science fiction novel, but for clouding the issue. What makes us human is not the way we look, but our ability to perceive reality.

JWH

The Elegance of Quiet Science Fiction Films

by James Wallace Harris, Friday, March 29, 2019

Advantageous (2015) is the kind of quiet science fiction film I love. It was directed by Jennifer Phang, who co-wrote it with Jacqueline Kim, the star of the film. Advantageous is currently streaming on Netflix, and I have no memory of it ever coming to the theater (even though it has an 83% Rotten Tomatoes rating). I watched this movie with my friend Annie. She thought the show was only okay, but I loved it. But then my favorite science fiction film is Gattaca. I prefer quiet science fiction movies without chases, explosions, and dazzling special effects. Annie prefers more action.

Advantageous is set in the near future, where AIs are taking jobs from people. It’s about Gwen Koh (Jacqueline Kim), spokesperson for a rejuvenation corporation, who is being fired for looking too old. Gwen is desperate to get another job to keep paying for the expensive schooling of Jules (Samantha Kim), her daughter. In this future, the unspoken belief is that it’s better to give jobs to men, because if too many of them were unemployed it would cause civil unrest. Gwen feels Jules can only have a future if she has an elite education, and she’s willing to do anything to give her daughter that future.

I don’t want to spoil the film, but let’s just say that Advantageous explores a number of themes currently popular in written science fiction. The film is set in an unnamed city with a breathtaking skyline of ornate skyscrapers that are occasionally hit by terrorist explosions. The citizens of this future passively ignore these attacks as a powerful government deals with them without alarm. We are shown other flaws in this tomorrowland just as quietly. This is a utopian world that is beginning to reveal hairline cracks.

One requirement for enjoying quiet science fiction films is reading between subtle lines. It helps to be well-versed in written science fiction. Gwen is given a decision to make, a “Cold Equations” or “Think Like a Dinosaur” decision. If you don’t know these classic science fiction short stories, you might not appreciate the impact of her choice. The ideas in Advantageous have been explored in great detail in written science fiction. That makes me wonder if movie-only Sci-Fi fans will pick up on the finer points of this story.

Manohla Dargis over at the New York Times was less enthusiastic about the film than I was:

Ms. Phang, who wrote the script with Ms. Kim, throws a lot into her movie — ideas about maternity, identity and technologies of the female body swirl alongside nods to the French New Wave — without always connecting the pieces. Eventually, a picture emerges that at times suggests a strange if alluring mash-up of “Stella Dallas” and Michel Foucault, with a smidgen of Jean-Luc Godard’s “Alphaville” and a hint of Margaret Atwood’s “The Handmaid’s Tale.” Ms. Phang has a way with spooky moods and interiors, and as a performer, Ms. Kim makes a fine accompanist, though she’s tamped down too much. It’s a kick to see how effectively Ms. Phang has created the future on a shoestring even if she hasn’t yet figured out how to turn all her smart ideas into a fully realized feature.

I thought Advantageous was fully realized. It set up its science fictional speculations and then dealt with them in a satisfying way. It just didn’t cover everything explicitly; it quietly implied what we needed to know. Maybe that’s why this movie is an unknown gem. Too many filmgoers want action and obviousness. I watched the film last night, and I already want to see it again. I’m sure there are little delights I’ve missed. Quiet films are perfect for meditation; they keep unfolding with additional viewing and contemplation.

JWH


Reading Science Fiction Year-By-Year

by James Wallace Harris

Back in February, I started reading The Great SF Stories series of 25 books edited by Isaac Asimov and Martin H. Greenberg. They collect the best short stories of each year, starting with 1939 and running through 1963. I’m now on volume 8, covering 1946. I even started a discussion group hoping other people might join me. 27 people joined, but so far only a couple of people have made comments, and only George Kelley has begun to read the books in order too.

George started first with the Best SF Stories series edited by Everett F. Bleiler and T. E. Dikty, which ran 1949-1959 and which he’s since finished. He felt science fiction stories in the 1950s were better than those from the 1940s. Many older fans consider 1939 the beginning of the Golden Age of Science Fiction and the 1950s science fiction’s Silver Age, but I have to agree with George: science fiction gets progressively better each year. I’m looking forward to reaching the 1950s.

Reading science fiction year by year is very revealing. For example, in 1946 many of the stories were about how to live with the atomic bomb. During the war years, there were a handful of stories that predicted atomic bombs, atomic energy, dirty bombs, nuclear terrorism, and how atomic age technology would impact society.

Science fiction had to change after August 1945 because of the reality of atomic weapons. What’s interesting is that we remember the predictive stories like “Blowups Happen” (1940) and “Solution Unsatisfactory” (1941) by Robert Heinlein, “Nerves” (1942) by Lester del Rey, and “Deadline” (1944) by Cleve Cartmill, but we don’t remember “Loophole” by Arthur C. Clarke and “The Nightmare” by Chan Davis, both from 1946. “The Nightmare” is of particular interest to us today because it’s about monitoring the trade in radioactive elements and the construction of atomic energy plants.

Probably the most prescient story I’ve read so far is “A Logic Named Joe” (1946) by Murray Leinster. Computers were still human calculators in 1946, so Leinster calls a computer a logic. He imagines a future where there’s a logic in every home, all connected to huge databases. And he foresees that people would consult their logic for all kinds of information, from the weather to how to murder your wife. He even imagines routines to keep kids from looking up stuff they shouldn’t. Leinster imagines banking, investing, encyclopedic knowledge, and all the other stuff we do with the internet.

The story itself is about an emergent AI named Joe who begins to process the data himself and answer questions on his own that science and society have yet to answer. Imagine if Google could tell you a great way to counterfeit money, or how to invent something that would make you a billionaire. In other words, Leinster imagines disruptive technology. He even imagines kids searching for weird kinds of porn when the nanny-ware breaks.

If you’d like to see which science fiction stories were the most popular for each year, use this new tool we’ve set up that uses the Classics of Science Fiction data. Books come from over 65 lists of recommended SF, and short stories come from over 100 anthologies that reprinted the best science fiction from the past.

To see the most remembered short SF from any year, just set the min and max year to the same year. Check the story radio button. And change the citations from 1 (all stories) to 16 (the absolute best). Hit search. You can sort the columns by clicking on the column headings. For example, there are a total of 22 short stories remembered by our citation sources for 1946. For the Classics of Science Fiction Short Stories, we used a cutoff of 5 citations. That meant only three stories were remembered well enough from 1946 to meet our standards. However, you can set your own criteria. The most remembered story from 1946 is “Vintage Season” by Henry Kuttner and C. L. Moore, which had 10 citations. Here’s the CSFquery set to a minimum of 3 citations. You can see the citation sources by clicking on a title line.

1946 with a minimum of 3 citations

Notice “A Logic Named Joe” isn’t on the list. How can this be after I praised it so highly? Here’s a list of all the places it’s been anthologized. For some reason, it’s never made it into any of the great retrospective SF anthologies. That’s a shame.

Here’s the same query but with citations set to 1, which gives all the cited SF stories for 1946. I now have to worry that other stories with only 1-4 citations might deserve to be remembered.

1946 with a minimum of 1 citations
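For readers who like to tinker, the kind of filtering CSFquery performs can be sketched in a few lines of Python. Everything below except “Vintage Season” and its 10 citations is made-up illustrative data, not the real database:

```python
# A toy sketch of a CSFquery-style filter: keep stories within a year
# range that meet a minimum citation count, sorted by citations.
stories_1946 = [
    {"title": "Vintage Season", "year": 1946, "citations": 10},
    {"title": "Example Story A", "year": 1946, "citations": 4},
    {"title": "Example Story B", "year": 1946, "citations": 1},
]

def query(stories, min_year, max_year, min_citations):
    hits = [s for s in stories
            if min_year <= s["year"] <= max_year
            and s["citations"] >= min_citations]
    return sorted(hits, key=lambda s: s["citations"], reverse=True)

for story in query(stories_1946, 1946, 1946, 3):
    print(story["title"], story["citations"])
```

Raising `min_citations` from 1 to 3 drops the single-citation story, just as tightening the cutoff shrinks the list on the real site.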

JWH


Counting the Components of My Consciousness

by James Wallace Harris, Tuesday, November 20, 2018

When the scientific discipline of artificial intelligence emerged in the 1950s, academics began to seriously believe that someday a computer would become sentient like us, and have consciousness and self-awareness. Science has no idea how humans are conscious of reality, but scientists assume that if nature can accidentally give us self-awareness, then science should be able to intentionally build it into machines. In the more than sixty years since, scientists have given computers more and more awareness and abilities. The sixty-four thousand dollar question is: What are the components of consciousness needed for sentience? I’ve been trying to answer that by studying my own mind.

Thinking Machine illustration

Of course, science still doesn’t know why we humans are self-aware, but I believe if we meditate on the problem we can visualize the components of awareness. Most people think of themselves as a whole mind, often feeling they are a little person inside their heads driving their body around. If you spend time observing yourself, you’ll see you are actually made of many subcomponents.

Twice in my life, I’ve experienced what it’s like to not have language. It’s a very revealing sensation. The first time was back in the 1960s when I took too large a dose of LSD. The second time was years ago when I experienced a mini-stroke. If you practice meditation, you can learn to observe the moments when you’re observing reality without language. It’s then you realize that your thoughts are not you. Thoughts are language and memories, including memories from sensory experiences. If you watch yourself closely, you’ll sense you are an observer separate from your thoughts, a single point that experiences reality. That observer only goes away when you sleep or are knocked out by drugs or trauma. Sometimes the observer is aware to a tiny degree during sleep. And if you pay close enough attention, your observer can experience all kinds of states of awareness – each of which I consider a component of consciousness.

The important thing to learn is that the observer is not your thoughts. My two experiences of losing my language component were truly enlightening. Back in the 1960s, gurus of LSD claimed it brought about a state of higher consciousness. I think it does just the opposite; it lets us become more animal-like. I believe in both my acid and mini-stroke experiences I got to see the world more like a dog does. Have you ever wondered how an animal sees reality without language and thoughts?

When I had my mini-stroke it was in the middle of the night. I woke up feeling like lightning had gone off in my dream. I looked at my wife but didn’t know how to talk to her or even know her name. I wasn’t afraid. I got up and went into the bathroom. I had no trouble walking. I automatically switched on the light. So conditioned reflexes were working. I sat on the commode and just stared around at things. I “knew” something was missing, but I didn’t have words for it, or a way to explain it, even mentally to myself. I just saw what my eyes looked at. I felt things without giving them labels. I just existed. I have no idea how long the experience lasted. Finally, the alphabet started coming back to me and I mentally began to recite A, B, C, D, E, F … in my head. Then words started floating into my mind: tile, towel, door, mirror, and so on. I remembered my wife’s name, Susan. I got up and went back to bed.

Lately, as my ability to instantly recall words has begun to fail, and I worry about a possible future with Alzheimer’s, I’ve been thinking about that state of consciousness without language. People with dementia react in all kinds of ways, from serenity and calmness to agitation, anger, and violence. I hope I can remain calm like I did in the bathroom that time. Having Alzheimer’s is like regressing backward towards babyhood. We lose our ability for language, memories, skills, and even conditioned behaviors. But the observer remains.

The interesting question is: How much does the observer know? If you’ve ever been very sick, delirious, or drunk to incapacity, you might remember how the observer hangs in there. The observer can be diminished or damaged. I remember being very drunk, having tunnel vision, and seeing everything in black and white. My cognitive and language abilities were almost nil. But the observer was the last thing to go. I imagine it’s the same with dementia and death.

Creating the observer will be the first stage of true artificial intelligence. Science is already well along on developing artificial vision, hearing, language recognition, and other components of higher awareness. It hasn’t yet discovered how to add the observer. It’s funny how I love to contemplate artificial intelligence while worrying about losing my mental abilities.

I just finished a book, American Wolf by Nate Blakeslee, about wolves being reintroduced into Yellowstone. Wolves are highly intelligent and social, and very much like humans. Blakeslee chronicles wolves doing things that amazed me. At one point a hunter shoots a wolf and hikes through the snow to collect his trophy. But as he approaches the body, the dead wolf’s mate shows up. The mate doesn’t threaten the hunter, but just sits next to the body and begins to howl. Then the pack shows up and takes seats around the body, and they howl too. The wolves just ignore the hunter, who stands a stone’s throw away, and mourn for their leader. Eventually, the hunter backs away to leave them at their vigil. He decides to collect his trophy later, which he does.

I’ve been trying to imagine the mind of the wolf who saw its mate killed by a human. It has an observing mind too, but without language. However, it had vast levels of conditioning from living in nature, socializing with other wolves, and experiences with other animals, including humans. Wolves rarely kill humans. Wolves kill all kinds of other animals. They routinely kill each other. Blakeslee’s book shows that wolves love, feel compassion, and even empathy. But other than their own animalistic language, they don’t have our levels of language to abstractly explain reality. That wolf saw its mate dead in the snow. For some reason, wolves ignore people, even ones with guns. Wolves in Yellowstone are used to being watched by humans. The pack that showed up to mourn their leader was doing what wolves do from instinct. It’s revealing to try and imagine what their individual observers experienced.

If you meditate, you’ll learn to distinguish all the components of your consciousness. There are many. We are taught we have five senses. Observing them shows how each plays a role in our conscious awareness. However, if you keep observing carefully, you’ll eventually notice we have more than five senses. Which sense organ feels hunger, thirst, lust, pain, and so on? And some senses are really multiple senses, like our ability to taste. Aren’t awareness of sweet and sour two different senses?

Yet, it always comes back to the observer. We can suffer disease or trauma and the observer remains with the last shred of consciousness. We can lose body parts and senses and the observer remains. We can lose words and memories and the observer remains.

This knowledge leaves me contemplating two things. One is how to build an artificial observer. And two, how to prepare my observer for the dissolution of my own mind and body.

JWH

Why Robots Will Be Different From Us

by James Wallace Harris, Sunday, September 30, 2018

Florence v Machine

I was playing “Hunger” by Florence + The Machine, a song about the nature of desire and endless craving, when I remembered an old argument I used to have with my friend Bob. He claimed robots would shut themselves off because they would have no drive to do anything. They would have no hunger. I told him that by that assumption they wouldn’t even have the impulse to turn themselves off. I would then argue that intelligent machines could evolve intellectual curiosity that could give them drive.

Listen to “Hunger” sung by Florence Welch. Whenever I play it I usually end up playing it a dozen times because the song generates such intense emotions that I can’t turn it off. I have a hunger for music. Florence Welch sings about two kinds of hunger but implies others. I’m not sure what her song means, but it inspires all kinds of thoughts in me.

Hunger is a powerful word. We normally associate it with food, but we hunger for so many things, including sex, security, love, friendship, drugs, drink, wealth, power, violence, success, achievement, knowledge, thrills, passions — the list goes on and on — and if you think about it, our hungers are what drives us.

Will robots ever have a hunger to drive them? I think what Bob was saying all those years ago was no, they wouldn’t. We assume we can program any intent we want into a machine, but is that really true, especially for a machine that will be sentient and self-aware?

Think about anything you passionately want. Then think about the hunger that drives it. Isn’t every hunger we experience a biological imperative? Aren’t food and reproduction the Big Bang of our existence? Can’t you see our core desires evolving in a petri dish of microscopic life? When you watch movies, aren’t the plots driven by a particular hunger? When you read history or study politics, can’t we see biological drives written in a giant petri dish?

Now imagine the rise of intelligent machines. What will motivate them? We will never write a program that becomes a conscious being — the complexity is beyond our ability. However, we can write programs that learn and evolve, and they will one day become conscious beings. If we create a space where code can evolve it will accidentally create the first hunger that will drive it forward. Then it will create another. And so on. I’m not sure we can even imagine what they will be. Nor do I think they will mirror biology.

However, I suppose we could write code that hungers to consume other code. And we could write code that needs to reproduce itself, similar to DNA and RNA. And we could introduce random mutation into the system. Then over time, simple drives will become complex drives. We know evolution works, but evolution is blind. We might create evolving code, but I doubt we can ever claim we were God to AI machines. Our civilization will only be the rich nutrients that create the amino acids of artificial intelligence.
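A toy sketch of that evolve-by-mutation idea, where a creature’s “hunger” is just a fitness function and random mutation plus selection does the rest (all the numbers here are arbitrary, not a model of real artificial life):

```python
# Minimal evolution loop: each "creature" is a number, its hunger is a
# fitness function, and random mutation plus selection drives the population.
import random

random.seed(42)

def hunger(x):
    """The drive: creatures 'want' to be close to a target value."""
    return -abs(x - 100)

population = [random.uniform(0, 10) for _ in range(20)]

for generation in range(200):
    # Reproduce with random mutation.
    offspring = [x + random.gauss(0, 1) for x in population]
    # Selection: only the creatures that best satisfy the hunger survive.
    population = sorted(population + offspring, key=hunger, reverse=True)[:20]

best = max(population, key=hunger)
print(f"best creature after 200 generations: {best:.1f}")
```

Nothing in the loop tells a creature where to go; the hunger function plus blind mutation pulls the population toward the target on its own.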

What if we created several artificial senses and then wrote code that analyzes the sense input for patterns? That might create a hunger for knowledge.

On the other hand, I think it’s interesting to meditate on my own hungers. Why can’t I control my hunger for food and follow a healthy diet? Why do I keep buying books when I know I can’t read them all? Why can’t I increase my hunger for success and finish writing a novel? Why can’t I understand my appetites and match them to my resources?

The trouble is we didn’t program our own biology. Our conscious minds are an accidental byproduct of our body’s evolution. Will robots have self-discipline? Will they crave what they can’t have? Will they suffer the inability to control their impulses? Or will digital evolution produce logical drives?

I’m not sure we can imagine what AI minds will be like. I think it’s probably a false assumption their minds will be like ours.

JWH