Thinking Outside of Our Heads

by James Wallace Harris

I believe recent developments in artificial intelligence prove that many of the creative processes we thought came from conscious actions actually come from unconscious mechanisms in our minds. What we are learning is that the computer techniques used to generate prose or images resemble unconscious processes in the human brain.

The older I get, the more I believe that most of my thinking comes from my subconscious. The more I pay attention to both dreams and my waking thoughts, the more I realize that I’m very rarely making conscious decisions.

I might think “I am going to walk across the street and visit Paul,” but I have no idea how to make my body walk anywhere. But then, I’ve always assumed muscle actions were automatic. It was mental actions I believed were conscious actions. I used to believe “I am writing this essay,” but I no longer believe that. This has led me to ask:

Just what activities do we perform with our conscious minds?

Before the advent of writing, we did all our thinking inside our heads. Homer had to memorize the Iliad to recite it. Prehistory was oral. How much of thought back then was conscious or unconscious? Have you ever read The Origins of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes? I know his theories have lots of problems, but they do grapple with what I’m thinking about.

How often have you worried over a problem, say a math problem or a programming problem, and given up, only to have the solution come to you later, usually after a nap or a night’s sleep? That’s the classic view of unconscious thinking. But even when we think we’re solving a calculus problem, is it really being done at a conscious level? Are you consciously recalling all your math lessons over a lifetime to solve the problem?

How often when working on a Wordle or Crossword does the word magically come to you? But sometimes, we are aware of the steps involved.

In recent years I’ve developed a theory that when we work with pen and paper, or word processor or spreadsheet, or any tool outside our body, we’re closer to thinking consciously. Sure, our unconscious minds are helping us, but making a list is more willful than just trying to remember what we need at the store.

Writing an essay is more willful than woolgathering in the La-Z-Boy. Authoring a book is far more willful still. Engineering a submarine with a vast team of people is an even more conscious effort. Sure, it involves a collective of unconscious activity too, but a vast amount of documentation must be worked out consciously.

I’ve written before about this idea. See “Thinking Outside Your Head.” That’s where I reviewed different techniques and applications we use to think outside of our heads.

Many people want to deny the recent successes with AI because they want to believe machines can’t do what we do. That humans are special. If you scroll through the images at Midjourney Showcase, it’s almost impossible to deny that some of the images are stunningly beautiful. Some people will claim they are just stolen from human creativity. I don’t think that’s true.

I think AI developers have found ways to train computer programs to act like the human mind. That these programs have stumbled upon the same tricks that the brain evolved. Many great writers and artists often talk about their Muse. I think that’s just a recognition of their unconscious minds at work. What those creative people have learned is how to work consciously with the unconscious.

What some creative people are doing now is consciously working with two unconscious minds – their own and an AI’s. There is still a conscious component, the act of working with tools outside of our heads. The real action is in that vague territory between the unconscious mind and the conscious one.

JWH

Memories Imagine the Darndest Things

by James Wallace Harris, 7/10/23

This essay is about remembering something that never happened and the theories I’ve developed to explain my memory hallucination.

While reading The Kindly Ones by Anthony Powell, the sixth novel in a twelve-novel series called A Dance to the Music of Time, I had the constant feeling I had read it before. Several scenes throughout the novel seemed so familiar that I felt like I had studied them over several readings. I always assumed it was because I had twice watched the four-part miniseries based on the books. I’m sure that accounts for the general sense I’ve read The Kindly Ones before, but not the intense sense of remembering specific scenes. Yesterday I replayed the portion of the miniseries that deals with the most remembered scene and it merely skims over a very long detailed scene in the book.

A Dance to the Music of Time is about Nick Jenkins and his life from the 1920s through the 1960s. It’s not a Roman à clef but Anthony Powell based Nick on his own life. It’s a fictional exploration of memory, so it’s rather ironic that I’m having memory problems reading it.

There were many scenes that felt like I had read them before, but I just assumed they were in the miniseries. However, one scene was intensely vivid and familiar. It was the long scene where Nick Jenkins met Bob Duport years after Nick had had an affair with Duport’s wife Jean. That affair was chronicled in an earlier novel in the series. So those pages recall events that happened in earlier novels, but they also contain much new information that wasn’t in those novels. The most vivid scene involved Nick wanting to avoid the subject of Jean, with Bob slowly getting around to talking about her. Bob started describing the men he knew Jean had affairs with and what they were like. Bob kept making a case that Jean was attracted to men who were assholes and even admitted to being one himself. Nick didn’t know if Bob was intentionally insulting him or accidentally torturing him.

In recent years I have become distrustful of my memory for many reasons. The first is that memories often feel faulty, and that sense of faultiness is the kind we associate with dementia. Now I’m exploring memory delusions.

I’ve read a number of books about the limitations of memory, and I’ve come to assume memories are unreliable. The best book I’ve read on this is Jesus Before the Gospels by Bart D. Ehrman. You wouldn’t think a book about Jesus would be the best place to learn about the limitations of memory, but it’s the best I’ve found.

If the television miniseries wasn’t where I acquired my pre-knowledge of that scene in The Kindly Ones, where did it come from? My first thought was to wonder if I had read the book before. I checked my reading log, a list of books I’ve read since 1983, and it wasn’t there. Now, there have been times when I forgot to record a book I read, but I don’t think that happened in this case. Why would I read the sixth book of a series out of order?

Another possibility is that I listened to it in my sleep. Books 4-6 are combined in one Audible edition, a total of 21 hours. Theoretically, I could have fallen asleep and my unconscious mind heard it. This happens all the time. But I wake up, usually in minutes, and no more than an hour later, and shut off the book. I always scroll back to a scene I’m positive I listened to the day before. I’m almost positive I didn’t let The Kindly Ones play all the way through while I was sleeping. Because of an overactive bladder, the longest stretch I can sleep at night is two hours.

I do have a wild and crazy theory. What if certain human experiences become part of what Jung called our collective unconscious? I know this is New Age woo-woo, but it’s a thought. It might explain why some people think they are reincarnated, or some instances of déjà vu.

I have two less wild theories, ones I think might be closer to the truth. One involves prediction, and the other involves resonating with tiny universal fragments.

The novels in A Dance to the Music of Time feel like an autobiography. The series is not a Roman à clef, but it was inspired by Powell’s own life and the people he knew. I’m thinking the novels create such a detailed sense of Nick Jenkins, especially after six books, that when I got to the scene with Bob, I felt like I was Nick, and the encounter felt so real that it was as if I were remembering my own experience.

The second theory is somewhat like the basis of holograms. Even a tiny fragment of a cut-up hologram will still show the entire image, just fuzzier. This theory suggests that any scene involving a man meeting the husband of the woman he had an affair with will trigger a resonating memory response. I can’t recall any specific similar scene from fiction or real life that matches this, but that doesn’t mean I haven’t encountered one and simply forgotten it.

This hologram fragment theory might explain all déjà vu experiences. Our mind remembers things in generalized tokens, and sometimes we confuse the token from one event with another. If you think about this, you’ll probably recall it happening to you. The other day I asked Susan if I had gotten the mail, and she said, yes, you got a book. I said, no, that was yesterday. I was quite positive. I even convinced Susan that it was true. A few hours later I remembered that yesterday was the 4th of July and there was no mail. I have a “got the mail” token in my brain, and it makes me feel like I’ve always gotten the mail. But it’s not really specific to any single event of getting the mail.

A recent episode of 60 Minutes on Google’s AI called Bard offers another theory. Bard was asked to explain inflation, which it did, and offered five books on the subject with descriptions of the books. When CBS fact-checked that list days later they discovered the books didn’t exist. CBS asked Google about this. They were told this was an AI phenomenon called hallucination. Evidently, AIs will just make up shit whenever they feel like it. Maybe what I experienced was a memory hallucination.

Google’s Bard performed another scary feat. It taught itself to read and write in a language it wasn’t trained on, and without being asked. Maybe my brain just tricked me into thinking I had read this book before?

And there’s one last idea. Last night I dreamed of a variation of an episode of a TV show Susan and I watched last evening. The dream didn’t involve characters from the TV show, but people I know. But the dream put me, and people I know in the same exact situation. Have you ever wondered how our brain can generate so much endless dream content? What if the same mental mechanism that generates dreams also creates our memories and beliefs? What if that mechanism works like Bard?

I’ve always liked Roman à clef fiction, or fiction that is highly biographical. I’ve always been obsessed with memory. I’m ready to finally read Proust, who is the authorial authority on fictionalizing memory. Some people compare Anthony Powell to Proust; others hate that comparison. Proust fans don’t think Powell was heavy-duty enough. I think they each had their own approach to remembering their lives. Powell may have been an extrovert and Proust an introvert, and the differences in their prose might be caused by that rather than by the quality of the writing. But I also think the differences involve the different ways memory works.

JWH

Can AI Read Minds?

by James Wallace Harris, 5/24/23

We’ve known for a long time that we can’t trust what we read. And now with AI-generated art, we must suspect every photo and video we see. I’m even suspicious when the news comes from a scientific journal.

Recent reports claim that AI programs using fMRI (functional magnetic resonance imaging) brain scans can generate text, images, and even videos. In other words, it appears that AI can read our minds. See the paper: “Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity” by Zijiao Chen, Jiaxin Qing, and Juan Helen Zhou. Even though this is a scientific paper, it is readable if you aren’t a scientist; however, it is dense and leaves out a lot I would like to know. I hope they make an episode of NOVA on PBS about it because I would love to see, step by step, how this experiment was done.

The press is claiming this means AI technology can read our minds. I’m thinking this is a magic trick and I want to figure out how it works.

I find it tremendously hard to believe that AI will be able to read our minds. fMRI scans measure blood flow, or blood-oxygen-level-dependent (BOLD) signals, which show up as coarse colored patterns overlaid on an image of the brain.

I just don’t believe there’s enough information in such images to generate an image of what’s in a person’s mind’s eye. However, I thought about how it might be possible to get the results described in this experiment.

These scientists showed photos or videos to people while being scanned by an fMRI machine. They used AI to analyze the scans and then another AI program, Stable Diffusion, to generate pictures and videos that visually interpret what the first AI program said about the scans.

An analogy is recording sound on vinyl records. Sound is captured with a microphone and converted to grooves on a record. Then a record player plays the record, and the grooves are converted back into sound. Looking at a close-up photo of the grooves in a record, it’s hard to imagine they could reproduce The Beatles or Beethoven, but they can.

Should we really believe that we can invent a machine that decodes the coding in our brains? For such a machine to work we have to assume our brains are like analog recording devices and not digital. That seems logical. Okay, I can buy that. What I can’t believe is there’s enough information in the fMRI scans to decode. They seem too crude. When we look at a smiling dog and create an image of it in our head, that mental image can have far more details than the patterns we see in the blood flow in our brains. I don’t think they are like the grooves in a record.

What scientists do is show people a picture of a cat, then take a snapshot of the brain’s blood flow pattern. Then they claim their AI program can look at that pattern and know that it’s a cat. Where’s the Rosetta Stone?

I can believe researchers could take ten objects, create ten fMRI scans, and tell a computer what the subject was looking at when scanned. Then they could test an AI program by taking new scans and having it match the new scans against the old. But in the above paper, they claim they took a series of scans (one every 2 seconds) that could generate a video of the movement of a cat’s head. In other words, the fMRI scans contain enough information for the subject to have different blood flow patterns depending on the position of the cat’s head.

Even if they perfect this technique, it will depend on building a dictionary of fMRI patterns and meanings for every individual. I seriously doubt there’s an engineering standard that works on all humans. Everyone’s brain is different. For an AI to read your mind, your brain would have to be scanned thoroughly and documented. If we showed the same cat photo to a million people, would the fMRI patterns look close for even a subset of those people? My guess is they would look different for everyone.
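To make the per-subject dictionary idea concrete, here is a toy sketch. This is my own illustration, not the paper’s method, and all the “scans” are random stand-in numbers: it matches a new scan against a small set of labeled calibration scans by similarity, and shows why a decoder calibrated on one person’s patterns tells you nothing about another’s.

```python
import numpy as np

rng = np.random.default_rng(0)
labels = ["cat", "dog", "house"]

# Hypothetical calibration sets: each subject produces a DIFFERENT
# blood-flow pattern (here, a random 64-number vector) for the same
# stimulus label.
subject_a = {lab: rng.normal(size=64) for lab in labels}
subject_b = {lab: rng.normal(size=64) for lab in labels}

def decode(scan, calibration):
    """Return the label whose stored pattern best matches the scan."""
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(calibration, key=lambda lab: cosine(scan, calibration[lab]))

# Subject A looks at a cat; the new scan is A's cat pattern plus noise.
scan = subject_a["cat"] + rng.normal(scale=0.1, size=64)

print(decode(scan, subject_a))  # with A's own dictionary: "cat"
print(decode(scan, subject_b))  # with B's dictionary: essentially arbitrary
```

The real paper trains deep networks rather than doing nearest-neighbor lookup, but the dependency is the same: without a calibration dictionary built from that particular brain, there is nothing to match against.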

The scientists who wrote “Cinematic Mindscapes” used a library of publicly available datasets that included 600,000 fMRI scan segments. No matter how many times I reread their methodology, I couldn’t understand what they actually did. Since I’ve worked with Midjourney, I know how hard it is to get an AI art program to generate a specific image, and I’m not sure how they fed Stable Diffusion to get it to generate images. Looking at all their examples reminds me of how I kept trying to get Midjourney to generate what I visualized in my mind, and it always came up with something different, but maybe close.

My conclusion is that AI can’t read minds. But AI can tell the difference between brain scans that were created with known prompts, and get an AI program to generate something similar. Still, I never could figure out what the prompts for Stable Diffusion were.

AI programs train on datasets. If the dataset is built from stimulus photos and fMRI scans, how is that any different from training them on photos and text labels? For example: a photo of a smiling dog paired with the text “smiling dog” versus a photo of a smiling dog paired with an fMRI scan. If you gave Stable Diffusion the text “smiling dog,” it would generate a picture of what it has learned a smiling dog to be from thousands of pictures of dogs. Give a model trained the same way the digital data from an fMRI scan, and it would likewise produce an image of a smiling dog, but one it learned from training, not the one in the subject’s mind.
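As a toy illustration of that argument (my own sketch, nothing from the paper), imagine a “generator” that has learned one generic image per training condition. Whether the conditioning vector came from a text embedding or from a scan, it returns the generic image it learned, never the subject’s private mental picture. All vectors and images here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: each concept gets a conditioning vector (think: a text
# embedding OR an fMRI-scan embedding) and one learned "generic" image,
# here a tiny 2x2 array standing in for thousands of training photos.
train_conditions = {"smiling dog": rng.normal(size=8),
                    "frowning cat": rng.normal(size=8)}
learned_images = {"smiling dog": np.array([[1, 0], [0, 1]]),
                  "frowning cat": np.array([[0, 1], [1, 0]])}

def generate(cond, train_conditions, learned_images):
    """Return the learned generic image for the closest training condition."""
    best = min(train_conditions,
               key=lambda k: float(np.linalg.norm(cond - train_conditions[k])))
    return learned_images[best]

# A subject's scan embedding lands near the "smiling dog" condition...
scan_embedding = train_conditions["smiling dog"] + rng.normal(scale=0.05, size=8)
out = generate(scan_embedding, train_conditions, learned_images)
# ...so `out` is the model's generic learned smiling dog, not the
# subject's particular black pug.
```

Real systems condition a diffusion model on learned embeddings rather than doing a lookup, but the limitation the sketch shows carries over: the output comes from a learned association, not from the mind itself.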

Previous fMRI research has shown they can link BOLD patterns with words.

This is not mind reading in the way we normally imagine it. Isn’t it akin to sign language, with the subject’s blood flow patterns making the signs? Real mind reading would be seeing the same smiling dog the subject saw, not agreeing on a sign for a smiling dog.

What’s happening here is we’re learning that blood flow in the brain makes patterns and to a degree, we can label them with words or pictures or digital data, but it’s still language translation.

If I say smiling dog to you right now, you can picture a smiling dog, but it won’t be the smiling dog I’m picturing. AI art is based on generalizations about language definitions and translations.

Would we say it’s mind reading if I pictured a smiling dog in my head, prompted Midjourney with the text “smiling dog,” and it produced a picture of what it thinks is a smiling dog? Sure, my mental image might be of a black pug and Midjourney might produce a black border collie. Close enough. Impressive even. But isn’t that what we do every day with language? We’ve all built a library of images that go with words and concepts, but they aren’t the same as every other person’s library. Language only gets us approximations.

Real mind reading would be if an AI saw exactly what was in my mind.

JWH

“Created by Humans” vs. “Created by AI”

by James Wallace Harris, 4/22/23

The first video I watched on YouTube this morning was “How to create a children’s storybook using ChatGPT and Midjourney AI for Amazon KDP Start to Finish.” eLibrary1 explains how she creates children’s books using AI tools.

It’s actually quite fascinating. She gets ChatGPT to suggest a series of ideas and then asks ChatGPT to write 500-word versions of the ideas she likes. Then she tests those stories against an AI checker to show how easily they can be detected as AI-created. Then she runs the stories through another program that rewrites them. After that, she checks again and shows that the AI detector now says they are human-written. Then she runs them through a plagiarism detector to make sure they won’t be rejected for that reason. After she’s sure she’s got something good to work with, she submits the stories scene by scene to Midjourney to have it create the artwork.

As I watched this video I thought about how so many people are concerned with seeing “Made in America” tags on the products they buy. I wondered if people in the future will look for “Made by Humans” or “Created by Humans” tags?

My initial reaction was that I wouldn’t want to read a book that eLibrary1 created. I would feel cheated. I expect art and fiction to be produced by artists who suffered for their art. But then I thought, what if the story and pictures were better than what people produce? I’m already seeing artwork produced by AI that blows me away.

Just scroll down for a while in Midjourney’s Community Showcase.

Or look at Latest Works at Art AI Gallery.

The range of what’s possible is tremendous. But then, it’s all been inspired by art created by humans. Is AI art actually creative work? Well, humans don’t create artwork out of nothing either. They have a lifetime of being inspired by other artists.

Let’s ignore this philosophical question for a moment. Let’s go back to the old idea of people “liking what they see” as a test of quality. I love visiting art galleries. I love looking at graphic art in magazines. I love looking at art books. I often buy books for their covers. And I have collected thousands of science fiction magazines, both in physical format and digital scans (but mostly digital). The reason I love them so much is because of their covers.

I’ve got to admit that AI-generated art presses the same exact buttons as art produced by humans. I have not read fiction written by AI writers, but what if I love their stories as much as I like AI art? To be honest, I believe I have a stronger psychological desire for fiction to be human-generated. What happens to that feeling if I read an AI-written novel that I like more than all my favorite human-written novels?

What I’m feeling right now is the desire to tune out the AI world. To retreat into the past and savor the art and fiction created before the 21st century. That I want to become a modern Luddite who rejects AI machinery. But what will I be missing out on?

What if machines can take our imaginations further? Isn’t that why I’ve been a lifelong science fiction reader? Isn’t that why I took psychedelic drugs in the 1960s? Isn’t that why we admire the greatest of human thinkers?

Maybe I want to run away because I’m old and tired. One of the main enjoyments of getting old and putting up with the pains of aging is seeing how events unfold. So, why turn away now?

JWH

Are You in Future Shock Yet?

by James Wallace Harris, 3/24/23

Back in 1970, a nonfiction bestseller, Future Shock by Alvin Toffler, was widely talked about, but it’s little remembered today. With atomic bombs in the 1940s; ICBMs and computers in the 1950s; manned space flight and landing on the Moon in the 1960s; LSD, hippies, the Age of Aquarius, civil rights, gay rights, and feminism; as well as a yearly unfolding of new technologies, it was easy to understand why Toffler suggested the pace of change could lead society into a collective state of shock.

But if we could time travel back to 1970, we could quote Al Jolson to Alvin: “You ain’t seen nothing yet.” Couldn’t we? Toffler never came close to imagining the years we’ve been living through since 1970. And his book was forgotten, but I think his ideas are still valid.

Future shock finally hit me yesterday when I watched the video “‘Sparks of AGI’ – Bombshell GPT-4 Paper: Fully Read w/ 15 Revelations.”

I’ve been playing around with ChatGPT for weeks, and I knew GPT-4 was coming, but I was surprised as hell when it arrived so soon. Over the past few weeks, people have been writing and reporting about using ChatGPT, and the general consensus was that it was impressive, but because it made so many mistakes we shouldn’t get too worried. GPT-4 makes far fewer mistakes. Far fewer. And the remaining ones are being fixed fast.

Watch the video! Read the report. I’ve been waiting years for artificial general intelligence, and this isn’t it. But it’s so damn close that it doesn’t matter. Starting back in the 1950s, when computer scientists first began talking about AI, they kept trying to set a bar that would prove a computer could be called intelligent. An early example was playing chess. But whenever a computer was built to perform one of these feats and passed, computer scientists would say that test wasn’t really a true measure of intelligence and we should try X instead. Well, we’re running out of things to equate with human-level intelligence.

Most people have expected that a human-level intelligent computer would be sentient. I think GPT-4 shows that’s not true. I’m no longer sure any feat of human intelligence needs to be tied to sentience. All the fantastic skills we admire in our species are turning out to be skills a computer can perform.

We thought we’d trump computers with our mental skills, but it might be our physical skills that are harder to give machines. Like I said, watch the video. Computers can now write books, compose music, do mathematics, paint pictures, create movies, analyze medical mysteries, understand legal issues, ponder ethics, etc. Right now AI computers configured as robots have difficulty playing basketball, knitting, changing a diaper, and things like that. But that could change just as fast as things have been changing with cognitive creativity.

I believe most people imagined a world of intelligent machines as robots that look like us — like those we see in the movies. Well, the future never unfolds like we imagine. GPT and its kind are invisible to us, but we can easily interact with them. I don’t think science or science fiction imagined how easy that interaction would be, or how quickly it would be rolled out. Because it’s here now.

I don’t think we ever imagined how distributed AI would become. Almost anything you can think of doing, you can aid your efforts right now by getting advice and help from a GPT-type AI. Sure, there are still problems, but watch the video. There are far fewer problems than last week, and who knows how many fewer there will be next week.

Future shock is all about adapting to change. If you can’t handle the change, you’re suffering from future shock. And that’s the thing about the 1970 Toffler book. Most of us kept adapting to change no matter how fast it came. But AI is going to bring about a big change. Much bigger than the internet or computers or even the industrial revolution.

You can easily tell the difference between the people who will handle this change and those who won’t. Those who will are already using AI. They embraced it immediately. We’ve been embracing pieces of AI for years; a spelling and grammar checker is a form of AI. But this new stuff is a quantum leap over everything that’s come before. Put it to use or get left behind.

Do you know about cargo cults? Whenever an advanced society meets a primitive society, it doesn’t go well for the primitive one. The old cultural divide was between the educated and the uneducated. Expect new divisions. And remember Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” For many people, AI will be magic.

Right now AI can help scholars write books. Soon AI will be able to write better scholarly books than scholars can. Will that mean academics give up writing papers and books? I don’t think so. AIs, as of now, have no desires. Humans will guide them. In the near future, humans will ride jockey on AI horses.

A couple of weeks ago, Clarkesworld Magazine, a science fiction magazine, shut down submissions because it was being flooded with ChatGPT-generated stories. The immediate problem was that the volume of submissions was overwhelming them, but the initial shock, I think, for most people was that the stories would be crap. That the submitted science fiction wouldn’t be creative in a human sense. That those AI-written stories would be a cheat. But what if humans using GPT start producing science fiction stories that are better than stories written by humans alone?

Are you starting to get why I’m asking you if you feel future shock yet? Be sure and watch the video.

Finally, isn’t AI just another example of human intelligence? Maybe when AIs create artificial AIs, we can call them intelligent.

JWH