A Painful Challenge to My Ego

by James Wallace Harris, 2/26/24

I’m hitting a new cognitive barrier that stops me cold. It’s making me doubt myself. I’ve been watching several YouTubers report on the latest news in artificial intelligence, and I’ve been amazed by their ability to understand and summarize a great amount of complex information. I want to understand the same information and summarize it too, but I can’t. Struggling to do so wounds my ego.

This experience is forcing me to contemplate my decaying cognitive abilities. I had a similar shock ten years ago when I retired. I was sixty-two and training a woman in her twenties to take over my job. She blew my mind by absorbing the information I gave her as fast as I could tell her. One reason I chose to retire early was that I couldn’t learn the new programming language, framework, and IDE that our IT department was making standard. That young woman was learning my servers and old programs in a language she didn’t know at a speed that shocked and awed me. Even then, when it was obvious this young woman could think several times faster than I could, my ego figured something was up. I realized that’s what getting old meant.

I feel like a little aquarium fish that keeps bumping into an invisible barrier. My Zen realization is I’ve been put in a smaller tank. I need to map the territory and learn how to live with my new limitations. Of course, my ego still wants to maximize what I can do within those limits.

I remember as my mother got older, my sister and I had to decide when and where she could drive because she wouldn’t limit herself for her own safety. Eventually, my sister and I had to take her car away. I’m starting to realize that I can’t write about certain ideas because I can’t comprehend them. Will I always have the self-awareness to know what I can comprehend and what I can’t?

This makes me think of Joe Biden and Donald Trump. Both are older than I am. Does Biden realize what he’s forgotten? Does Trump even understand he can’t possibly know everything he thinks he knows? Neither guy wants to give up because of their egos.

So, what am I not seeing about myself? I’m reminded of Charlie Gordon in the story “Flowers for Algernon,” when Charlie was in his intellectual decline phase.

Are there tools we could use to measure our own decline? Well, that’s a topic for another essay, but I believe blogging might be one such tool.

JWH

ChatGPT Isn’t an Artificial Intelligence (AI) But an Artificial Unconsciousness (AU)

by James Wallace Harris, 2/12/24

This essay is for anyone who wants to understand themselves and how creativity works. What I’m about to say will make more sense if you’ve played with ChatGPT or have some understanding of recent AI programs in the news. Those programs appear to be amazingly creative by answering ordinary questions, passing tests that lawyers, mathematicians, and doctors take, generating poems and pictures, and even creating music and videos. They often appear to have human intelligence even though they are criticized for making stupid mistakes — but then so do humans.

We generally think of our unconscious minds as mental processes occurring automatically below the surface of our conscious minds, out of our control. We believe our unconscious minds are neural functions that influence thought, feelings, desires, skills, perceptions, and reactions. Personally, I assume feelings, emotions, and desires come from an even deeper place, one based on hormones, and are unrelated to unconscious intelligence.

It occurred to me that ChatGPT and other large language models are analogs for the unconscious mind, and this made me observe my own thoughts more closely. I don’t believe in free will. I don’t even believe I’m writing this essay. The keyword here is “I” and how we use it. If we use “I” to refer to our whole mind and body, then I’m writing the essay. But if we think of the “I” as the observer of reality that comes into being when I’m awake, then probably not. You might object to this strongly because our sense of I-ness feels obviously in full control of the whole shebang.

But if our unconscious minds are like AI programs, what would that mean? Those AI programs train on billions of pieces of data, taking a long time to learn. But then, don’t children do something similar? The AI programs work by being prompted with a question. If you play a game of Wordle, aren’t you prompting your unconscious mind? Could you write a step-by-step flow chart of how you consciously solve a Wordle game? Don’t your hunches just pop into your mind?

If our unconscious minds are like ChatGPT, then we can improve them by feeding them more data and giving them better prompts. Isn’t that what we do when studying and taking tests? Computer scientists are working hard to improve their AI models. They give their models more data and refine their prompts. If they want their model to write computer programs, they train it on more computer languages and programs. If we want to become architects, we train our minds with data related to architecture. (I must wonder about my unconscious mind; it’s been trained on decades of reading science fiction.)

This would also explain why you can’t easily change another person’s mind. Training takes a long time. The unconscious mind doesn’t respond to immediate logic. If you’ve trained your mental model all your life on The Bible or on investing money, it won’t be immediately influenced by new facts regarding science or economics.

We live by the illusion that we’re teaching the “I” function of our mind, the observer, the watcher, but what we’re really doing is training our unconscious mind like computer scientists train their AI models. We might even fool ourselves that free will exists because we believe the “I” is choosing the data and prompts. But is that true? What if the unconscious mind tells the “I” what to study? What to create? If the observer exists separate from intelligence, then we don’t have free will. But how could ChatGPT have free will? Humans created it, deciding on the training data and the prompts. Are our unconscious minds creating artificial unconscious minds? Maybe nothing has free will, and everything is interrelated.

If you’ve ever practiced meditation, you’ll know that you can watch your thoughts, which is proof that the observer is separate from thinking. Twice in my life I’ve lost the ability to use words and language, once in 1970 because of a large dose of LSD, and about a decade ago with a TIA. In both events I observed the world around me without words coming to mind. I just looked at things and acted on conditioned reflexes. That let me experience a state of consciousness with low intelligence, one like animals know. I now wonder if I was cut off from my unconscious mind. If that’s true, it implies language and thoughts come from the unconscious mind, not from what we call conscious awareness, and that the observer and intelligence are separate functions of the mind.

We can get ChatGPT to write an essay for us, and it has no awareness of its actions. We use our senses to create a virtual reality in our head, an umwelt, which gives us a sensation that we’re observing reality and interacting with it, but we’re really interacting with a model of reality. I call this function that observes our model of reality the watcher. But what if our thoughts are separate from this viewer, this watcher?

If we think of large language models as analogs for the unconscious mind, then everything we do in daily life is training for our mental model. Then does the conscious mind stand in for the prompt creator? I’m on the fence about this. Sometimes the unconscious mind generates its own prompts, sometimes prompts are pushed onto us from everyday life, but maybe, just maybe, we occasionally prompt our unconscious mind consciously. Would that be free will?

When I write an essay, I have a brain function that works like ChatGPT. It generates text, but as it comes into my conscious mind, it feels like I, the viewer, created it. That’s an illusion. The watcher takes credit.

Over the past year or two I’ve noticed that my dreams are acquiring the elements of fiction writing. I think that’s because I’ve been working harder at understanding fiction. Like ChatGPT, we’re always training our mental model.

Last night I dreamed a murder mystery involving killing someone with nitrogen. For years I’ve heard about people committing suicide with nitrogen, and then a few weeks ago Alabama executed a man using nitrogen. My wife and I have been watching two episodes of Perry Mason each evening before bed. I think the ChatGPT feature in my brain took all that in and generated that dream.

I have a condition called aphantasia, which means I don’t consciously create mental pictures. However, I do create imagery in dreams, and sometimes when I’m drowsy, imagery and even dream fragments float into my conscious mind. It’s like my unconscious mind is leaking into the conscious mind. I know these images and thoughts aren’t part of conscious thinking. But the watcher can observe them.

If you’ve ever played with the AI program Midjourney that creates artistic images, you know that it often creates weirdness, like three-armed people, or hands with seven fingers. Dreams often have such mistakes.

When AIs produce fictional results, the computer scientists say the AI is hallucinating. If you pay close attention to people, you’ll know we all live by many delusions. I believe programs like ChatGPT mimic humans in more ways than we expected.

I don’t think science is anywhere close to explaining how the brain produces the observer, that sense of I-ness, but science is getting much closer to understanding how intelligence works. Computer scientists say they aren’t there yet, and aim for AGI, or artificial general intelligence. They keep moving the goalposts. What they really want are computers much smarter than humans that don’t make mistakes, that don’t hallucinate. I don’t know if computer scientists care whether computers have awareness like our internal watchers, that sense of I-ness. Sentient computers are something different.

I think what they’ve discovered is intelligence isn’t conscious. If you talk to famous artists, writers, and musicians, they will often talk about their muses. They’ve known for centuries their creativity isn’t conscious.

All this makes me think about changing how I train my model. What if I stopped reading science fiction and only read nonfiction? What if I cut out all forms of fiction including television and movies? Would it change my personality? Would I choose different prompts seeking different forms of output? If I do, wouldn’t that be my unconscious mind prompting me to do so?

This makes me ask: If I watched only Fox News, would I become a Trump supporter? How long would it take? Back in the Sixties there was a catchphrase, “You are what you eat.” Then I learned a computer acronym, GIGO: “Garbage In, Garbage Out.” Could we say free will exists if we control the data we use to train our unconscious minds?

JWH

I’m Too Dumb to Use Artificial Intelligence

by James Wallace Harris, 1/19/24

I haven’t done any programming since I retired. Before I retired, I assumed I’d do programming for fun, but I never found a reason to write a program over the last ten years. Then, this week, I saw a YouTube video about PrivateGPT, which would allow me to train an AI to read my own documents (.pdf, .docx, .txt, .epub). At the time I was researching Philip K. Dick, and I was overwhelmed by the amount of content I was finding about the writer. So, a light bulb went off in my head: why not use AI to help me read and research Philip K. Dick? I really wanted to feed the six volumes of collected letters of PKD to the AI so I could query it.

PrivateGPT is free. All I had to do was install it. I’ve spent days trying to install the dang program. The common wisdom is that Python is the easiest programming language to learn right now. That might be true. But installing a Python program with all its libraries and dependencies is a nightmare. What I quickly learned is that distributing and installing a Python program is an endless dumpster fire. I have Anaconda, Python 3.11, Visual Studio Code, Git, Docker, and Pip installed on three computers (Windows, Mac, and Linux), and I’ve yet to get anything to work consistently. I haven’t even gotten to the part where I’d need the Poetry tool. I can run Python code under plain Python and Anaconda and set up virtual environments on each. But I can’t get VS Code to recognize those virtual environments no matter what I do.

Now, I don’t need VS Code at all, but it’s so nice and universal that I felt I had to get it going. VS Code is so cool looking, and it feels like it could control a jumbo jet. I’ve spent hours trying to get it working with the custom environments Conda created. There’s just some conceptual configuration I’m missing. I’ve tried it on Windows, Mac, and Linux, just in case it’s a messed-up configuration on a particular machine. But all three fail in the same way.
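
If you’re stuck at the same spot, here’s a minimal diagnostic sketch (plain Python, nothing specific to PrivateGPT or Conda) that shows which interpreter VS Code is actually running:

    # Diagnostic sketch: run this from VS Code to see which Python it is using.
    import sys

    print("Interpreter:", sys.executable)  # path to the running Python binary
    print("Environment:", sys.prefix)      # root folder of the active environment

If it prints the base Python instead of the environment Conda created, VS Code never attached to the environment. The “Python: Select Interpreter” command in the Command Palette is where VS Code gets pointed at a specific environment’s Python executable.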

I decided I needed to give up on using VS Code with Conda commands. If I continue, I’ll just use the Anaconda prompt terminal on Windows, or the terminal on Mac or Linux.

However, days of banging my head against a wall just so I could use AI might have taught me something. Whenever I think of creating a program, I think of something that will help me organize my thoughts and research what I read. I might end up spending a year just getting PrivateGPT trained to read and understand articles and dissertations on Philip K. Dick. Maybe it would be easier if I just read and processed the documents myself. I thought an AI would save me time, but it requires learning a whole new specialization. And if I did that, I might just end up becoming a programmer again, rather than an essayist.

This got me thinking about a minimalistic programming paradigm. This was partly inspired by seeing the video “The Unreasonable Effectiveness of Plain Text.”

Basically, this video advocates doing everything in plain text, using the Markdown format. That’s the default format of Obsidian, a note-taking program.

It might save me a lot of time to just read the six volumes of PKD’s letters and take notes, rather than trying to teach a computer how to read those volumes and understand my queries. I’m not even sure I could train PrivateGPT to become a literary researcher.

Visual Studio Code is loved because it does so much for the programmer. It’s full of artificial intelligence. And more AI is being added every day. Plus, it’s supposed to work with other brilliant programming tools. But using those tools and getting them to cooperate with each other is befuddling my brain.

This frustrating week has shown me I’m not smart enough to use smart tools. It reminds me of a classic science fiction short story by Poul Anderson, “The Man Who Came Early.” It’s about a 20th-century man who is thrown back in time to the Vikings, around the year 1000 AD. He thinks he will be useful to the people of that time because he can invent all kinds of marvels. What he learns is that he doesn’t even know how to make the tools to make the tools that made the tools he was used to in the 20th century.

I can use a basic text editor and compiler, but my aging brain just can’t handle the more advanced modern programming tools, especially if they’re full of AI.

I need to solve my data processing needs with basic tools. But I also realized something else. My real goal was to process information about Philip K. Dick and write a summarizing essay. Even if I took a year and wrote an AI essay writing program, it would only teach me a whole lot about programming, and not about Philip K. Dick or writing essays.
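
Here’s the kind of basic tool I mean, sketched in a dozen lines of plain Python with nothing to install. It searches a folder of plain-text notes for a phrase. (The folder name and the phrase are invented examples.)

    # A sketch of the "basic tools" approach: search plain-text notes for a phrase.
    # No AI, no dependencies beyond the standard library.
    from pathlib import Path

    def search_notes(folder, phrase):
        """Print every line in every .txt or .md file that mentions the phrase."""
        for path in sorted(Path(folder).rglob("*")):
            if path.suffix not in (".txt", ".md"):
                continue
            for number, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if phrase.lower() in line.lower():
                    print(f"{path.name}:{number}: {line.strip()}")

    search_notes("pkd-notes", "Nicole Thibodeaux")

A script like this won’t understand a question the way PrivateGPT promises to, but it runs today, and I can comprehend every line of it.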

What I really want is for me to be more intelligent, not my computer.

JWH

Reading Comprehension: Books vs. Audiobooks

by James Wallace Harris, 1/3/24

At 72, I’m still learning how to read.

I recently finished the audiobook of The Simulacra by Philip K. Dick and started to write a review for my science fiction blog. That’s when I realized I needed to read the book with my eyes before I could write a proper review. The Simulacra was a complex novel involving several plot threads and dozens of named characters. (Read the plot summary at Wikipedia. Get the book at Amazon.)

From my audiobook experience, I found the book compelling and fun, and I was always anxious to get back to listening to the story. I was never confused by what was going on, but when I tried to summarize the novel for my review, I discovered I couldn’t recall all the details I needed to make a coherent description of the story. There were just too many science-fictional concepts. Nor could I describe all the plot threads without researching them.

I won’t describe the book in detail, I’ll do that in my review, but for now: The Simulacra is about a post-apocalyptic world where China attacked America with atomic missiles in 1980, and the U.S. government and Germany combined to form a totalitarian regime called The United States of Europe and America (USEA). It appears to be run by a captivating 23-year-old first lady named Nicole Thibodeaux. However, she has been married to five presidents and always remains young. Since this book was written in the summer of 1963, I assume Dick was inspired by Jackie Kennedy, because Nicole spends most of her time charming people, decorating the White House and gardens, and putting on nightly cultural events. But Nicole is also ruthless enough to have people summarily executed, evidently wielding unlimited power. She has access to time travel, no less, and one subplot involves her negotiating with Nazis to change the course of WWII. Other subplots involve an insane psychic pianist Nicole wants to play at the White House, the outlawing of psychiatry pushed by the pharmaceutical industry, what happens to the last legal psychiatrist, a pair of ordinary guys with a jug band that plays classical music who want to perform at the White House, a trio of sound engineers trying to chase down the psychic pianist to record him, and a small company that hopes to get the contract to construct the next president. This long paragraph barely scratches the surface of the whole novel.

My failure to completely understand the novel from listening to the audiobook was partly due to aging memory and partly due to the complexity of Dick’s prose. I could have hashed out several thousand words describing what I remembered, although it would have been a bundle of vague impressions. But what PKD was trying to do remained elusive from just listening to the audiobook.

Audiobooks are bad for remembering exact details, which I knew, but that was painfully revealed when I tried to read the novel and take notes. I called up The Simulacra in the Kindle app on the left side of my PC’s screen and launched Obsidian, a note-taking program, on the right side. I started reading The Simulacra again, but with my eyes. After two days, getting to the 29% position in the Kindle edition, I had twenty-eight names, twenty-six plot points, several lists of other details, and several quotes in my notes. I figure there are three to five main plot threads, each involving three or more characters.

More than that, I realized Philip K. Dick had riffed on hundreds of ideas. As I read them, I remembered them, but while listening I had not put most of them within the context of the story. It wasn’t until my second reading that I saw all these hundreds of creative speculations as part of one jigsaw puzzle picture. And I’m not talking about the characters and plots. I’m talking about worldbuilding.

Rereading with my eyes allowed me to stop and ponder. Rereading allowed me to remember the bigger picture. However, listening to the audiobook let me enjoy the story more. The narrator, Peter Berkrot, did voices for each of the characters, and acted out their personalities. Listening to the novel, it felt like I was listening to an old-time radio drama where many actors performed a story.

At one point I got too tired to read and went to bed. But before I fell asleep, I listened to the part I had just read. Berkrot expressed emotions I had not picked up while reading with my eyes, but recalling the scenes made me realize that Dick had put them there. In other words, Berkrot had found aspects of the text I missed and was pointing them out with sound.

In the ten years since I retired, I’ve been learning the value of rereading books. In fact, I now feel reading a story just once is unfair to the author. It takes two or more readings to see the author’s vision. Reading a work of fiction just once provides one layer of understanding. It’s when we see multiple layers within a work that we start to truly understand it.

Switching back and forth between reading with my eyes and reading with my ears reveals both methods have their advantages. If I read once with one sense organ and reread with another, the two combine to create reading synergy.

For most of my life, I’ve been concerned with reading more books, but the wisdom I’m gaining from getting old is showing me that both speed reading and reading lots of books are distractions from deep reading.

Right now, I’d recommend:

  1. Listen to an audiobook for the first reading to get the big picture.
  2. Reread with a physical book or ebook to get the details. Read slowly and stop often to ponder.
  3. Write a review to make deeper sense of a book. Putting things into words pushes us to make sense of things.
  4. Read reviews and scholarly articles to get other perspectives.
  5. Reread the book again to bring it all together.

This is what I’m working on with The Simulacra by Philip K. Dick. It’s not considered one of Dick’s better works, but I’m trying to discover if there is more to the novel than its current reputation.

JWH

Is Ethical Capitalism Even Possible?

by James Wallace Harris, 10/20/23

This month, several of my friends have separately expressed doubt about the future. I don’t hold much hope either. Our current world civilization seems to be falling apart. Capitalism is consuming the planet, but capitalism is the only economic system that creates enough jobs to end poverty. The only alternative to free market capitalism I can imagine is adapting capitalism into an ethical system. So, I’ve been keeping my eyes open for signs of emerging ethical capitalism.

Here’s one: “The Workers Behind AI Rarely See Its Rewards. This Indian Startup Wants to Fix That” from Time Magazine (8/14/23). The article describes how AI startups need vast amounts of sample data from other languages for their large language models. In India, many data companies are exploiting poor people for their unique language data and keeping the profit, but one company, Karya, is giving the poor people they employ a larger share of the profits. This helps lift them out of poverty.

Capitalism has two dangerous side effects. It destroys the environment and creates inequality. For capitalism to become ethical it will need to be environmentally friendly, or at least neutral, and it will need to be more equitable. If we want to have hope for the future, we need to see more signs of that happening.

Right now, profits drive capitalism. Profits are used to expand a corporation’s ability to grow profits, and to make management and investors rich. Labor and environmental controls are seen as expenses that reduce profits. For a corporation to be ethical it will have to have a neutral or positive impact on the environment, and it will need to share more of its profits with labor.

Since the pandemic, hourly wages have been going up, and so has inflation. If capitalism becomes more ethical, costs for environmentalism and labor will go up, so ethical capitalism will be inflationary. Some people have gotten extraordinarily rich by making things cheap, but that has also shifted labor and environmental costs away from corporations onto the government and the public. The price at the store does not reflect the actual cost of making what you buy. You pay the difference in taxes.

For ethical capitalism to come about things will need to be sold for what they cost to make. That will involve getting rid of governmental and corporate corruption. It will involve political change. And it will be inflationary until the new system stabilizes.

My guess is ethical capitalism will never come about. If I were writing a science fiction novel that envisioned life in the 2060s it would be very bleak. Life in America will be like what we see in failed states today. Back in the 1960s we often heard of the domino theory regarding communism. Failed states are falling like dominoes now. Environmental catastrophes, political unrest, dwindling natural resources, and viral inequality will homogenize our current world civilization. Either we work together to make it something good, or we’ll all just tear everything apart.

Civilization is something we should shape by conscious design, not something that emerges as a byproduct of capitalistic greed.

We have all the knowledge we need to fix our problems, but we lack the self-control to apply it. I have some friends who think I’m a dope for even holding out a smidgen of hope. Maybe my belief that we could theoretically solve our problems is Pollyannish.

I have two theories that support that sliver of hope. One theory says humans have been psychologically the same for two hundred thousand years. In other words, our habits and passions don’t change. The other theory says we create cultures, languages, technologies, and systems that can organize us into diverse kinds of social structures that control our behavior.

We could choose better systems to manage ourselves. However, we always vote out of greed and self-interest. We need to vote for what preserves us all.

In other words, we don’t change on the inside, but we do change how we live on the outside. My sliver of hope is we’ll make laws and invent technology that will create a society based on ethical capitalism and we’ll adapt our personalities to it.

I know that’s a long shot, but it’s the only one I have.

I’m working to develop a new habit of reading one substantial article a day and breaking my bad habit of consuming dozens of useless tidbits of data that catch my eye as clickbait. In other words, one healthy meal of wisdom versus snacking all day on junk ideas. Wisdom doesn’t come packaged like cookies or chips.

JWH