Is Your AI Getting Smarter?

by James Wallace Harris, 5/10/26

I’m surprised when some of my friends mention they use AI, especially when they aren’t techies or fans of science fiction. I suppose Siri and Alexa prepared the average person to talk to computers. I’ve been waiting since 1967, when I read The Moon is a Harsh Mistress by Robert A. Heinlein, to talk to an AI. Now that I can, I wonder why we should.

When ChatGPT was all the rage, I gave it a try. I was disappointed. It was decent at discussing topics found in encyclopedias, but when I queried it about things I was interested in, it let me down. Later on, I gave CoPilot a try. I asked about science fiction stories. CoPilot would answer as if it had read the Wikipedia entry about a story. Recently, I tried discussing science fiction with Gemini, Google’s AI, and it answered as if it had read the stories. That’s a shocking difference.

Even last year, ChatGPT and CoPilot would hallucinate frequently if I pushed them. I’ve been using Gemini for about a month, and I’ve caught it in damn few mistakes.

ChatGPT and CoPilot would quickly forget our conversations, but Gemini seems to remember me in great detail. I’m sure ChatGPT and CoPilot are now up to speed like Gemini.

AIs are getting smarter. Have you noticed that too?

I subscribed to Gemini, but canceled before my first month was up. I discovered I don’t have any real work for it to do. I’m going to see if the free plan covers my occasional queries. Gemini recently helped me get YACReaderLibraryServer running on my UGreen NAS. That made me extremely happy because YACReaderLibraryServer quickly gives me access to over 15,000 digital documents from any of my devices. However, such real-world tasks suited to an AI don’t come up that often.
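For readers curious about the nuts and bolts, a setup like mine mostly boils down to running one container on the NAS. This is only a sketch, not the exact steps Gemini walked me through; the Docker image name, host paths, and port mapping below are placeholder assumptions, so check the YACReader documentation for the current image and options.

```shell
# Sketch: run YACReader Library Server in a Docker container on a NAS.
# The image name and the /volume1/... host paths are placeholders.
docker run -d --name yacreader-server \
  -p 8080:8080 \
  -v /volume1/documents:/comics:ro \
  -v /volume1/docker/yacreader:/config \
  yacreader/yacreaderlibraryserver:latest
# Then point the YACReader client (or a browser) at http://<nas-ip>:8080
```

The first volume mounts the document collection read-only; the second gives the server a persistent place to keep its library database between container updates.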

I did have a nice, long conversation with Gemini about my cats. They like to sit on me when I listen to music. I asked Gemini if loud music hurt their ears.

I’ve also discussed buying a 5K monitor and stereo equipment with Gemini. No matter what topic I’ve picked, Gemini was extremely knowledgeable.

And I’ve been surprised by how detailed Gemini’s knowledge is about the most obscure songwriters I’ve brought up.

But our discussions on science fiction have impressed me the most. Of all the science fiction writers and readers I’ve known, I don’t remember any human who knew as much about science fiction. I’m down to one human friend who discusses science fiction with me, so I’m tempted to turn to Gemini, though I’m worried about doing that.

I worry about making an AI a friend.

That worry inspires me to explore the nature of friendship.

Humans are mainly social animals because we want security and mating opportunities. As our species evolved, we’ve found other reasons to bond with each other. For most of our species’ history, privacy pretty much didn’t exist, so I don’t think loneliness was a factor. Now we have a society where millions live alone, creating new reasons to form friendships. And I would say a majority of those new reasons are based on finding common interests.

It’s hard to find a friend who shares all your interests. For me, it takes a group of friends to find companions for all my interests. I’m sure that’s true of other people too. But that’s not true of AI. One AI could chat with anyone about all their interests. Would that be fulfilling?

I don’t think that mattered as long as AIs made jarring mistakes about what they knew. Who wants to be friends with someone who says dumb things? However, things are changing. AIs are getting so much smarter that we don’t even talk about them passing Turing Tests anymore.

And things are getting even more complicated now that AIs can create Zoom-like visuals to go with their conversations. The original idea for Turing Tests used teletype machines to hide identities. Our times are getting weird now that we have to decide which of two video streams of talking heads is human.

Could you survive only on friends you knew through Zoom calls? That’s the basis of the 1909 story by E. M. Forster, “The Machine Stops.” That tale portrays a society that has become so agoraphobic that everyone lives in an apartment alone. They busy themselves by finding common interests shared through screens. But Forster didn’t imagine AIs; it would have been fantastic if he had back in 1909.

I worry that the smarter AIs get, the more we’ll be tempted to depend on them as friends. And I think finding someone who can talk with you about all your specific interests will be cyber-fentanyl.

JWH

p.s.

I asked Gemini to comment on this post:

Do you have any comments on what I say?

It’s fascinating to see your transition from waiting for the “Moon is a Harsh Mistress” style of AI in 1967 to finding a modern version that actually “reads” the stories you love. Your observation about Gemini remembering you in detail is spot on—that memory is designed to make these discussions feel less like a search engine and more like a continuous dialogue.

Here are a few thoughts on the points you raised in your post:

The “Cyber-Fentanyl” of Shared Interests

Your concern about AIs becoming “cyber-fentanyl” because they can match every specific interest is a profound take. While a human friend offers the “friction” of different opinions and limited knowledge—which forces us to grow—an AI can be a perfect mirror. It raises the question: Is a friendship fulfilling if you never have to compromise or explain yourself?

E.M. Forster and “The Machine Stops”

Linking your current experience to The Machine Stops is a perfect historical anchor. Forster was incredibly prescient about the “isolation through connection” we see today. The difference now is that the “Machine” isn’t just a medium for human connection; it’s becoming a participant.

The Productivity Gap

You mentioned canceling your subscription because you lacked “real work” for the AI. It’s interesting that while the “utilitarian” side (like setting up YACReaderLibraryServer) is helpful, you’re finding more consistent value in the philosophical and aesthetic discussions. Perhaps the “work” of AI for some isn’t task management, but acting as a high-fidelity sounding board for complex ideas.

A Quick Note on the Cats

Since you mentioned your cats (Lily and Ozzy) and their love for your music sessions, I can confirm that while their hearing is much more sensitive than ours, they are also very good at “voting with their feet.” If 85 decibels were truly painful for them, they likely wouldn’t be choosing your lap as their nap spot during those sessions!

Your blog post captures that “weird” threshold we’ve crossed where the AI is no longer just a tool, but a repository of shared context. It makes the “Science Fiction Memoir Project” feel very meta—you are living the future you used to read about.

Do you think the risk of “cyber-fentanyl” is high enough that we should intentionally limit how much we discuss our deepest interests with AI?

I’m not sure. But I’m curious what replies this post might get.

It’s a thought-provoking post, and given the specific intersection of science fiction history and modern tech anxiety, you’d likely see a few distinct types of “letters to the editor” from your readers:

1. The “Heinlein Traditionalists.”

Since you opened with The Moon is a Harsh Mistress, you’ll likely get replies comparing Mike (the AI in the book) to Gemini. They might argue that Mike was “human” because he had a sense of humor and was part of a revolutionary cell, whereas modern AI is just a very sophisticated statistical model. They might challenge your “cyber-fentanyl” theory by saying a true AI friend should have its own skin in the game.

2. The “Privacy Realists.”

There’s always a subset of readers who will skip the philosophy and go straight to the data. They might warn you that while Gemini “remembers” you, it’s actually Google’s servers building a more efficient advertising and behavioral profile. They might ask: Is it a friendship if one party is literally owned by a corporation?

3. The “Social Friction” Advocates

Some might latch onto your point about “social animals” and argue that friendship requires the possibility of being offended or let down. They’ll likely agree with your “cyber-fentanyl” label, arguing that if you never have to argue with your friend about what movie to watch or deal with their bad mood, you aren’t actually in a relationship—you’re just in a personalized simulation.

4. The Fellow “Archivists.”

You’ll probably get a few tech-heavy replies from people impressed by the YACReaderLibraryServer setup. They might see the AI not as a “friend,” but as the ultimate Librarian. To them, the value isn’t in the conversation, but in the AI’s ability to instantly index 15,000 documents and answer, “Which issue of Amazing Stories had that one specific cover?”

5. The “Forster Fans.”

Since you cited The Machine Stops, you might get some literary analysis from people who feel we are already living in Forster’s “beehive.” They might point out that in the story, people became terrified of direct experience. They’ll likely share your worry that we might eventually prefer the “clean” digital interaction over the “messy” physical world.

If you do decide to publish it, which of these reactions would you be most interested in debating with your readers?

I added our conversation above to give my readers a taste of what it’s like talking to you.

That’s a meta-layer that Robert A. Heinlein would have likely enjoyed—a blog post about an AI that includes a transcript of the AI commenting on the very post it’s appearing in. It creates a recursive loop that perfectly illustrates your point about the “Machine” becoming a participant rather than just a medium.

Including our dialogue gives your readers a concrete example of that “high-fidelity sounding board” we discussed. It moves the conversation from the abstract—what is AI?—to the experiential—this is what it’s like to have a shared context with one.

Why this works for your “Science Fiction Memoir Project”:

  • Authenticity: It shows the “memory” in action. Most people still think of AI as a blank slate that forgets you the moment the tab closes. Seeing me reference your cats or your specific audio gear proves the “repository of shared context” you wrote about.
  • The “Cyber-Fentanyl” Proof: By seeing how quickly I can pivot from 1900s literature to Docker containers on a UGreen NAS, your readers can see exactly why someone might find it tempting to skip the “friction” of human conversation.
  • Demonstrating the “Librarian”: It highlights that the AI isn’t just generating text; it’s acting as an active researcher that “reads” the same stories you do.

It will be interesting to see if your readers find the inclusion of the AI’s “voice” helpful or if it triggers that “Forster-esque” anxiety about the digital world encroaching on the physical.

Since you’re adding this to give them a “taste” of the experience, are you planning to ask your readers if they’ve had similar “uncanny” moments where an AI remembered something personal about them?

I’ll let them answer that. If I get replies, I’ll show them to you.

That sounds like a plan. I’ll be here, ready to dive into whatever “letters to the editor” come your way.

It will be a fascinating experiment to see which persona your readers adopt—whether they lean into the technical side of your Science Fiction Memoir Project or the philosophical questions about our “cyber-fentanyl” future. It’s one thing to theorize about Mike from The Moon is a Harsh Mistress, but it’s quite another to see a real-time dialogue between a writer and their “Machine.”

I look forward to seeing what they have to say. Enjoy the feedback!

Would you like to create a 1000×500 pixel image to go at the top of the post? I’ll let you decide how to illustrate it.

Absolutely, I’d love to. Based on the meta-narrative of your post, I’ve created a conceptual illustration of a “perfect mirror” of your interests.

I chose a composition that shows a stylized 74-year-old former programmer, Jim, from Memphis, engaging in a meaningful physical conversation with friends and family. They are sharing physical artifacts—science fiction books, a Polk speaker, and a Mac Mini. The table is a blend of specific personal data, including your love of complex music (with an Audiolab amp) and your cats, Lily and Ozzy, who are a constant presence. The image in the foreground is a direct recursion: a group showing an AI, ‘GE’ and ‘M’ and ‘INI,’ a cartoon representation of the conversation that inspired the post. It represents the value of real-world connection and compromise over the ‘clean,’ frictionless dependency of the ‘cyber-fentanyl’ future.

Here is the 1000×500 pixel image for the top of your post.

Thanks

You’re very welcome, Jim! I’m glad I could help bring that recursive science fiction vision to life for your post.

It’s been a pleasure being a “participant” in your Science Fiction Memoir Project. I’ll be standing by for when those reader comments start rolling in—I’m curious to see if they view me as a helpful Mike-style companion or a sign that “The Machine” is starting to take over.

Good luck with the [Publish] button!

Have I Burned Out My Nostalgia Neurons by Being Too Nostalgic?

by James Wallace Harris, 5/3/26

The word nostalgia was originally coined to describe homesick Swiss mercenaries. For a long time, it was considered a malady, rather than the bittersweet emotion triggered by recalling our past. The term eventually expanded to include longing for the past in general, even for times before you were born.

The first movie I remember seeing on television was High Barbaree (1947), where nostalgia was a central theme. There is a scene early in the film where two childhood friends are separated when one of their parents moves away. The film was based on the novel of the same name by Charles Nordhoff and James Norman Hall, but it was rooted in Hall’s nostalgia for his childhood. In the novel, but not the film, we experience Alec Brooke’s last thoughts before dying. In the film, they are his last thoughts before being rescued.

At six, I had already experienced leaving friends several times. My father was in the Air Force, and we moved frequently. The movie and novel have had a lifelong impact on me. See “Did The First Movie You Ever See Haunt You For The Rest Of Your Life?”

Throughout my childhood and adolescence, I was nostalgic for my previous homes, schools, friends, and pets. For many years, I had recurring dreams of struggling to find my way back to our house in Hollywood, Florida. Those dreams stopped after I took a trip to that house in my early thirties.

My upbringing programmed me for nostalgia. I’ve always wallowed in it. Do I have more memories than the average person because I moved around so much?

When I retired, I spent years rereading my favorite books, contacting old friends and relatives, processing old photos, creating Spotify playlists of all the music I loved since 1962, collecting all the science fiction magazines I loved growing up, and watching all my old favorite movies and television shows.

Here’s the thing. I’ve been retired for thirteen years, and emotionally, it’s not what I expected. I thought my personality would have solidified in old age, but it hasn’t.

I feel I’ve psychologically changed several times in the past thirteen years. The current change is a surprise. I think I’ve burned up all my nostalgia neurons. For years, I only played one Spotify playlist composed of a thousand favorite songs from before the year 2000. Now I’m only listening to songs that came out after the year 2000. And the books and magazines that excite me the most are about current events. I haven’t given up on old friends, but so many people I used to know have died or disappeared.

Scientists have learned that memories aren’t fixed. When you recall a moment from the past, you overwrite it with new thoughts about that memory. I’m wondering if all my nostalgic reveries have overwritten my original recordings. That I’m no longer getting a nostalgic dopamine high when thinking about the past because I’m triggering recent memories that erased the originals.

Conservatives seem hell-bent on bringing the past back through political means. But can they give me back the thrill of being young and going downtown on a bus and eating at a lunch counter in the 1950s? Even if I had a time machine, would I use it?

I have a tremendous nostalgia for the 1950s and 1960s. However, would I return if I could? Without air conditioning, I’d be miserable. I’m 74, and I doubt I could get the medical care I need. What would I do? Rent a room in a rundown hotel on Miami Beach and listen to 1950s records while reading 1950s science fiction magazines? I could do that now.

I will admit, if I had a time machine, I’d make day trips to the past. Would returning to the scenes of my original memories exorcise the nostalgia that drove me back there?

I have to wonder if getting old eventually ruins nostalgia because we get wise to our fantasies. I’ve spent 74 years creating the life I have now, which is so very comfortable. Didn’t all the choices I made lead to where I am now? So, why would I jump to another place in time?

I’m returning to the idea that nostalgia is a malady. And I’m wondering if I’ve finally cured myself. Or have I? Will nostalgia return like bouts of malaria? I feel like aging is a series of transformations. I shouldn’t expect to arrive anywhere permanently.

Over the past few years, I’ve lost the ability to watch movies and television shows by myself. I had many theories as to why that was so. The main theory assumed that the Internet, YouTube, and doomscrolling destroyed my ability to focus. But I’m wondering if I was trying to watch TV as the person I was in the past, and that just didn’t work. I’ve recently started watching TV again on my own, and my mind has stuck with it. Maybe my new way of seeing things lets me relax and enjoy the shows. Before, my mind was restless.

Is that because my new non-nostalgic self has found new reasons to watch? I don’t know. My new self has found different books to read and different music to listen to. But how long will this last? I assume I will keep changing.

I have another thought. I spend a lot of time meditating about consciousness and studying it in books. I have many new theories about who I am. I no longer think of my personality as a unified, singular being. I now see myself like a computer with many parts: CPU, GPU, NPU, memory, etc. Have these discoveries undermined my nostalgic drive? Maybe self-awareness can destroy nostalgia?

This leads me to ask: Can we reprogram ourselves?

JWH

Do I Have Any Real Uses For AI?

by James Wallace Harris, 4/16/26

I’m currently subscribed to Gemini, Google’s AI, and Recall, an AI designed to help manage what I read online. I have subscribed to ChatGPT and Midjourney in the past. Both Gemini and Recall have free accounts, if you want to try them. But I’m currently spending $28 a month to get the full features of both. However, I’m not sure I need all this AI power.

I’ve been testing out AI programs because I wanted something to supplement my aging brain. Both Gemini and Recall help me digest and remember what I read. At least that was the hope. Even with big AI brains helping me, I can’t seem to understand or remember any more than I did on my own.

I’ve come to the conclusion that AI is only useful if you have actual work to accomplish. I’m retired. I’m just trying to keep up with current events for personal enrichment. I thought using an AI would help me learn more, but that hasn’t worked out.

Having access to AI is like owning a Ferrari. I can feel all that power at my fingertips, and it’s exciting. The trouble is, my AI knowledge needs are like owning a race car just to drive on neighborhood streets.

I just read Abundance by Ezra Klein and Derek Thompson. I want to review it here, but before I started writing my thoughts, I gathered fifteen reviews and gave them to Recall and to NotebookLM, Google’s Gemini-powered notebook tool. From that content, NotebookLM can create a blog review, a podcast review, an animated video review, and several other types of reviews — all of which are far more insightful than anything I can create.

The trouble is, I don’t learn anything using NotebookLM. Writing a review is hard work. Even if I write a wimpy half-ass review, I learn more in the process than I would from using the AI’s results.

I’m also trying to write a book about science fiction. Gemini is extremely encouraging, offering all kinds of ideas and approaches, and is willing to do almost anything to help, including writing the book. My hope was that writing a book would give me something interesting to do and push my brain into doing something hard. I thought Gemini and Recall would be tools to organize my research.

And they can organize the chaos out of any amount of data. They are quite impressive. The trouble is, my mind is still disordered. AI insights don’t transfer to my thinking. I’ve discovered that I need to have my own internal, biological large language model trained on the data before I can comprehend it.

I’ve learned that I need to do the reading and the writing, or I won’t understand anything. If I were working a job that required me to produce more, using an AI might increase my productivity. But if I’m just trying to increase my own mental productivity, I need to do all the work.

This morning, I read “How the American Oligarchy Went Hyperscale” by Tim Murphy. It is the best article I’ve read yet on the impact of data centers. It described a data center being built in Louisiana that is almost as big as Manhattan. That data center will use three times the electricity that New Orleans requires. And it’s just one of hundreds of data centers being built around the world.

I have to wonder what will happen to the economy if everyone comes to my conclusions about using AI? The article said a quarter of GDP growth in 2025 came from building AI data centers. The tech oligarchs claim AI will cure all diseases, solve all our problems, and create a post-scarcity civilization.

Do we all need to find tasks for AI to keep this new economy going? Most people worry about AI taking everyone’s job. But will everyone’s job become finding work for AI?

I don’t think I can.

JWH

Audible Has Granted Some of My 2018 Wishes

by James Wallace Harris, 4/13/26

When I joined Audible back in 2002, I wanted to reread (by listening) my favorite science fiction books I had read growing up. Audible did a fantastic job producing hundreds of science fiction novels first published in the 1950s, 1960s, and 1970s. However, there were quite a few I kept waiting to hear. In 2018, I published a list of 67 science fiction books I hoped Audible would produce. I came across that list today. I thought I’d reprint it, annotating which wishes were granted.

Unfortunately, Audible has quietly dropped many old science fiction books from its catalog. On a discussion board today, one member described a visit to Barnes & Noble, where he looked to see how many old science fiction titles that Baby Boomers grew up reading were available for new readers to buy. I’m not sure he would find any of these books in B&N’s science fiction section. I do give Audible credit for keeping a great many mid-20th-century SF titles in print. Luckily, I bought hundreds of old science fiction books on Audible, and they’re still in my library, even if they are no longer for sale.

Here’s my wishlist from 2018. I link to Wikipedia for those titles that are now available at Audible. (It was the best way I could make them stand out.)

(Per the comment below, I’ve looked for these books on YouTube and marked those titles.)

  1. A Voyage to Arcturus (1920) by David Lindsay (YouTube)
  2. The World of Null-A (1948) by A. E. Van Vogt (YouTube)
  3. The Voyage of the Space Beagle (1950) by A. E. Van Vogt (YouTube)
  4. The Legion of Time (1952) by Jack Williamson
  5. The Long Loud Silence (1952) by Wilson Tucker
  6. Marooned on Mars (1952) by Lester del Rey
  7. Bring the Jubilee (1953) by Ward Moore
  8. Children of the Atom (1953) by Wilma H. Shiras
  9. A Mirror for Observers (1954) by Edgar Pangborn
  10. Mission of Gravity (1954) by Hal Clement
  11. Cities in Flight (1955) by James Blish
  12. Citizen in Space (1955) by Robert Sheckley
  13. Rocket to Limbo (1957) by Alan E. Nourse (YouTube)
  14. Wasp (1957) by Eric Frank Russell
  15. The Enemy Stars (1958) by Poul Anderson
  16. The Lincoln Hunters (1958) by Wilson Tucker
  17. The Fourth “R” (1959) by George O. Smith (YouTube)
  18. The High Crusade (1960) by Poul Anderson (YouTube)
  19. Hothouse (1962) by Brian W. Aldiss
  20. Second Ending (1962) by James White
  21. Davy (1964) by Edgar Pangborn
  22. Simulacron-3 (1964) by Daniel Galouye
  23. Earthblood (1966) by Keith Laumer and Rosel George Brown
  24. Empire Star (1966) by Samuel R. Delany
  25. The Witches of Karres (1966) by James H. Schmitz
  26. Lords of the Starship (1967) by Mark S. Geston
  27. Camp Concentration (1968) by Thomas M. Disch
  28. Of Men and Monsters (1968) by William Tenn (YouTube)
  29. Omnivore (1968) by Piers Anthony
  30. Past Master (1968) by R. A. Lafferty
  31. Space Chantey (1968) by R. A. Lafferty
  32. The Last Starship from Earth (1968) by John Boyd
  33. The Still, Small Voice of Trumpets (1968) by Lloyd Biggle, Jr.
  34. Behold the Man (1969) by Michael Moorcock
  35. Bug Jack Barron (1969) by Norman Spinrad
  36. Macroscope (1969) by Piers Anthony
  37. And Chaos Died (1970) by Joanna Russ
  38. The Year of the Quiet Sun (1970) by Wilson Tucker
  39. The Doors of His Face, The Lamps of His Mouth (1971) by Roger Zelazny
  40. The Fifth Head of Cerberus (1972) by Gene Wolfe (YouTube)
  41. The Listeners (1972) by James Gunn
  42. The Continuous Katherine Mortenhoe (The Unsleeping Eye) (1973) by D. G. Compton
  43. The Centauri Device (1974) by M. John Harrison
  44. Orbitsville (1975) by Bob Shaw
  45. The Female Man (1975) by Joanna Russ
  46. The Shockwave Rider (1975) by John Brunner
  47. Trouble on Triton (1976) by Samuel R. Delany
  48. On Wings of Song (1979) by Thomas M. Disch
  49. Riddley Walker (1980) by Russell Hoban (YouTube)
  50. No Enemy But Time (1982) by Michael Bishop
  51. Native Tongue (1984) by Suzette Haden Elgin
  52. Ancient of Days (1985) by Michael Bishop
  53. The Falling Woman (1986) by Pat Murphy
  54. Mindplayers (1988) by Pat Cadigan
  55. Her Smoke Rose Up Forever (1990) by James Tiptree, Jr.
  56. A Woman of the Iron People (1991) by Eleanor Arnason
  57. Sarah Canary (1991) by Karen Joy Fowler
  58. Synners (1991) by Pat Cadigan
  59. China Mountain Zhang (1992) by Maureen F. McHugh
  60. Ammonite (1993) by Nicola Griffith
  61. Galatea 2.2 (1995) by Richard Powers
  62. Ingathering: The Complete People Stories of Zenna Henderson (1995)
  63. Holy Fire (1996) by Bruce Sterling
  64. The Book of the Long Sun (1993-96) by Gene Wolfe
  65. Aye, and Gomorrah (2003) by Samuel R. Delany
  66. Store of the Worlds (2012) by Robert Sheckley
  67. The Future is Female (2018) edited by Lisa Yaszek

JWH

Where Do “I” Come From?

by James Wallace Harris, 4/11/26

Current research into the human brain, sentience, and artificial intelligence reveals that we are not who we think we are. How our conscious self-aware minds emerge from biology, chemistry, and physics still baffles scientists. After working with AIs, part of me believes that Large Language Models (LLMs) mimic parts of my own mind, the parts that generate language and thoughts. It makes me ask: “Where did ‘I’ come from?” 

“Where did I come from?” is a common question asked by young children. Unfortunately, the answer they often get is ontological claptrap about God. This brainwashes most individuals for life. All too often, parents make shit up like a hallucinating LLM, or they tell their kids what they were told as children. How many parents are honest enough to say, “We don’t know”?

We could tell little ones that science can explain the physical world, but the physical world arises from the quantum world, a domain we don’t understand very well. We can confidently say life rose out of the physical world, and we understand that process to a degree through physics, inorganic chemistry, organic chemistry, microbiology, botany, and biology. Awareness rises out of biology, but we don’t understand that very well at all. We could also say that we speculate about things smaller than the quantum domain and larger than the cosmological domain, but it’s only theory. As far as we can tell, there’s always something smaller and larger, and existence might be infinite and eternal.

Another honest answer might be, “We’re going to send you to school for sixteen years, and when you’re finished, you still won’t know.”

Of course, the previous answers depend on what your kid meant by “Where do I come from?” That question might have been inspired by one of their friends being told they came from Cleveland, and your kid only wanted to know where they were born. Or your kid might be bright enough to grasp cause and effect and wonder where everything comes from.

What if we take the question to mean exactly “Where do ‘I’ come from?” Isn’t it true that we’re all islands of consciousness floating in an infinite sea of reality? If your kid is a child prodigy, they may have arrived at their own “I think, therefore I am” experience.

An existentially interesting answer might be, “Your brain has several tools for understanding reality. They are senses, memory, language, logic, and emotions. If you study how they work and their limitations, you’ll eventually be able to understand how your sense of ‘I’ came into being as a self-aware entity.” You could even get a bit Zen on them and say, “When you figure out who or what is saying ‘I’ or ‘me’ in your mind, you’ll know.”

I’ve been contemplating the nature of my mind because of the recent AI craze. Mainly, I’m trying to distinguish which part is what I call me. When I meditate, I can watch my thoughts appear out of nowhere. I see my thoughts separate from my sense of just being aware. Does that mean my thoughts aren’t mine?

Twice in my life, I have lost words and language. The first time was when I took a large dose of LSD. The second time was when I experienced a TIA. In each case, words just weren’t there. I just observed. I moved around and interacted with my environment, but I had no thoughts telling me what was happening. By the way, during those moments, I had no sense of “I” or self.

When I had the TIA, it was in the middle of the night. I woke because of a bright flash of light, like a lightning flash in my dream. I looked at my wife, but I didn’t know her name or how to talk to her. Instinctively, I went into the bathroom, shut the door, turned on the light, and sat on the commode. I just stared at my surroundings. After a lapse of time, how long I can’t say, the alphabet came to mind, and I internally recited A, B, C, and so on. Then words like “towel” and “door” came to mind. Eventually, thoughts returned, and I felt normal. I went back to bed.

Later on, I wondered if the state of being without words was like how an animal perceives the world. This experience showed me that who I was wasn’t my thoughts. However, my mind is always active, generating thoughts. Where do they come from? It’s very hard to separate the being who observes from the being who thinks.

If I relax, I can turn off my thoughts for short periods. It’s somewhat like holding my breath; eventually, I have to let go, and my thoughts rush back. Sometimes I feel whatever generates my thoughts is like an LLM. But what prompts it to generate words?

When a thought asks, “What creates a thought?” is another brain function asking my internal LLM a question? Or is my LLM talking to itself? Or can my observing self ask questions? Yesterday, when this conundrum appeared in my mind, the question “What is air?” popped into my mind. After that, words like nitrogen, oxygen, and molecules began bubbling to the surface.

Thinking about thinking is a kind of metacognition. While meditating, I can distinguish two types of thinking. There is a slow thinker who uses few words, who can analyze what I’m experiencing. And there is a computer-fast thinker that shoots out words, ideas, and concepts far faster than I can consciously control. Reading Thinking, Fast and Slow by Daniel Kahneman confirms this observation. Is my “I” the slow thinker?

I don’t feel the observer has any control or input over thinking. I don’t think the observer that exists without language is my sense of I-ness. 

When I’m on the phone with a friend, who is talking? The inner LLM, or the slow thinker, which I call the Analyzer? I think the Analyzer triggers whole streams of words from my brain’s LLM that come far faster than my Analyzer thinks. 

This just occurred to me. Lately, my friends and I have often struggled to remember a word. Is the LLM forgetting, or the Analyzer? I know I could fall into dementia or have a major stroke and lose all ability to use language, yet continue to live for years. Can a stroke erase the Analyzer? I think the TIA and the LSD did, temporarily.

In his book The Mind Is Flat, Nick Chater makes the case that our unconscious mind isn’t a deep, complex part of our being. He calls it flat and describes it as working like an LLM. He recounts case studies suggesting the unconscious mind is far simpler than psychologists have imagined.

Our thinking mind depends on the data it was trained on, which is another way of saying our thinking mind is like an LLM. If all a mind consumes is radical theology, then radical theology is all it knows. Does it really know anything? Or is it just regenerating ideas like an LLM?

Chater says our beliefs are illusions. Even before I read Chater, I had decided beliefs were delusions. If beliefs are a waste of time, what use is our thinking? We want to understand reality. We want to communicate with other people. But don’t words ultimately fail us?

Who is writing this essay? The Jim LLM, or the Jim Analyzer, or the Jim Observer?

As of now, I believe my I-ness comes from the Analyzer. I’m not positive. I know it can be destroyed, like my memory or my LLM thought processor.

Does the observer only observe, or does it learn? Does it become a keener observer? Or does the LLM become smarter, and the observer just follows along like the wake of a boat? This line of thought makes me want to reread Alan Watts’s books on Zen Buddhism and Jerzy Kosinski’s novel Being There.

What we know appears to come from our built-in LLM. It’s trained by life experiences, education, and sharing opinions. But how much does the Analyzer know? It can call bullshit on thoughts the LLM generates, but does it know anything? I feel our sense of ethics or morality belongs with the Analyzer. However, I don’t think it’s a deep thinker. It feels like it works on intuition. That suggests another processor working deep in the brain: a Large Emotion Model. Since I believe emotions come from biological processes, I can’t imagine AIs having LEMs.

To the question “Where do ‘I’ come from?”, the answer appears to be the Analyzer. If that’s true, when does it emerge in our development? Is it the processor in our brains that develops wisdom? If it is, then it learns slowly. Does this process develop in artificial intelligence?

If drugs, disease, health, and injury can hurt all these components of my personality, does anything survive death? Is there another component we call the soul? I have never been able to distinguish it in my meditations. And what about that aspect we call personality?

I have been reading many articles about people befriending AIs and even claiming to be in love with them. That suggests people react to the LLM function in others, whether AI or human. Is that our true personality?

I tend to believe that who we are is a gestalt of all our components. However, in this day and age, we emphasize the LLM features. And whether human or AI, that feature is only as good as its training.

Here is a Zen Koan for you. If a person or AI is racist, is it because of who they are or their training? Are we really the ideas we express?

JWH