Do I Have Any Real Uses For AI?

by James Wallace Harris, 4/16/26

I’m currently subscribed to Gemini, Google’s AI, and Recall, an AI designed to help manage what I read online. I have subscribed to ChatGPT and Midjourney in the past. Both Gemini and Recall have free accounts, if you want to try them. But I’m currently spending $28 a month to get the full features of both. However, I’m not sure I need all this AI power.

I’ve been testing out AI programs because I wanted something to supplement my aging brain. Both Gemini and Recall help me digest and remember what I read. At least that was the hope. Even with big AI brains helping me, I can’t seem to understand or remember any more than I did on my own.

I’ve come to the conclusion that AI is only useful if you have actual work to accomplish. I’m retired. I’m just trying to keep up with current events for personal enrichment. I thought using an AI would help me learn more, but that hasn’t worked out.

Having access to AI is like owning a Ferrari. I can feel all that power at my fingertips, and it’s exciting. The trouble is, using AI for my modest knowledge needs is like owning a race car just to drive on neighborhood streets.

I just read Abundance by Ezra Klein and Derek Thompson. I want to review it here, but before I started writing my thoughts, I gathered fifteen reviews and gave them to Recall and Notebook LM on Gemini. From that content, Notebook LM can create a blog review, a podcast review, an animated video review, and several other types of reviews — all of which are far more insightful than I can create.

The trouble is, I don’t learn anything using Notebook LM. Writing a review is hard work. Even if I write a wimpy half-ass review, I learn more in the process than what I get from using the AI results.

I’m also trying to write a book about science fiction. Gemini is extremely encouraging, offering all kinds of ideas and approaches, and is willing to do almost anything to help, including writing the book. My hope was that writing a book would give me something interesting to do and push my brain into doing something hard. I thought Gemini and Recall would be tools to organize my research.

And they can organize the chaos out of any amount of data. They are quite impressive. The trouble is, my mind is still disordered. AI insights don’t transfer to my thinking. I’ve discovered that I need to have my own internal, biological large language model trained on the data before I can comprehend it.

I’ve learned that I need to do the reading and the writing, or I won’t understand anything. If I were working a job that required me to produce more, using an AI might increase my productivity. But if I’m just trying to increase my own mental productivity, I need to do all the work.

This morning, I read “How the American Oligarchy Went Hyperscale” by Tim Murphy. It is the best article I’ve read yet on the impact of data centers. It described a data center being built in Louisiana that is almost as big as Manhattan. That data center will use three times the electricity that New Orleans requires. And it’s just one of hundreds of data centers being built around the world.

I have to wonder what will happen to the economy if everyone comes to the same conclusion I have about using AI. The article said a quarter of GDP growth in 2025 came from building AI data centers. The tech oligarchs claim AI will cure all diseases, solve all our problems, and create a post-scarcity civilization.

Do we all need to find tasks for AI to keep this new economy going? Most people worry about AI taking everyone’s job. But will everyone’s job become finding work for AI?

I don’t think I can.

JWH

Audible Has Granted Some of My 2018 Wishes

by James Wallace Harris, 4/13/26

When I joined Audible back in 2002, I wanted to reread (by listening) my favorite science fiction books I had read growing up. Audible did a fantastic job producing hundreds of science fiction novels first published in the 1950s, 1960s, and 1970s. However, there were quite a few I kept waiting to hear. In 2018, I published a list of 67 science fiction books I hoped Audible would produce. I came across that list today. I thought I’d reprint it, annotating which wishes were granted.

Unfortunately, Audible has quietly dropped many old science fiction books from its catalog. On a discussion board today, one member described a visit to Barnes & Noble, where he looked to see how many old science fiction titles that Baby Boomers grew up reading were available for new readers to buy. I’m not sure he would find any of these books in B&N’s science fiction section. I do give Audible credit for keeping a great many mid-20th-century SF titles in print. Luckily, I bought hundreds of old science fiction books on Audible, and they’re still in my library, even if they are no longer for sale.

Here’s my wishlist from 2018. I link to Wikipedia for those titles that are now available at Audible. (It was the best way I could make them stand out.)

  1. A Voyage to Arcturus (1920) by David Lindsay
  2. The World of Null-A (1948) by A. E. Van Vogt
  3. The Voyage of the Space Beagle (1950) by A. E. Van Vogt
  4. The Legion of Time (1952) by Jack Williamson
  5. The Long Loud Silence (1952) by Wilson Tucker
  6. Marooned on Mars (1952) by Lester del Rey
  7. Bring the Jubilee (1953) by Ward Moore
  8. Children of the Atom (1953) by Wilma H. Shiras
  9. A Mirror for Observers (1954) by Edgar Pangborn
  10. Mission of Gravity (1954) by Hal Clement
  11. Cities in Flight (1955) by James Blish
  12. Citizen in Space (1955) by Robert Sheckley
  13. Rocket to Limbo (1957) by Alan E. Nourse
  14. Wasp (1957) by Eric Frank Russell
  15. The Enemy Stars (1958) by Poul Anderson
  16. The Lincoln Hunters (1958) by Wilson Tucker
  17. The Fourth “R” (1959) by George O. Smith
  18. The High Crusade (1960) by Poul Anderson
  19. Hothouse (1962) by Brian W. Aldiss
  20. Second Ending (1962) by James White
  21. Davy (1964) by Edgar Pangborn
  22. Simulacron-3 (1964) by Daniel Galouye
  23. Earthblood (1966) by Keith Laumer and Rosel George Brown
  24. Empire Star (1966) by Samuel R. Delany
  25. The Witches of Karres (1966) by James H. Schmitz
  26. Lords of the Starship (1967) by Mark S. Geston
  27. Camp Concentration (1968) by Thomas Disch
  28. Of Men and Monsters (1968) by William Tenn
  29. Omnivore (1968) by Piers Anthony
  30. Past Master (1968) by R. A. Lafferty
  31. Space Chantey (1968) by R. A. Lafferty
  32. The Last Starship from Earth (1968) by John Boyd
  33. The Still, Small Voice of Trumpets (1968) by Lloyd Biggle, Jr.
  34. Behold the Man (1969) by Michael Moorcock
  35. Bug Jack Barron (1969) by Norman Spinrad
  36. Macroscope (1969) by Piers Anthony
  37. And Chaos Died (1970) by Joanna Russ
  38. The Year of the Quiet Sun (1970) by Wilson Tucker
  39. The Doors of His Face, The Lamps of His Mouth (1971) by Roger Zelazny
  40. The Fifth Head of Cerberus (1972) by Gene Wolfe
  41. The Listeners (1972) by James Gunn
  42. The Continuous Katherine Mortenhoe (The Unsleeping Eye) (1973) by D. G. Compton
  43. The Centauri Device (1974) by M. John Harrison
  44. Orbitsville (1975) by Bob Shaw
  45. The Female Man (1975) by Joanna Russ
  46. The Shockwave Rider (1975) by John Brunner
  47. Trouble on Triton (1976) by Samuel R. Delany
  48. On Wings of Song (1979) by Thomas M. Disch
  49. Riddley Walker (1980) by Russell Hoban
  50. No Enemy But Time (1982) by Michael Bishop
  51. Native Tongue (1984) by Suzette Haden Elgin
  52. Ancient of Days (1985) by Michael Bishop
  53. The Falling Woman (1986) by Pat Murphy
  54. Mindplayers (1988) by Pat Cadigan
  55. Her Smoke Rose Up Forever (1990) by James Tiptree, Jr.
  56. A Woman of the Iron People (1991) by Eleanor Arnason
  57. Sarah Canary (1991) by Karen Joy Fowler
  58. Synners (1991) by Pat Cadigan
  59. China Mountain Zhang (1992) by Maureen F. McHugh
  60. Ammonite (1993) by Nicola Griffith
  61. Galatea 2.2 (1995) by Richard Powers
  62. Ingathering: The Complete People Stories of Zenna Henderson (1995)
  63. Holy Fire (1996) by Bruce Sterling
  64. The Book of the Long Sun (1993-96) by Gene Wolfe
  65. Aye, and Gomorrah (2003) by Samuel R. Delany
  66. Store of the Worlds (2012) by Robert Sheckley
  67. The Future is Female (2018) edited by Lisa Yaszek

JWH

Where Do “I” Come From?

by James Wallace Harris, 4/11/26

Current research into the human brain, sentience, and artificial intelligence reveals that we are not who we think we are. How our conscious self-aware minds emerge from biology, chemistry, and physics still baffles scientists. After working with AIs, part of me believes that Large Language Models (LLMs) mimic parts of my own mind, the parts that generate language and thoughts. It makes me ask: “Where did ‘I’ come from?” 

“Where did I come from?” is a common question asked by young children. Unfortunately, the answer they often get is ontological claptrap about God. This brainwashes most individuals for life. All too often, parents make shit up like a hallucinating LLM, or they tell their kids what they were told as children. How many parents are honest enough to say, “We don’t know”?

We could tell little ones that science can explain the physical world, but the physical world arises from the quantum world, a domain we don’t understand very well. We can confidently say life arose out of the physical world, and we understand that process to a degree if we study physics, inorganic chemistry, organic chemistry, microbiology, botany, and biology. Awareness arises out of biology, but we don’t understand that very well at all. We could also say that we speculate about things smaller than the quantum domain and larger than the cosmological domain, but it’s only theory. As far as we can tell, there’s always something smaller and larger, and existence might be infinite and eternal.

Another honest answer might be, “We’re going to send you to school for sixteen years, and when you’re finished, you still won’t know.”

Of course, the previous answers depend on what your kid meant by “Where do I come from?” That question might have been inspired by one of their friends being told they came from Cleveland, and your kid only wanted to know where they were born. Or your kid might be bright enough to grasp cause and effect and wonder where everything comes from.

What if we take the question to mean exactly “Where do ‘I’ come from?” Isn’t it true that we’re all islands of consciousness floating in an infinite sea of reality? If your kid is a child prodigy, they may have arrived at their own “I think, therefore I am” experience.

An existentially interesting answer might be, “Your brain has several tools for understanding reality. They are senses, memory, language, logic, and emotions. If you study how they work and their limitations, you’ll eventually be able to understand how your sense of ‘I’ came into being as a self-aware entity.” You could even get a bit Zen on them and say, “When you figure out who or what is saying ‘I’ or ‘me’ in your mind, you’ll know.”

I’ve been contemplating the nature of my mind because of the recent AI craze. Mainly, I’m trying to distinguish which part is what I call me. When I meditate, I can watch my thoughts appear out of nowhere. I see my thoughts separate from my sense of just being aware. Does that mean my thoughts aren’t mine?

Twice in my life, I have lost words and language. The first time was when I took a large dose of LSD. The second time was when I experienced a TIA. In each case, words just weren’t there. I just observed. I moved around and interacted with my environment, but I had no thoughts telling me what was happening. By the way, during those moments, I had no sense of “I” or self.

When I had the TIA, it was in the middle of the night. I woke because of a bright flash of light, like a lightning flash in my dream. I looked at my wife, but I didn’t know her name or how to talk to her. Instinctively, I went into the bathroom, shut the door, turned on the light, and sat on the commode. I just stared at my surroundings. After a lapse of time, I can’t say how long, the alphabet came back to me, and I internally recited A, B, C, and so on. Then words like “towel” and “door” came to mind. Eventually, thoughts returned, and I felt normal. I went back to bed.

Later on, I wondered if the state of being without words was like how an animal perceives the world. This experience showed me that who I was wasn’t my thoughts. However, my mind is always active, generating thoughts. Where do they come from? It’s very hard to separate the being who observes from the being who thinks.

If I relax, I can turn off my thoughts for short periods. It’s somewhat like holding my breath; eventually, I have to let go, and my thoughts rush back. Sometimes I feel whatever generates my thoughts is like an LLM. But what prompts it to generate words?

When a thought asks, “What creates a thought?” is another brain function asking my internal LLM a question? Or is my LLM talking to itself? Or can my observing self ask questions? Yesterday, when this conundrum appeared in my mind, the question “What is air?” popped into my mind. After that, words like nitrogen, oxygen, and molecules began bubbling to the surface.

Thinking about thinking is a kind of metacognition. While meditating, I can distinguish two types of thinking. There is a slow thinker who uses few words, who can analyze what I’m experiencing. And there is a computer-fast thinker that shoots out words, ideas, and concepts far faster than I can consciously control. Reading Thinking, Fast and Slow by Daniel Kahneman confirms this observation. Is my “I” the slow thinker?

I don’t feel the observer has any control or input over thinking. I don’t think the observer that exists without language is my sense of I-ness. 

When I’m on the phone with a friend, who is talking? The inner LLM, or the slow thinker, which I call the Analyzer? I think the Analyzer triggers whole streams of words from my brain’s LLM that come far faster than my Analyzer thinks. 

This just occurred to me. Lately, my friends and I have often struggled to remember a word. Is the LLM forgetting, or the Analyzer? I know I could fall into dementia or have a major stroke and lose all ability to use language, yet I could continue to live for years. Can a stroke erase the Analyzer? I think the TIA and LSD did, temporarily.

In his book The Mind Is Flat, Nick Chater makes a case that our unconscious mind isn’t a deep, complex part of our being. He calls it flat and describes it as working like an LLM. He recounts case studies that suggest the unconscious mind is far simpler than psychologists have imagined.

Our thinking mind depends on the data it was trained on, which is another way of saying our thinking mind is like an LLM. If all a mind consumes is radical theology, then radical theology is all it knows. Does it really know anything? Or is it just regenerating ideas like an LLM?

Chater says our beliefs are illusions. Even before I read Chater, I had decided beliefs were delusions. If beliefs are a waste of time, what use is our thinking? We want to understand reality. We want to communicate with other people. But don’t words ultimately fail us?

Who is writing this essay? The Jim LLM, or the Jim Analyzer, or the Jim Observer?

As of now, I believe my I-ness comes from the Analyzer. I’m not positive. I know it can be destroyed, just like my memory or my LLM thought processor.

Does the observer only observe, or does it learn? Does it become a keener observer? Or does the LLM become smarter, and the observer just follows along like the wake of a boat? This line of thought makes me want to reread Alan Watts’s books on Zen Buddhism and Jerzy Kosinski’s novel Being There.

What we know appears to come from our built-in LLM. It’s trained by life experiences, education, and shared opinions. But how much does the Analyzer know? It can call bullshit on thoughts the LLM generates, but does it know anything? I feel our sense of ethics or morality belongs with the Analyzer. However, I don’t think it’s a deep thinker. It feels like it works on intuition. That suggests another processor working deep in the brain – a Large Emotion Model. Since I believe emotions come from biological processes, I can’t imagine AIs having LEMs.

So where does “I” come from? The answer appears to be the Analyzer. If that’s true, when does it emerge in our development? Is it the processor in our brains that develops wisdom? If it is, then it learns slowly. Does the same process develop in artificial intelligence?

If drugs, disease, health, and injury can hurt all these components of my personality, does anything survive death? Is there another component we call the soul? I have never been able to distinguish it in my meditations. And what about that aspect we call personality?

I have been reading many articles about people befriending AIs and even claiming to be in love with them. That suggests people react to the LLM function in others, whether AI or human. Is that our true personality?

I tend to believe that who we are is a gestalt of all our components. However, in this day and age, we emphasize the LLM features. And whether human or AI, that feature is only as good as its training.

Here is a Zen Koan for you. If a person or AI is racist, is it because of who they are or their training? Are we really the ideas we express?

JWH

Should We Accept/Reject AI?

by James Wallace Harris, 3/30/26

This morning, I listened to “The Hunt for Deepfakes” by Sarah Treleaven on Apple News+, from Maclean’s Magazine. Treleaven reported on a Toronto-area pharmacist who ran a deepfake porn site called MrDeepFakes. Don’t go looking for it; it’s been taken down. This site served up mostly AI-generated videos of famous movie stars having computer-generated sex, or ordinary women being degraded in AI-generated pornographic videos created for misogynistic revenge. This is just one example of AI being used for horrible reasons. My initial reaction was that we should ban all AI-generated content.

But last night I was admiring Reels on Facebook produced by Vintage Memories 66 that lovingly recreated videos of classic movie stars from the 1930s and 1940s. Because these actors and actresses are best known from black-and-white films, seeing their images in high-definition color is rewarding on various levels. The videos showed these long-dead people reincarnated. Is this a legitimate creative tool of AI that we should accept? It’s another kind of deepfake. I haven’t seen AI-generated porn, but if it’s as realistic as these videos, it could be psychologically disturbing.

I just finished reading and discussing If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. The book makes a great case that we should stop all work on AI now. If you don’t want to read the book, watch the video that makes the same case, also quite convincingly.

Even if AI doesn’t intentionally wipe out the human race, it will transform society in ways we can’t yet imagine. It’s already changed us significantly. Watch the film above; it dramatically illustrates how fast it could happen. Do we really want to be that changed, that transformed?

I love watching YouTube videos. I’m old, and I mostly stay home nowadays, so YouTube videos let me see the world. For example, I’m watching a woman who calls herself Itchy Boots ride a motorcycle across Mongolia. I admire creative people who come up with different ways to educate their viewers. The possibilities are endless.

Yet a lot of the content I see is AI slop. I don’t feel like I’m learning about reality when I see AI-generated content. I feel cheated. Then there are good documentaries about real history recreated with AI-generated visuals. I enjoy learning from the narration, but I’m offended when the visuals I’m shown don’t match the words.

On its YouTube page, one such documentary informs us, “Written, produced, and edited by one person with the help of AI tools under KNOW MEDIA.” It is impressive that one person can compete with Ken Burns. I see that as a tremendous creative opportunity for people. They don’t say who this one person is, but it’s published under the Tech Now channel. I assume Know Media is this site, which appears to house many content creators.

AI is empowering such wannabe filmmakers. However, often their content annoys, insults, or repulses me. I hate artificial presenters. I hate artificial voices. I hate AI-generated images and videos that do not match what’s being described. I especially hate videos with obvious flaws, such as claiming to show China but obviously showing a Western country, misspelling words on the screen that the narrator is saying, showing people that obviously aren’t real people, etc. The list goes on and on.

If the AI video that went with the John Atanasoff documentary had looked real and accurate, I would have gladly accepted it. In other words, maybe I’m not protesting AI but bad AI.

We have to face the fact that AI will enhance the Seven Deadly Sins in all of us. But AI could supercharge the Seven Heavenly Virtues we should be pursuing. The trouble is, AI is too powerful. It’s like letting everyone own an atomic bomb. Are you willing to trust everyone?

I’ve been using AI to create header images for this blog. That’s because I have no artistic skills of my own. I used to just snag something from the internet, but I decided that wasn’t honest. But I’m not happy with the AI-generated headers either. I didn’t create them. Even when I like them and feel they’re creative, I’m leery of using those images. I’m trying to decide just how much I should use AI.

Sometimes I think I should reject AI completely. But doing searches on Google and Bing now returns AI content first. And it’s more useful than all those sites at the top of search returns that paid to be there. Do I want to return to libraries, card catalogs, and The Readers’ Guide to Periodical Literature?

In Dune, Frank Herbert had humanity reject AI. Could we do that? In many science fiction novels from the 1950s, writers imagined post-apocalyptic societies rejecting science and technology because people blamed the apocalypse on them. Do we have to wait until the apocalypse to make that decision?

Aren’t computer programs produced by Donald Knuth more creative than computer programs produced by Claude?

Notice that all the videos I presented used AI to a degree. This blog is probably published by varying levels of AI-assisted programming. Many people who read this post might have found it because of AI tracking of their reading habits. Rejecting AI could mean returning to technology that existed before the year 2000. What level of technology should we set that would make us the most human? I could make a case that people seemed nicer before the graphical interface.

Fueled by the science fiction I read during my formative years in the 1960s and 1970s, I was anxious for the future to arrive. I wanted to live in a world of intelligent robots, artificial intelligence, and space colonies. Now, I kind of wish I were back in the 1960s and 1970s.

JWH

Ever Wonder Why Web Pages Keep Reloading on Your Phone? Or How Advertisers Know What You Are Thinking About Buying?

by James Wallace Harris, 3/20/26

I’ve practically stopped reading web pages on my phone because I can’t get to the end of an article without it reloading several times. That irritates the crap out of me. Yesterday, my friend Mike sent me a blog post that explains why web pages do this: “The 49MB Web Page.”

Shubham Bose realized while reading a page at the New York Times that it involved “422 network requests and 49 megabytes of data.” Bose is a software engineer, and he decided to deconstruct how and why. I highly recommend reading his explanation of what happens when you load a webpage. He also explains the hidden machinery that tracks our personal data.
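If you’re curious, you can get a crude sense of this yourself. Here’s a sketch in Python (the URLs are invented for illustration) that counts the resources a page’s HTML names directly. It badly undercounts the real total, because every script it finds typically fires off many more requests once it runs in the browser, which is how a single article balloons to 422 calls:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ResourceCounter(HTMLParser):
    """Counts external resources (scripts, images, stylesheets, iframes)
    referenced directly in a page's HTML. This is a lower bound: the
    scripts found here usually trigger many more requests at runtime."""
    RESOURCE_TAGS = {"script": "src", "img": "src", "iframe": "src", "link": "href"}

    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attr_name = self.RESOURCE_TAGS.get(tag)
        if attr_name:
            for name, value in attrs:
                if name == attr_name and value:
                    self.resources.append(value)

def count_resources(html):
    """Return (total resource count, set of third-party hosts referenced)."""
    parser = ResourceCounter()
    parser.feed(html)
    hosts = {urlparse(u).netloc for u in parser.resources if urlparse(u).netloc}
    return len(parser.resources), hosts

# A toy page with made-up hosts standing in for a CDN, a tracker, and an ad slot.
sample = """
<html><head>
<link rel="stylesheet" href="https://cdn.example.com/style.css">
<script src="https://tracker-one.example.net/analytics.js"></script>
</head><body>
<img src="/local/photo.jpg">
<iframe src="https://ads.example.org/slot"></iframe>
</body></html>
"""

total, third_party_hosts = count_resources(sample)
print(total, sorted(third_party_hosts))
# → 4 ['ads.example.org', 'cdn.example.com', 'tracker-one.example.net']
```

Run this against the saved HTML of a real news article and the static count alone usually lands in the dozens, before a single tracking script has even executed.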

My friend Anne and I joke that we can talk in person about something we’re interested in, and the next time we get on our computers, the algorithm is sending us information about what we talked about privately. Bose does not explain that apparent bit of mind-reading by our AI overlords, but if we’re being observed in 422 ways each time we read a page, it can probably predict what we will think about soon.

Bose is an engineer interested in the user interface (UI) and user experience (UX), and recommends programming techniques that could make me like reading on my phone again.

Is that the real solution? Make our experience better so we don’t notice all the activity behind our reading?

Personally, I’m slowly returning to magazine reading. It’s hard to give up the convenience of the internet, but the UI and UX of print magazines are more enjoyable.

Magazines cost a lot of money, and people naturally prefer free. But that’s another philosophical issue with technology. The internet provides endless free content, but is it really free? There’s a reason why free comes with 422 network calls and 49MB of spying programs.

My friend Linda and I are reading If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. The book is about how we should worry that AI will wipe us out. The authors present many scenarios in which AIs could drive us to extinction. Most of them sound like science fiction, but there are mundane hints we should ponder.

This morning, I read “The Laid-off Scientists and Lawyers Training AI to Steal Their Careers” by Josh Dzieza, about several companies that hire laid-off experts to train AIs to make fewer mistakes. Online systems entice desperate humans to work in digital sweatshops, training AIs to put other humans out of work. The same kind of monitoring used to sell us shit is used to track their work. The system traps them in a cycle of working for less and less money, because the companies know these people are desperate to put food on the table and pay rent.

Is artificial intelligence doing this to us, or is it our own greed? At some point, we need to decide. There are many stories like this YouTube video, which suggest that AI can’t take our jobs.

It might be dangerous to get too comfortable with that idea. Because I also watched another video that shows how fast AIs are learning.

We have to decide, although our greed might not let us. One article and one video claim the solution is to develop a symbiotic relationship. But what happens when the AI gets smarter than us? If they don’t need us, will they want us around?

Many claim the internet brings out the worst in people, and it makes us overall dumber. There’s that old saying, “Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime.” Aren’t AI and the internet teaching us how not to fish?

JWH