Do I Have Any Real Uses For AI?

by James Wallace Harris, 4/16/26

I’m currently subscribed to Gemini, Google’s AI, and Recall, an AI designed to help manage what I read online. I have subscribed to ChatGPT and Midjourney in the past. Both Gemini and Recall have free accounts, if you want to try them. But I’m spending $28 a month to get the full features of both. However, I’m not sure I need all this AI power.

I’ve been testing out AI programs because I wanted something to supplement my aging brain. Both Gemini and Recall help me digest and remember what I read. At least that was the hope. Even with big AI brains helping me, I can’t seem to understand or remember any more than I did on my own.

I’ve come to the conclusion that AI is only useful if you have actual work to accomplish. I’m retired. I’m just trying to keep up with current events for personal enrichment. I thought using an AI would help me learn more, but that hasn’t worked out.

Having access to AI is like owning a Ferrari. I can feel all that power at my fingertips, and it’s exciting. The trouble is, using all that AI power for my modest knowledge needs is like owning a race car just to drive on neighborhood streets.

I just read Abundance by Ezra Klein and Derek Thompson. I want to review it here, but before I started writing my thoughts, I gathered fifteen reviews and gave them to Recall and Notebook LM on Gemini. From that content, Notebook LM can create a blog review, a podcast review, an animated video review, and several other types of reviews, all of which are far more insightful than anything I could create.

The trouble is, I don’t learn anything using Notebook LM. Writing a review is hard work. Even if I write a wimpy half-assed review, I learn more in the process than I do from the AI’s results.

I’m also trying to write a book about science fiction. Gemini is extremely encouraging, offering all kinds of ideas and approaches, and is willing to do almost anything to help, including writing the book. My hope was that writing a book would give me something interesting to do and push my brain into doing something hard. I thought Gemini and Recall would be tools to organize my research.

And they can organize the chaos out of any amount of data. They are quite impressive. The trouble is, my mind is still disordered. AI insights don’t transfer to my thinking. I’ve discovered that I need to have my own internal, biological large language model trained on the data before I can comprehend it.

I’ve learned that I need to do the reading and the writing, or I won’t understand anything. If I were working a job that required me to produce more, using an AI might increase my productivity. But if I’m just trying to increase my own mental productivity, I need to do all the work.

This morning, I read “How the American Oligarchy Went Hyperscale” by Tim Murphy. It is the best article I’ve read yet on the impact of data centers. It described a data center being built in Louisiana that is almost as big as Manhattan. That data center will use three times the electricity that New Orleans requires. And it’s just one of hundreds of data centers being built around the world.

I have to wonder what will happen to the economy if everyone comes to the same conclusions I have about using AI. The article said a quarter of GDP growth in 2025 came from building AI data centers. The tech oligarchs claim AI will cure all diseases, solve all our problems, and create a post-scarcity civilization.

Do we all need to find tasks for AI to keep this new economy going? Most people worry about AI taking everyone’s job. But will everyone’s job become finding work for AI?

I don’t think I can.

JWH

Where Do “I” Come From?

by James Wallace Harris, 4/11/26

Current research into the human brain, sentience, and artificial intelligence reveals that we are not who we think we are. How our conscious self-aware minds emerge from biology, chemistry, and physics still baffles scientists. After working with AIs, part of me believes that Large Language Models (LLMs) mimic parts of my own mind, the parts that generate language and thoughts. It makes me ask: “Where did ‘I’ come from?” 

“Where did I come from?” is a common question asked by young children. Unfortunately, the answer they often get is ontological claptrap about God. This brainwashes most individuals for life. All too often, parents make shit up like a hallucinating LLM, or they tell their kids what they were told as children. How many parents are honest enough to say, “We don’t know”?

We could tell little ones that science can explain the physical world, but the physical world arises from the quantum world, and we don’t understand that domain very well. We can confidently say life rose out of the physical world, and we understand that process to a degree if we study physics, inorganic chemistry, organic chemistry, microbiology, botany, and biology. Awareness rises out of biology, but we don’t understand that very well at all. We could also say that we speculate about things smaller than the quantum domain and larger than the cosmological domain, but it’s only theory. As far as we can tell, there’s always something smaller and larger, and existence might be infinite and eternal.

Another honest answer might be, “We’re going to send you to school for sixteen years, and when you’re finished, you still won’t know.”

Of course, the previous answers depend on what your kid meant by “Where do I come from?” That question might have been inspired by one of their friends being told they came from Cleveland, and your kid only wanted to know where they were born. Or your kid might be bright enough to grasp cause and effect and wonder where everything comes from.

What if we take the question to mean exactly “Where do ‘I’ come from?” Isn’t it true that we’re all islands of consciousness floating in an infinite sea of reality? If your kid is a child prodigy, they may have already arrived at their own “I think, therefore I am” experience.

An existentially interesting answer might be, “Your brain has several tools for understanding reality. They are senses, memory, language, logic, and emotions. If you study how they work and their limitations, you’ll eventually be able to understand how your sense of ‘I’ came into being as a self-aware entity.” You could even get a bit Zen on them and say, “When you figure out who or what is saying ‘I’ or ‘me’ in your mind, you’ll know.”

I’ve been contemplating the nature of my mind because of the recent AI craze. Mainly, I’m trying to distinguish which part is what I call me. When I meditate, I can watch my thoughts appear out of nowhere. I see my thoughts separate from my sense of just being aware. Does that mean my thoughts aren’t mine?

Twice in my life, I have lost words and language. The first time was when I took a large dose of LSD. The second time was when I experienced a TIA (a transient ischemic attack, or mini-stroke). In each case, words just weren’t there. I just observed. I moved around and interacted with my environment, but I had no thoughts telling me what was happening. By the way, during those moments, I had no sense of “I” or self.

When I had the TIA, it was in the middle of the night. I woke because of a bright flash of light, like a lightning flash in my dream. I looked at my wife, but I didn’t know her name or how to talk to her. Instinctively, I went into the bathroom, shut the door, turned on the light, and sat on the commode. I just stared at my surroundings. After a lapse of time, I can’t say how long, the alphabet came to mind, and I internally recited to myself, A, B, C, and so on. Then words like “towel” and “door” came to mind. Eventually, thoughts returned, and I felt normal. I went back to bed.

Later on, I wondered if the state of being without words was like how an animal perceives the world. This experience showed me that who I was wasn’t my thoughts. However, my mind is always active, generating thoughts. Where do they come from? It’s very hard to separate the being who observes from the being who thinks.

If I relax, I can turn off my thoughts for short periods. It’s somewhat like holding my breath; eventually, I have to let go, and my thoughts rush back. Sometimes I feel whatever generates my thoughts is like an LLM. But what prompts it to generate words?

When a thought asks, “What creates a thought?”, is another brain function asking my internal LLM a question? Or is it my LLM talking to itself? Or can my observing self ask questions? Yesterday, when this conundrum appeared in my mind, the question “What is air?” popped up. After that, words like nitrogen, oxygen, and molecules began bubbling to the surface.

Thinking about thinking is a kind of metacognition. While meditating, I can distinguish two types of thinking. There is a slow thinker who uses few words, who can analyze what I’m experiencing. And there is a computer-fast thinker that shoots out words, ideas, and concepts far faster than I can consciously control. Reading Thinking, Fast and Slow by Daniel Kahneman confirms this observation. Is my “I” the slow thinker?

I don’t feel the observer has any control or input over thinking. I don’t think the observer that exists without language is my sense of I-ness. 

When I’m on the phone with a friend, who is talking? The inner LLM, or the slow thinker, which I call the Analyzer? I think the Analyzer triggers whole streams of words from my brain’s LLM, words that come far faster than the Analyzer can think.

This just occurred to me. Lately, my friends and I have often struggled to remember a word. Is the LLM forgetting, or the Analyzer? I know I could fall into dementia or have a major stroke and lose all ability to use language, yet I could continue to live for years. Can a stroke erase the Analyzer? I think the TIA and LSD did temporarily.

In his book The Mind Is Flat, Nick Chater makes a case that our unconscious mind isn’t a deep, complex part of our being. He calls it flat and describes it as working like an LLM. He recounts case studies that suggest the unconscious mind is far simpler than psychologists have imagined.

Our thinking mind depends on the data it was trained on, which is another way of saying our thinking mind is like an LLM. If all a mind consumes is radical theology, then radical theology is all it knows. Does it really know anything? Or is it just regenerating ideas like an LLM?

Chater says our beliefs are illusions. Even before I read Chater, I had decided beliefs were delusions. If beliefs are a waste of time, what use is our thinking? We want to understand reality. We want to communicate with other people. But don’t words ultimately fail us?

Who is writing this essay? The Jim LLM, or the Jim Analyzer, or the Jim Observer?

As of now, I believe my I-ness comes from the Analyzer. I’m not positive. I know it can be destroyed, just like my memory or my LLM thought processor.

Does the observer only observe, or does it learn? Does it become a keener observer? Or does the LLM become smarter, and the observer just follows along like the wake of a boat? This line of thought makes me want to reread Alan Watts’s books on Zen Buddhism and Jerzy Kosinski’s novel Being There.

 What we know appears to come from our built-in LLM. It’s trained by life experiences, education, and sharing opinions. But how much does the Analyzer know? It can call bullshit on thoughts the LLM generates, but does it know anything? I feel our sense of ethics or morality belongs with the Analyzer. However, I don’t think it’s a deep thinker. It feels like it works on intuition. That suggests another processor working deep in the brain – a Large Emotion Model. Since I believe emotions come from biological processes, I can’t imagine AIs having LEMs.   

The answer to “Where Do ‘I’ Come From?” appears to be the Analyzer. If that’s true, when does it emerge in our development? Is it the processor in our brains that develops wisdom? If it is, then it learns slowly. Does this process develop in artificial intelligence?

If drugs, disease, poor health, and injury can damage all these components of my personality, does anything survive death? Is there another component we call the soul? I have never been able to distinguish it in my meditations. And what about that aspect we call personality?

I have been reading many articles about people befriending AIs and even claiming to be in love with them. That suggests people react to the LLM function in others, whether AI or human. Is that our true personality?

I tend to believe that who we are is a gestalt of all our components. However, in this day and age, we emphasize the LLM features. And whether human or AI, that feature is only as good as its training.

Here is a Zen Koan for you. If a person or AI is racist, is it because of who they are or their training? Are we really the ideas we express?

JWH

Should We Accept/Reject AI?

by James Wallace Harris, 3/30/26

This morning, I listened to “The Hunt for Deepfakes” by Sarah Treleaven on Apple News+, from Maclean’s Magazine. Treleaven reported on a Toronto-area pharmacist who ran a deepfake porn site called MrDeepFakes. Don’t go looking for it; it’s been taken down. The site served up mostly AI-generated videos of famous movie stars having computer-generated sex, or ordinary women being degraded in AI-generated pornographic videos created for misogynistic revenge. This is just one example of AI being used for horrible purposes. My initial reaction was that we should ban all AI-generated content.

But last night I was admiring Reels on Facebook produced by Vintage Memories 66 that lovingly recreated videos of classic movie stars from the 1930s and 1940s. Because these actors and actresses are best known from black-and-white films, seeing their images in high-definition color is rewarding on various levels. The videos showed these long-dead people reincarnated. Is this a legitimate creative tool of AI that we should accept? It’s another kind of deep fake. I haven’t seen AI-generated porn, but if it’s as realistic as these videos, it could be psychologically disturbing.

I just finished reading and discussing If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. The book makes a great case that we should stop all work on AI now. If you don’t want to read the book, watch the video that makes the same case, also quite convincingly.

Even if AI doesn’t intentionally wipe out the human race, it will transform society in ways we can’t yet imagine. It’s already changed us significantly. Watch the film above; it dramatically illustrates how fast it could happen. Do we really want to be that changed, that transformed?

I love watching YouTube videos. I’m old, and I mostly stay home nowadays, so YouTube videos let me see the world. For example, I’m watching a woman who calls herself Itchy Boots ride a motorcycle across Mongolia. I admire creative people who come up with different ways to educate their viewers. The possibilities are endless.

Yet, a lot of content I see is AI slop. I don’t feel like I’m learning about reality when I see AI-generated content. I feel cheated. Then there are good documentaries about real history recreated with AI-generated visuals. I enjoy learning from the narration, but I’m offended when the visuals I’m shown don’t match the words.

On the documentary’s YouTube page, we’re informed, “Written, produced, and edited by one person with the help of AI tools under KNOW MEDIA.” It is impressive that one person can compete with Ken Burns. I see that as a tremendous creative opportunity for people. They don’t say who this one person is, but it’s published under the Tech Now channel. I assume Know Media is this site, which appears to house many content creators.

AI is empowering such wannabe filmmakers. However, often their content annoys, insults, or repulses me. I hate artificial presenters. I hate artificial voices. I hate AI-generated images and videos that do not match what’s being described. I especially hate videos with obvious flaws, such as claiming to show China but obviously showing a Western country, misspelling words on the screen that the narrator is saying, showing people that obviously aren’t real people, etc. The list goes on and on.

If the AI video that went with the John Atanasoff documentary had looked real and accurate, I would have gladly accepted it. In other words, maybe I’m not protesting AI but bad AI.

We have to face the fact that AI will enhance the Seven Deadly Sins in all of us. But AI could supercharge the Seven Heavenly Virtues we should be pursuing. The trouble is, AI is too powerful. It’s like letting everyone own an atomic bomb. Are you willing to trust everyone?

I’ve been using AI to create header images for this blog. That’s because I have no artistic skills of my own. I used to just snag something from the internet, but I decided that wasn’t honest. But I’m not happy with the AI-generated headers either. I didn’t create them. Even when I like them and feel they’re creative, I’m leery of using those images. I’m trying to decide just how much I should use AI.

Sometimes I think I should reject AI completely. But doing searches on Google and Bing now returns AI content first. And it’s more useful than all those sites at the top of search returns that paid to be there. Do I want to return to libraries, card catalogs, and The Readers’ Guide to Periodical Literature?

In Dune, Frank Herbert had humanity reject AI. Could we do that? In many science fiction novels from the 1950s, writers imagined post-apocalyptic societies rejecting science and technology because people blamed the apocalypse on them. Do we have to wait until the apocalypse to make that decision?

Aren’t computer programs produced by Donald Knuth more creative than computer programs produced by Claude?

Notice that all the videos I presented used AI to a degree. This blog is probably published with the help of varying levels of AI-assisted programming. Many people who read this post might have found it because AI tracked their reading habits. Rejecting AI could mean returning to technology that existed before the year 2000. What level of technology should we settle on that would make us the most human? I could make a case that people seemed nicer before the graphical interface.

Having fueled my formative years in the 1960s and 1970s with science fiction, I was anxious for the future to arrive. I wanted to live in a world of intelligent robots, artificial intelligence, and space colonies. Now, I kind of wish I were back in the 1960s and 1970s.

JWH

The Coming of Household Robots

by James Wallace Harris, 3/26/26

By 2030, I expect to see robots for sale for $15,000 that will be as flexible as a Tai Chi master, stronger than an Olympic athlete, handier than a master plumber, and smarter than a college professor. These robots won’t have super-intelligence or self-awareness, because selling such beings would be unethical. Nor will they look human. There will be another industry working to create human-looking androids that some people will buy for sex, but I don’t want to deal with that in this essay. I will predict that we’ll never create an android that can pass for human.

Will this technology disrupt society, blow up the economy, and derange human psychology? Can we integrate robots into our lives without destroying the lives we already have? Science fiction has imagined the possibilities since the 19th century, and fantasy for even longer. Let’s examine some of the situations in which we might use a robot and extrapolate from that.

Hundreds of millions of people already use AI, so they can easily imagine conversations with an intelligent robot. And there are plenty of videos demonstrating the evolving physical abilities of robots. Recalling how far AI and robots advanced from 2023 to 2026, it’s not difficult to imagine what this technology will achieve between 2026 and 2030.

The first thing to do is visualize the robot you would buy. How tall do you want it to stand? Would a robot taller than you freak you out? Should the head have two eyes and a mouth, or would you be comfortable with a head that has six eyes and no visible mouth? Should the body be humanoid? If so, should it wear clothes? If not, are there forms better suited for maximum utility? Do you want your robot to sit on the couch with you, or would you prefer it to stand?

Your wants will decide these choices. If you picture your robot kicking back in a La-Z-Boy and watching television with you, you’ll probably want it to be humanoid. If you buy a robot just for housework, yardwork, and home healthcare, you might purchase one shaped to handle the most chores. Right now, we buy robots to do individual tasks like vacuuming floors or mowing the lawn. But ultimately, wouldn’t it be more practical to have one robot that does everything rather than dozens of robots that each do one thing?

Because so many millions use AI for conversation, I will assume faces will be important. Roboticists have experimented with giving robots facial expressions. And I’ve noticed that some robots in movies have body language, like C-3PO. Robots might not need to look very human to feel human.

I’ve been asking my friends about robots and their uses. One woman said she’d like a robot to share all her favorite activities and hobbies. She also said she sometimes wanted another husband, but ultimately decided they were too much trouble. Does that suggest we’re finding other people too much trouble, and we’d prefer machines?

Since most of my friends are around my age, their answers were much like mine. My wife can’t do much physically at all, and I’m getting less and less capable. I’d want a robot with the strength and stamina I had in my twenties to help me work around the house and in the yard. And as Susan and I got older, I’d want robots to be our live-in caretakers.

However, what if everyone did this? How many people would become unemployed? What happens to maids, gardeners, handymen, painters, car detailers, healthcare workers, and the other people we pay to come to our house? What if general-purpose household robots were also skilled at electrical work, plumbing, and maintaining HVAC systems?

If businesses replace white-collar workers with robots, and manufacturers replace factory workers with robots, and store owners replace retail workers with robots, what happens to the economy?

Some people worry that AI will become super-intelligent and want to wipe out humanity. That’s rather science fictional. But capitalists replacing labor with robots is all too real. Things are so complicated. How many people really want to wipe old people’s butts? Wouldn’t the wipers and the people being wiped prefer robots to have that job?

What jobs do humans want to keep, and which ones would they want to give to robots? And if you had no income, which jobs would you be willing to take?

I see owning a robot in old age as a prosthetic for my weakening body. It’s not to put someone else out of work, but to let me keep working on my own. But what if I were younger, and considering a robot to do housework? Right now, housework is good exercise for my mind and body. I keep telling my wife she should do housework to keep her from becoming an invalid. I tell her she shouldn’t let me hog all the healthy benefits of housework. She doesn’t buy that. 

Susan wants to hire a maid or cleaning service. Many of our friends have. I reply that as long as we’re strong enough to do housework ourselves, we should. But what if most people could afford robots to serve them? Would many people love living the upstairs lifestyle we see in Downton Abbey? Won’t that make us lazier? Will we become like astronauts, vigorously working out in the gym for two hours a day to make up for twenty-two hours of weightlessness?

Many people are questioning what social media, smartphones, and the Internet have done to society. Will AI and robots undermine human nature even more? It’s so hard to answer these questions. If millions of lonely people find comfort with AI and robots, is that bad? The obvious solution would be for half the lonely to meet up with the other half. 

Since those people aren’t doing that, does that suggest that something else is wrong? Could it be that some people prefer machines to other people? If so, the market for robots will be tremendous. So, even if we think AI and robots are bad for society, businesses will sell them, and we’ll buy them.

I should be out working in my yard. It needs a lot of work. But I rate the creative activity of writing this blog higher, so I’m skipping yardwork this morning. 

I can easily visualize a robot working outside, landscaping my yard, because of all the Ray Bradbury and Clifford Simak science fiction stories I’ve read. I don’t really like working in the yard, but I do wish it were nicer. I’d like to redesign my yard to maximize its benefits for insects, birds, and other wildlife. I wish my backyard were all wildflowers.

The idea of looking out the window that’s just behind my computer monitor and seeing a robot crafting a nature preserve for living creatures would be immensely pleasing. I can even imagine going for a walk in my neighborhood and seeing both people and robots working together and separately as I pass each yard. I even picture humans and robots walking dogs, stopping together to chat and let their pups sniff each other. In this daydream, I also see robots pushing old people in wheelchairs and babies in strollers. I also imagine coming home and finding Susan directing a robot to repaint the living room.

This is an idyllic fantasy. But is it one we really want?

JWH   

Are You Preparing for the 2030s?

by James Wallace Harris, 3/23/26

The 2030s will cover ages 78 to 88 for me. If you’re a teenager, the 2030s will cover your college and job-seeking years. If you’re in your twenties, the 2030s could be the years you get married and start a family. If I hadn’t started seriously saving in my fifties, I couldn’t have retired in the 2010s.

It is impossible to know the future. Even speculations based on extrapolation from present trends are nearly always wrong. However, it doesn’t hurt to be prepared, like good Boy Scouts. In fact, I’ve heard luck defined as merely proper preparation.

Those of us who have orbited the Sun dozens of times have lived through constant change. We’ve learned that society never stays the same. People and their behaviors never seem to change, but relentless change churns our lives. Just observe the fourteen decades of change in the films presented by TCM.

The iPhone was announced on January 9, 2007. I don’t even think Steve Jobs had any idea what it would do to the world in the 2010s and 2020s. Nor did we imagine what Amazon (1994), Facebook (2004), Twitter (2006), Instagram (2010), TikTok (2016), and ChatGPT (2022) would do to society. And how many folks expected the end of Globalism and the return to nationalism in the 2010s?

I believe two technologies will transform society in the 2030s: AI and intelligent general-purpose robots. Ever since the Industrial Age began, Capitalists have resented the cost of Labor. But the reason we’ve always needed capitalism is that it gives most citizens employment. Capital and Labor were always tied together, but AI and robots could disconnect the relationship. What will that mean in the 2030s?

I know AI and robots will transform society in ways that will be judged evil in the coming decade. However, I will consider buying robotic caretakers for my wife and me in the 2030s. For a childless couple in their 80s, robots might be the cheapest and most advantageous solution. We will all find reasons (excuses?) to go along.

Humanity should decide to halt all development in AI and robotics right now. But we won’t. Greed is a dependable predictor of human behavior, and the millionaires and billionaires who see trillions in AI aren’t going to allow their greed to be curtailed.

It’s the same way the owners of trillions in fossil fuels haven’t let the threat of climate change interfere with their greed. But we can’t put all the blame on the rich, because we’ve all continued to consume fossil fuels. The weakness of human nature is also a reliable predictor of the future.

Societies only change during violent upheavals, such as the American, French, and Russian revolutions, the Civil War, WWI, and WWII. Moral upheavals, such as feminism, civil rights, and LGBTQIA+ rights, have been less effective at creating long-term, permanent change. Even the great upheaval created by the Enlightenment might not be permanent.

When thinking about what we might experience in the 2030s, we need to consider black swans. Donald Trump was a political black swan in 2016. There’s always a chance we’ll elect an Abraham Lincoln black swan in 2028 who will pull the nation back together. But black swans can’t be predicted.

Predictions for artificial intelligence range from the extinction of humanity to an age of unlimited abundance. Because technology has been a reliable agent of change for many decades, it’s probably safe to assume that trend will continue.

For example, if battery technology improves as much as companies working on battery science promise, expect a huge transformation. If just the Donut Labs battery turns out to be real, no one will want fossil fuels because renewable energy will be so dramatically cheaper.

If Elon Musk keeps his promise to manufacture millions of general-purpose robots with AI-powered minds, what will that do to human employment? Of course, business owners will buy them, but what about you? Could you resist owning your own Jeeves? How many fans of Downton Abbey and The Gilded Age will try to create a cybernetic service class? If you had asked people in the 1950s whether they’d ever want a computer in their home, 99.9999% of them would have said no.

We can’t imagine black swans; that’s part of their definition. But most of the spectacular changes in society have come from technology that began its existence decades earlier. To imagine the 2030s, look at everything discovered in the last two decades.

As a science fiction fan, this will sound odd, but I think science fiction is a poor weathervane for the future. I believe the best bellwethers for the next decade are always revealed in the current decade.

JWH