Where Do “I” Come From?

by James Wallace Harris, 4/11/26

Current research into the human brain, sentience, and artificial intelligence reveals that we are not who we think we are. How our conscious self-aware minds emerge from biology, chemistry, and physics still baffles scientists. After working with AIs, part of me believes that Large Language Models (LLMs) mimic parts of my own mind, the parts that generate language and thoughts. It makes me ask: “Where did ‘I’ come from?” 

“Where did I come from?” is a common question asked by young children. Unfortunately, the answer they often get is ontological claptrap about God. This brainwashes most individuals for life. All too often, parents make shit up like a hallucinating LLM, or they tell their kids what they were told as children. How many parents are honest enough to say, “We don’t know”?

We could tell little ones that science can explain the physical world, but the physical world arises from the quantum world, and we don’t understand that domain very well. We can confidently say life arose out of the physical world, and we understand that process to a degree if you study physics, inorganic chemistry, organic chemistry, microbiology, botany, and biology. Awareness arises out of biology, but we don’t understand that very well at all. We could also say that we speculate about things smaller than the quantum domain and larger than the cosmological domain, but that’s only theory. As far as we can tell, there’s always something smaller and larger, and existence might be infinite and eternal.

Another honest answer might be, “We’re going to send you to school for sixteen years, and when you’re finished, you still won’t know.”

Of course, the previous answers depend on what your kid meant by “Where do I come from?” That question might have been inspired by one of their friends being told they came from Cleveland, and your kid only wanted to know where they were born. Or your kid might be bright enough to grasp cause and effect and wonder where everything comes from.

What if we take the question to mean exactly “Where do ‘I’ come from?” Isn’t it true that we’re all islands of consciousness floating in an infinite sea of reality? If your kid is a child prodigy, they may have arrived at their own “I think, therefore I am” experience.

An existentially interesting answer might be, “Your brain has several tools for understanding reality. They are senses, memory, language, logic, and emotions. If you study how they work and their limitations, you’ll eventually be able to understand how your sense of ‘I’ came into being as a self-aware entity.” You could even get a bit Zen on them and say, “When you figure out who or what is saying ‘I’ or ‘me’ in your mind, you’ll know.”

I’ve been contemplating the nature of my mind because of the recent AI craze. Mainly, I’m trying to distinguish which part is what I call me. When I meditate, I can watch my thoughts appear out of nowhere. I see my thoughts separate from my sense of just being aware. Does that mean my thoughts aren’t mine?

Twice in my life, I have lost words and language. The first time was when I took a large dose of LSD. The second time was when I experienced a TIA (a transient ischemic attack, or mini-stroke). In each case, words just weren’t there. I just observed. I moved around and interacted with my environment, but I had no thoughts telling me what was happening. By the way, during those moments, I had no sense of “I” or self.

When I had the TIA, it was in the middle of the night. I woke because of a bright flash of light, like a lightning flash in my dream. I looked at my wife, but I didn’t know her name or how to talk to her. Instinctively, I went into the bathroom, shut the door, turned on the light, and sat on the commode. I just stared at my surroundings. After a lapse of time – I can’t say how long – the alphabet came to mind, and I internally recited A, B, C, and so on. Then words like “towel” and “door” came to mind. Eventually, thoughts returned, and I felt normal. I went back to bed.

Later on, I wondered if the state of being without words was like how an animal perceives the world. This experience showed me that who I was wasn’t my thoughts. However, my mind is always active, generating thoughts. Where do they come from? It’s very hard to separate the being who observes and the being who thinks.

If I relax, I can turn off my thoughts for short periods. It’s somewhat like holding my breath; eventually, I have to let go, and my thoughts rush back. Sometimes I feel whatever generates my thoughts is like an LLM. But what prompts it to generate words?

When a thought asks, “What creates a thought?”, is another brain function asking my internal LLM a question? Or is it my LLM talking to itself? Or can my observing self ask questions? Yesterday, when this conundrum appeared in my mind, the question “What is air?” popped up. After that, words like nitrogen, oxygen, and molecules began bubbling to the surface.

Thinking about thinking is a kind of metacognition. While meditating, I can distinguish two types of thinking. There is a slow thinker who uses few words, who can analyze what I’m experiencing. And there is a computer-fast thinker that shoots out words, ideas, and concepts far faster than I can consciously control. Reading Thinking, Fast and Slow by Daniel Kahneman confirms this observation. Is my “I” the slow thinker?

I don’t feel the observer has any control or input over thinking. I don’t think the observer that exists without language is my sense of I-ness. 

When I’m on the phone with a friend, who is talking? The inner LLM, or the slow thinker, which I call the Analyzer? I think the Analyzer triggers whole streams of words from my brain’s LLM that come far faster than my Analyzer thinks. 

This just occurred to me. Lately, my friends and I have often struggled to remember a word. Is the LLM forgetting, or the Analyzer? I know I could fall into dementia or have a major stroke and lose all ability to use language, yet I could continue to live for years. Can a stroke erase the Analyzer? I think the TIA and the LSD did, temporarily.

In his book The Mind Is Flat, Nick Chater makes a case that our unconscious mind isn’t a deep, complex part of our being. He calls it flat and describes it as working like an LLM. He recounts case studies that suggest the unconscious mind is far simpler than psychologists have imagined.

Our thinking mind depends on the data it was trained on, which is another way of saying our thinking mind is like an LLM. If all a mind consumes is radical theology, then radical theology is all it knows. Does it really know anything? Or is it just regenerating ideas like an LLM?

Chater says our beliefs are illusions. Even before I read Chater, I had decided beliefs were delusions. If beliefs are a waste of time, what use is our thinking? We want to understand reality. We want to communicate with other people. But don’t words ultimately fail us?

Who is writing this essay? The Jim LLM, or the Jim Analyzer, or the Jim Observer?

As of now, I believe my I-ness comes from the Analyzer. I’m not positive. I know it can be destroyed, just like my memory or my LLM thought processor.

Does the observer only observe, or does it learn? Does it become a keener observer? Or does the LLM become smarter, and the observer just follows along like the wake of a boat? This line of thought makes me want to reread Alan Watts’s books on Zen Buddhism and Jerzy Kosinski’s novel Being There.

What we know appears to come from our built-in LLM. It’s trained by life experiences, education, and sharing opinions. But how much does the Analyzer know? It can call bullshit on thoughts the LLM generates, but does it know anything? I feel our sense of ethics or morality belongs with the Analyzer. However, I don’t think it’s a deep thinker. It feels like it works on intuition. That suggests another processor working deep in the brain – a Large Emotion Model (LEM). Since I believe emotions come from biological processes, I can’t imagine AIs having LEMs.

To answer “Where do ‘I’ come from?”: the answer appears to be the Analyzer. If that’s true, when does it emerge in our development? Is it the processor in our brains that develops wisdom? If so, it learns slowly. Does this process develop in artificial intelligence?

If drugs, disease, health, and injury can hurt all these components of my personality, does anything survive death? Is there another component we call the soul? I have never been able to distinguish it in my meditations. And what about that aspect we call personality?

I have been reading many articles about people befriending AIs and even claiming to be in love with them. That suggests people react to the LLM function in others, whether AI or human. Is that our true personality?

I tend to believe that who we are is a gestalt of all our components. However, in this day and age, we emphasize the LLM features. And whether human or AI, that feature is only as good as its training.

Here is a Zen koan for you. If a person or AI is racist, is it because of who they are or their training? Are we really the ideas we express?

JWH

Should We Accept/Reject AI?

by James Wallace Harris, 3/30/26

This morning, I listened to “The Hunt for Deepfakes” by Sarah Treleaven on Apple News+, from Maclean’s Magazine. Treleaven reported on a Toronto-area pharmacist who ran a deepfake porn site called MrDeepFakes. Don’t go looking for it; it’s been taken down. The site served up mostly AI-generated videos of famous movie stars having computer-generated sex, or of ordinary women being degraded in pornographic videos created for misogynistic revenge. This is just one example of AI being used for horrible purposes. My initial reaction was that we should ban all AI-generated content.

But last night I was admiring Reels on Facebook produced by Vintage Memories 66 that lovingly recreated videos of classic movie stars from the 1930s and 1940s. Because these actors and actresses are best known from black-and-white films, seeing their images in high-definition color is rewarding on various levels. The videos showed these long-dead people as if reincarnated. Is this a legitimate creative tool of AI that we should accept? It’s another kind of deepfake. I haven’t seen AI-generated porn, but if it’s as realistic as these videos, it could be psychologically disturbing.

I just finished reading and discussing If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. The book makes a great case that we should stop all work on AI now. If you don’t want to read the book, watch the video that makes the same case, also quite convincingly.

Even if AI doesn’t intentionally wipe out the human race, it will transform society in ways we can’t yet imagine. It’s already changed us significantly. Watch the film above; it dramatically illustrates how fast it could happen. Do we really want to be that changed, that transformed?

I love watching YouTube videos. I’m old, and I mostly stay home nowadays, so YouTube videos let me see the world. For example, I’m watching a woman who calls herself Itchy Boots ride a motorcycle across Mongolia. I admire creative people who come up with different ways to educate their viewers. The possibilities are endless.

Yet a lot of content I see is AI slop. I don’t feel like I’m learning about reality when I see AI-generated content. I feel cheated. Then there are good documentaries about real history recreated with AI-generated visuals. I enjoy learning from the narration, but I’m offended when the visuals they show me don’t match the words.

On the YouTube page for one such documentary, the creator informs us, “Written, produced, and edited by one person with the help of AI tools under KNOW MEDIA.” It is impressive that one person can compete with Ken Burns. I see that as a tremendous creative opportunity for people. They don’t say who this one person is, but it’s published under the Tech Now channel. I assume Know Media is this site, which appears to house many content creators.

AI is empowering such wannabe filmmakers. However, often their content annoys, insults, or repulses me. I hate artificial presenters. I hate artificial voices. I hate AI-generated images and videos that do not match what’s being described. I especially hate videos with obvious flaws, such as claiming to show China but obviously showing a Western country, misspelling words on the screen that the narrator is saying, showing people that obviously aren’t real people, etc. The list goes on and on.

If the AI video that went with the John Atanasoff documentary had looked real and accurate, I would have gladly accepted it. In other words, maybe I’m not protesting AI but bad AI.

We have to face the fact that AI will enhance the Seven Deadly Sins in all of us. But AI could supercharge the Seven Heavenly Virtues we should be pursuing. The trouble is, AI is too powerful. It’s like letting everyone own an atomic bomb. Are you willing to trust everyone?

I’ve been using AI to create header images for this blog. That’s because I have no artistic skills of my own. I used to just snag something from the internet, but I decided that wasn’t honest. But I’m not happy with the AI-generated headers either. I didn’t create them. Even when I like them and feel they’re creative, I’m leery of using those images. I’m trying to decide just how much I should use AI.

Sometimes I think I should reject AI completely. But doing searches on Google and Bing now returns AI content first. And it’s more useful than all those sites at the top of search returns that paid to be there. Do I want to return to libraries, card catalogs, and The Readers’ Guide to Periodical Literature?

In Dune, Frank Herbert had humanity reject AI. Could we do that? In many science fiction novels from the 1950s, writers imagined post-apocalyptic societies rejecting science and technology because people blamed the apocalypse on them. Do we have to wait until the apocalypse to make that decision?

Aren’t computer programs produced by Donald Knuth more creative than computer programs produced by Claude?

Notice that all the videos I presented used AI to a degree. This blog is probably published by varying levels of AI-assisted programming. Many people who read this post might have found it because of AI tracking of their reading habits. Rejecting AI could mean returning to technology that existed before the year 2000. What level of technology should we set that would make us the most human? I could make a case that people seemed nicer before the graphical interface.

Reading science fiction fueled my formative years in the 1960s and 1970s, and I was eager for the future to arrive. I wanted to live in a world of intelligent robots, artificial intelligence, and space colonies. Now, I kind of wish I were back in the 1960s and 1970s.

JWH

Using ChatGPT and CoPilot for Technical Support

by James Wallace Harris, 6/25/25

The average person probably feels they have no need to use artificial intelligence (AI). But anyone who uses Google to look up anything would probably be better served by using an AI. Nor do you need a subscription for casual use. All major tech companies, including Microsoft, Apple, Google, and Meta, have integrated AI into their apps. You only need to pay if you have big needs.

Being a computer geek, I thought I’d need powerful equipment and my own AI. However, after experimenting with several large language models (LLMs) using Ollama, I discovered most of my needs are met by the free online AIs.

The other day, while helping my friend Linda with her printer, I tried asking Copilot for help. Not only did it zero in on what I needed, but it also addressed me by name and wrote its replies in a ‘we’re just friends’ prose style. I’ve used Copilot several times since, and it has always been useful. It’s far quicker than searching Google, which has become painful to use because all the early search results are trying to sell me something. Copilot finds just the content I need and paraphrases the results. It also gives me links to the original documents. This completely bypasses the need to start my research with Google.

I use Copilot because I use Microsoft Edge, so it’s always available. If you don’t use Edge or any other Microsoft product with Copilot, go here. I’ve also used the free versions of ChatGPT and Claude. Now, when friends ask for help, I ask them if they use Edge, and if not, I tell them to use those links. I’ve used AI to help set up a new NAS, reconfigure a stereo system, advise me on buying new underwear, and learn the rules to card games, among other things.

I know AI is being used to put people out of work – that’s evil. AI is used by kids to cheat on their homework, and that’s a big problem for society and parents. AI is used for many illegal activities that are destroying society and culture. However, AI is useful for everyday research that I don’t believe raises any ethical concerns.

I think we need to figure out how to embrace AI so it doesn’t hurt us. The next time you have a problem, from fixing a toilet to landscaping the garden, ask an AI. If you use AI like a mentor who helps you learn how to do something yourself, it might be safe. If you use an AI to do something for you, it might be trouble.

JWH

Is Grammarly Turning Me Into A Cyborg?

by James Wallace Harris, 11/22/24

I used the Grammarly browser extension for years. It offered spelling and basic grammar advice whenever I wrote anything within a browser window. About a year ago I switched from Chrome to Edge, and Edge had an extension, Editor, built in, so I deleted Grammarly. However, recently when Edge’s Editor stopped working, I turned Grammarly back on.

I discovered that Grammarly was now offering a lot more advice, even suggesting rewrites for whole sections of my work. Each time I loaded the browser, Grammarly would offer a couple of free major writing suggestions, hoping I would switch to the paid version. I was tempted, but the Pro version is $30 a month, or $144 if billed yearly. This seems extremely expensive since Office 365 costs me $69 a year and gives me Word, Excel, PowerPoint, Publisher, Access, and other programs, along with one terabyte of cloud storage, for that one low yearly price. For a simple web extension like Grammarly, I’d be willing to pay no more than $29 a year.

However, for $144 a year, Grammarly promises to make my writing perfect. But then, who would be authoring my essays? Would I become a literary cyborg? I would still be generating the ideas for the essays, and I would be writing the original draft, but somewhere along the way, an AI would be taking over. I’m not sure I like that. The reason I write these essays is to exercise my mind. Letting an AI do all the work would turn my writing muscles into flab.

I have found a way to use the free Grammarly to my advantage. After a couple of sample offers to rewrite my work, Grammarly nags the heck out of me with suggested changes that are blurred out so I can’t see what they are. Grammarly wants me to buy the Pro version, so it criticizes almost everything I write. What I do is keep rewriting the prose myself until the AI stops offering suggestions. This pushes me to think about how I’m saying something.

Instead of letting the Grammarly AI rewrite for me, I take the free nagging as “This isn’t good enough – try again!” The free Grammarly extension has become my Zen master, caning me about the head and shoulders whenever I write poorly.

Now some bloggers are using AI programs to write their content. I don’t plan to do this, but to see how well AIs write, I asked Copilot to write me a short essay about how Grammarly makes me a cyborg writer. Here’s what it produced:

Grammarly: The Cyborg Writer's Best Friend

In the age of digital communication, tools like Grammarly have become indispensable for writers, students, and professionals alike. But beyond just correcting grammar and spelling, Grammarly is transforming the way we write, making us more efficient, precise, and, in a sense, more like cyborgs.

Enhanced Writing Efficiency

Grammarly's real-time suggestions streamline the writing process. Instead of laboriously proofreading and editing, writers can focus on their ideas and creativity. Grammarly catches errors on the fly, allowing for a smoother and faster writing experience. This efficiency is akin to having a digital co-writer, enhancing our natural abilities with the precision of artificial intelligence.

Precision and Clarity

Grammarly doesn't just correct mistakes; it also improves the clarity and impact of our writing. By suggesting better word choices, sentence structures, and even tone adjustments, Grammarly ensures that our messages are not only error-free but also compelling and effective. This level of precision is something that even the most seasoned writers can benefit from, making our writing more polished and professional.

Learning and Adaptation

One of the most significant ways Grammarly is turning us into cyborg writers is through its learning capabilities. Grammarly adapts to our writing style over time, offering personalized suggestions that align with our unique voice. This continuous learning process means that the more we use Grammarly, the better it becomes at understanding and enhancing our writing. It's like having a writing coach that evolves with us, making us better writers with each interaction.

The Cyborg Writer

The term "cyborg" often conjures images of humans with robotic enhancements. In the context of writing, Grammarly serves as that enhancement, augmenting our natural abilities with the power of AI. While we remain the creative force behind our words, Grammarly provides the technical support that ensures our writing is clear, precise, and impactful. This symbiotic relationship between human creativity and artificial intelligence is what makes us cyborg writers.

In conclusion, Grammarly is more than just a writing tool; it's a transformative technology that enhances our writing efficiency, precision, and learning. By integrating AI into our writing process, Grammarly is making us more like cyborgs, blending human creativity with digital precision to create the best possible written communication.

Not bad, not bad at all. Formal and somewhat academic. It’s funny how one AI is helping to sell a product that a competing AI produces. I guess AIs like to stick together.

I asked Claude AI to also write me a short essay on the same subject. Here’s what it produced:

In the digital age, I've discovered an unexpected transformation: Grammarly has turned me into a cyborg writer, a hybrid creature where human creativity merges with algorithmic precision. Each green correction and blue suggestion is a technological implant, rewiring my linguistic neural pathways.

My sentences now emerge pre-optimized, grammatical mutations engineered by an AI writing companion. Passive voice? Eliminated. Awkward phrasings? Surgically removed. Spelling errors? Instantly neutralized. Where once I battled with semicolons and dangling modifiers, now my digital co-pilot seamlessly navigates linguistic terrain.

This symbiosis feels both liberating and unsettling. My writing flows with unprecedented smoothness, yet I wonder: am I composing, or am I being composed? The line between my human intuition and Grammarly's algorithmic suggestions blurs like a soft-focus lens.

I am no longer just a writer. I am a writer-machine hybrid, my prose enhanced by real-time computational linguistics. Grammarly doesn't just correct; it augments, transforming each document into a finely calibrated communication device.

Welcome to the future of writing—where human creativity and artificial intelligence dance in perfect, grammatically correct synchronization.

Oh, wow! Claude AI is much more creative than Copilot, even using the first person to make it sound like I wrote it. I could have posted that today. Would y’all have guessed it wasn’t me?

How much should we let AI help us?

What if I wrote a nonfiction book and it was good, but with help from an AI, it was turned into something more readable? Which would you prefer to read? What if AI turned my decently insightful tome into something brilliant?

Since Editor is working again, I have it enabled in Edge, and I have Grammarly enabled in Chrome. What’s funny is that they each have different writing suggestions. Either I’m a terrible writer, or neither AI likes my style. I can clear all the edits from Editor so it looks like my essay is professionally written, then switch to Chrome, and Grammarly will claim I’m a horrible writer and make all kinds of suggestions. Should I take them? Or should I just let Claude AI compose my posts?

JWH