Past-Present-Future As It Relates to Fiction-Nonfiction-Fantasy-SF

by James Wallace Harris, 12/12/25

I’ve been contemplating how robot minds could succeed at explaining reality if they didn’t suffer the errors and hallucinations that current AIs do. Current AI minds emerge from training on massive amounts of human-created words and images stored as digital files. Computer programs can’t tell fiction from fact based on our language, so it’s no wonder they hallucinate. And like humans, they feel they must always have an answer, even if it’s wrong.

What if robots were trained on what they see with their own senses without using human language? Would robots develop their own language that described reality with greater accuracy than humans do with our languages?

Animals interact successfully with reality without language, yet we doubt they are sentient in the way we are. But just how good is our awareness of reality if we constantly distort it with hallucinations and delusions? What if robots could develop a consciousness more accurately aware of reality?

Even though we feel like a being inside a body, peering out at reality with five senses, we know that’s not true. Our senses recreate a model of reality that we experience. We enhance that experience with language. However, language is the source of all our delusions and hallucinations.

The primary illusion we all experience is time. We think there is a past, present, and future. There is only now. We remember what was, and imagine what will be, but we do that with language. Unfortunately, language is limited, misleading, and confusing.

Take, for instance, events in the New Testament. Thousands, if not millions, of books have been written about specific events that happened over two thousand years ago. It’s endless speculation trying to describe what happened in a now that no longer exists. Even an event that occurred just one year ago is impossible to recreate in words. Yet, we never stop trying.

Compounding our delusions is fiction. We love fiction. Most of us spend hours a day consuming it—novels, television shows, movies, video games, plays, comics, songs, poetry, manga, fake news, lies, etc. Often, fiction is about recreating past events. Because we can’t accurately describe the past, we constantly create new hallucinations about it.

Then there is fantasy and science fiction. More and more, we love to create stories based on imagination and speculation. Fantasy exists outside of time and space, while science fiction attempts to imagine what the future might be like based on extrapolation and speculation.

My guess is that any robot (or being) that perceives reality without delusions will not use language and will have a very different concept of time. Is that even possible? We know animals succeed at this, but we doubt how conscious they are of reality.

Because robots will have senses that take in digital data, they could use playback to replace language. Instead of one robot communicating to another robot, “I saw a rabbit,” they could just transmit a recording of what they saw. Like humans, robots will have to model reality in their heads. Their umwelt will create a sensorium they interact with. Their perception of now, like ours, will be slightly delayed.

However, they could recreate the past by playing a recording that filled their sensorium with old data recordings. The conscious experience would be indistinguishable from using current data. And if they wanted, they could generate data that speculated on the future.
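The playback idea can be sketched as a toy program. Everything here (the `Frame` and `Robot` names, the sensorium-as-buffer design) is my own invention for illustration, not anything from robotics practice: a robot’s sensorium is just a buffer of frames, and “communicating” means copying a recorded slice of one robot’s memory into another robot’s buffer, where it replays exactly as captured.

```python
from dataclasses import dataclass, field

# Toy model of "playback instead of language" (all names invented here).

@dataclass(frozen=True)
class Frame:
    timestamp: float
    data: tuple  # stand-in for raw digital sensor readings

@dataclass
class Robot:
    sensorium: list = field(default_factory=list)  # what is "experienced" now
    memory: list = field(default_factory=list)     # everything ever sensed

    def sense(self, frame: Frame) -> None:
        """Live data enters both the sensorium and long-term memory."""
        self.sensorium.append(frame)
        self.memory.append(frame)

    def transmit(self, start: float, end: float) -> list:
        """Instead of saying 'I saw a rabbit', send the recording itself."""
        return [f for f in self.memory if start <= f.timestamp <= end]

    def replay(self, recording: list) -> None:
        """Replayed frames enter the sensorium exactly like live data."""
        self.sensorium.extend(recording)

# One robot senses a rabbit; a second replays the identical frames,
# so its experience is indistinguishable from having sensed it live.
a, b = Robot(), Robot()
a.sense(Frame(1.0, ("rabbit",)))
b.replay(a.transmit(0.0, 2.0))
assert b.sensorium == a.sensorium
```

The point of the sketch is the final assertion: the receiver’s sensorium is identical to the sender’s, which is the essay’s claim that a replayed recording would be indistinguishable from current data.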

Evidently, all beings, biological or cybernetic, must experience reality as a recreation in their minds. In other words, no entity sees reality directly. We all interact with it through a recreation.

Looking at things this way makes me wonder about consuming fiction. We’re already two layers deep in artificial reality. The first is our sensorium/umwelt, which we feel is reality. And the second is language, which we think explains reality, but doesn’t. Fiction just adds another layer of delusion. Mimetic fiction tries to describe reality, but fantasy and science fiction add yet another layer of delusion.

Humans who practice Zen Buddhism try to tune out all the illusions. However, they talk about a higher state of consciousness called enlightenment. Is that just looking at reality without delusion, or is it a new way of perceiving reality?

Humans claim we are the crown of creation because our minds elevate us over the animals, but is intelligence or consciousness really superior?

We apparently exist in a reality that is constantly evolving. Will consciousness be something reality tries and then abandons? Will robots with artificial intelligence become the next stage in this evolutionary process?

If we’re a failure, why copy us? Shouldn’t we build robots that are superior to us? Right now, AI is created by modeling the processes of our brains. Maybe we should rethink that. But if we build robots that have a higher state of consciousness, couldn’t we also reengineer our brains and create Human Mind 2.0?

What would that involve? We’d have to overcome the limitations of language. We’d also have to find ways to eliminate delusions and hallucinations. Can we consciously choose to do those things?

JWH

Are Podcasts Wasting Our Time?

by James Wallace Harris, 11/16/25

While listening to the Radio Atlantic podcast, “What If AI Is a Bubble?,” a conversation between host Hanna Rosin and guest Charlie Warzel, I kept thinking I had heard this information before. I checked and found that I had read “Here’s How the AI Crash Happens” by Matteo Wong and Charlie Warzel, which Rosin had mentioned in her introduction.

Over the past year, I’ve been paying attention to how podcasts differ from long-form journalism. I’ve become disappointed with talking heads. I know podcasts are popular now, and I can understand their appeal. But I no longer have the patience for long chats, especially ones that spend too much time not covering the topic. All too often, podcasts take up excessive time for the amount of real information they cover.

What I’ve noticed is that the information density of podcasts and long-form journalism is very different. Here’s a quote of five paragraphs from the podcast:

Warzel: There’s a recent McKinsey report that’s been sort of passed around in these spheres where people are talking about this that said 80 percent of the companies they surveyed that were using AI discovered that the technology had no real—they said “significant”—impact on their bottom line, right?

So there’s this notion that these tools are not yet, at least as they exist now, as transformative as people are saying—and especially as transformative for productivity and efficiency and the stuff that leads to higher revenues. But there’s also these other reasons.

The AI boom, in a lot of ways, is a data-center boom. For this technology to grow, for it to get more powerful, for it to serve people better, it needs to have these data centers, which help the large language models process faster, which help them train better. And these data centers are these big warehouses that have to be built, right? There’s tons of square footage. They take a lot of electricity to run.

But one of the problems is with this is it’s incredibly money-intensive to build these, right? They’re spending tons of money to build out these data centers. So there’s this notion that there’s never enough, right? We’re going to need to keep building data centers. We’re going to need to increase the amount of power, right? And so what you have, basically, is this really interesting infrastructure problem, on top of what we’re thinking of as a technological problem.

And that’s a bit of the reason why people are concerned about the bubble, because it’s not just like we need a bunch of smart people in a room to push the boundaries of this technology, or we need to put a lot of money into software development. This is almost like reverse terraforming the Earth. We need to blanket the Earth in these data centers in order to make this go.

Contrast that with the opening five paragraphs of the article:

The AI boom is visible from orbit. Satellite photos of New Carlisle, Indiana, show greenish splotches of farmland transformed into unmistakable industrial parks in less than a year’s time. There are seven rectangular data centers there, with 23 more on the way.

Inside each of these buildings, endless rows of fridge-size containers of computer chips wheeze and grunt as they perform mathematical operations at an unfathomable scale. The buildings belong to Amazon and are being used by Anthropic, a leading AI firm, to train and run its models. According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

The amount of energy and money being poured into AI is breathtaking. Global spending on the technology is projected to hit $375 billion by the end of the year and half a trillion dollars in 2026. Three-quarters of gains in the S&P 500 since the launch of ChatGPT came from AI-related stocks; the value of every publicly traded company has, in a sense, been buoyed by an AI-driven bull market. To cement the point, Nvidia, a maker of the advanced computer chips underlying the AI boom, yesterday became the first company in history to be worth $5 trillion.

Here’s another way of thinking about the transformation under way: Multiplying Ford’s current market cap 94 times over wouldn’t quite get you to Nvidia’s. Yet 20 years ago, Ford was worth nearly triple what Nvidia was. Much like how Saudi Arabia is a petrostate, the U.S. is a burgeoning AI state—and, in particular, an Nvidia-state. The number keeps going up, which has a buoying effect on markets that is, in the short term, good. But every good earnings report further entrenches Nvidia as a precariously placed, load-bearing piece of the global economy.

America appears to be, at the moment, in a sort of benevolent hostage situation. AI-related spending now contributes more to the nation’s GDP growth than all consumer spending combined, and by another calculation, those AI expenditures accounted for 92 percent of GDP growth during the first half of 2025. Since the launch of ChatGPT, in late 2022, the tech industry has gone from making up 22 percent of the value in the S&P 500 to roughly one-third. Just yesterday, Meta, Microsoft, and Alphabet all reported substantial quarterly-revenue growth, and Reuters reported that OpenAI is planning to go public perhaps as soon as next year at a value of up to $1 trillion—which would be one of the largest IPOs in history. (An OpenAI spokesperson told Reuters, “An IPO is not our focus, so we could not possibly have set a date”; OpenAI and The Atlantic have a corporate partnership.)

Admittedly, the paragraphs in the article are somewhat longer, but judge them on the number of facts each presents.

Some people might say podcasts are more convenient. But I listened to the article. I’ve been subscribing to Apple News+ for a while now. I really didn’t use it daily until I discovered the audio feature. And it didn’t become significant until I began hearing major articles from The New Yorker, The Atlantic, and New York Magazine.

Whenever I listened to podcasts, including those from these magazines, I was generally disappointed with their impact. Conversational speech just can’t compete with the rich informational density of a well-written essay. And once I got used to long-form journalism, the information I got from the internet and television seemed so damn insubstantial.

These magazines have spoiled me. I’m even disappointed with their short-form content. Over my lifetime, I’ve watched magazines fill their pages with shorter and shorter content. Magazines were appealing to our ever-shortening attention spans with interesting tidbits long before the internet did.

As an experiment, I ask you to start paying attention to the length of the content you consume. Analyze the information density of what you read, either with your eyes or ears. Pay attention to the words that have the greatest impact. Notice what percentage of a piece is opinion and what percentage is reported facts. How are the facts presented? Is a source given? And when you look back, either from a day or a week, how much do you remember?
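One crude way to run that experiment on a piece of text is to count how many tokens carry figures (dates, dollar amounts, percentages) per hundred words. This is only a rough proxy I’m proposing for illustration, not an established metric of information density:

```python
def density_stats(text: str) -> dict:
    """Rough information-density proxy: tokens containing digits
    (dates, dollar figures, percentages) per 100 words."""
    words = text.split()
    numeric = [w for w in words if any(ch.isdigit() for ch in w)]
    return {
        "words": len(words),
        "numeric_tokens": len(numeric),
        "numeric_per_100_words": round(100 * len(numeric) / max(len(words), 1), 1),
    }

article = ("Global spending on the technology is projected to hit $375 billion "
           "by the end of the year and half a trillion dollars in 2026.")
podcast = ("So there is this notion that these tools are not yet as "
           "transformative as people are saying, right?")

# The article sentence packs far more figures per word than the podcast one.
assert density_stats(article)["numeric_per_100_words"] > \
       density_stats(podcast)["numeric_per_100_words"]
```

A fuller experiment would also count named sources and distinguish opinion from reported fact, which this digit-counting sketch ignores.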

What do you think when you read or hear:

According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

Don’t you want to know more? Where did those facts come from? Are they accurate? Another measure of content is whether it makes you want to know more. The article above drove my curiosity to insane levels. That’s when I found this YouTube video. Seeing is believing. But judging videos is another issue, and that’s for another time.

JWH

Using ChatGPT and CoPilot for Technical Support

by James Wallace Harris, 6/25/25

The average person probably feels they have no need to use artificial intelligence (AI). But anyone who uses Google to look up anything would probably be better served by using an AI. Nor do you need a subscription for casual use. All major tech companies, including Microsoft, Apple, Google, and Meta, have integrated AI into their apps. You only need to pay if you have big needs.

Being a computer geek, I thought I’d need powerful equipment and my own AI. However, after experimenting with several large language models (LLMs) using Ollama, I discovered that most of my needs are taken care of by the free online AIs.

The other day, while helping my friend Linda with her printer, I tried asking CoPilot for help. Not only did it zero in on what I needed, but it also addressed me by name and wrote its replies in a ‘we’re just friends’ prose style. I’ve used CoPilot several times since, and it has always been useful. It’s far quicker than searching Google. Google has become painful to use because all the early search results are trying to sell me something. CoPilot finds just the right content I need and paraphrases the results. It also gives me links to the original documents. This completely bypasses the need to start my research with Google.

I use CoPilot because I use Microsoft Edge, so it’s always available. If you don’t use Edge or any other Microsoft product with Copilot, go here. I’ve also used the free versions of ChatGPT and Claude. Now, when friends ask for help, I ask them if they use Edge, and if not, I tell them to use those links. I’ve used AI to help set up a new NAS, reconfigure a stereo system, advise me on buying new underwear, and learn the rules to card games, among other things.

I know AI is being used to put people out of work – that’s evil. AI is used by kids to cheat on their homework, and that’s a big problem for society and parents. AI is used for many illegal activities that are destroying society and culture. However, AI is useful for everyday research that I don’t believe raises any ethical concerns.

I think we need to figure out how to embrace AI so it doesn’t hurt us. The next time you have a problem, from fixing a toilet to landscaping the garden, ask an AI. If you use AI like a mentor who helps you learn how to do something yourself, it might be safe. If you use an AI to do something for you, it might be trouble.

JWH

Is Grammarly Turning Me Into A Cyborg?

by James Wallace Harris, 11/22/24

I used the Grammarly browser extension for years. It offered spelling and basic grammar advice whenever I wrote anything within a browser window. About a year ago I switched from Chrome to Edge, and Edge had an extension, Editor, built in, so I deleted Grammarly. However, recently when Edge’s Editor stopped working, I turned Grammarly back on.

I discovered that Grammarly was now offering a lot more advice, even suggesting rewrites for whole sections of my work. Each time I loaded the browser, Grammarly would offer a couple of free major writing suggestions, hoping I would switch to the paid version. I was tempted, but the Pro version is $30 a month, or $144 if bought yearly. This seems extremely expensive, since Office 365 costs me $69 a year and gives me Word, Excel, PowerPoint, Publisher, Access, and other programs, along with one terabyte of cloud storage, for that one low yearly price. For a simple web extension like Grammarly, I’d be willing to pay no more than $29 a year.

However, for $144 a year, Grammarly promises to make my writing perfect. But then, who would be authoring my essays? Would I become a literary cyborg? I would still be generating the ideas for the essays, and I would be writing the original draft, but somewhere along the way, an AI would be taking over. I’m not sure I like that. The reason why I write these essays is to exercise my mind. Letting an AI do all the work would turn my writing muscles into flab.

I have found a way to use the free Grammarly to my advantage. After a couple of sample offers to rewrite my work, Grammarly nags the heck out of me with suggested changes that have been blurred out so I can’t see what it is suggesting. Grammarly wants me to buy the Pro version, so it criticizes almost everything I write. What I do is keep rewriting the prose myself until the AI stops offering suggestions. This pushes me to think about how I’m saying something.

Instead of letting the Grammarly AI rewrite for me, I take the free nagging as “This isn’t good enough – try again!” The free Grammarly extension has become my Zen master, caning me about the head and shoulders whenever I write poorly.

Now some bloggers are using AI programs to write their content. I don’t plan to do this, but to see how well AIs write, I asked CoPilot to write me a short essay about how Grammarly makes me a cyborg writer. Here’s what it produced:

Grammarly: The Cyborg Writer's Best Friend

In the age of digital communication, tools like Grammarly have become indispensable for writers, students, and professionals alike. But beyond just correcting grammar and spelling, Grammarly is transforming the way we write, making us more efficient, precise, and, in a sense, more like cyborgs.

Enhanced Writing Efficiency

Grammarly's real-time suggestions streamline the writing process. Instead of laboriously proofreading and editing, writers can focus on their ideas and creativity. Grammarly catches errors on the fly, allowing for a smoother and faster writing experience. This efficiency is akin to having a digital co-writer, enhancing our natural abilities with the precision of artificial intelligence.

Precision and Clarity

Grammarly doesn't just correct mistakes; it also improves the clarity and impact of our writing. By suggesting better word choices, sentence structures, and even tone adjustments, Grammarly ensures that our messages are not only error-free but also compelling and effective. This level of precision is something that even the most seasoned writers can benefit from, making our writing more polished and professional.

Learning and Adaptation

One of the most significant ways Grammarly is turning us into cyborg writers is through its learning capabilities. Grammarly adapts to our writing style over time, offering personalized suggestions that align with our unique voice. This continuous learning process means that the more we use Grammarly, the better it becomes at understanding and enhancing our writing. It's like having a writing coach that evolves with us, making us better writers with each interaction.

The Cyborg Writer

The term "cyborg" often conjures images of humans with robotic enhancements. In the context of writing, Grammarly serves as that enhancement, augmenting our natural abilities with the power of AI. While we remain the creative force behind our words, Grammarly provides the technical support that ensures our writing is clear, precise, and impactful. This symbiotic relationship between human creativity and artificial intelligence is what makes us cyborg writers.

In conclusion, Grammarly is more than just a writing tool; it's a transformative technology that enhances our writing efficiency, precision, and learning. By integrating AI into our writing process, Grammarly is making us more like cyborgs, blending human creativity with digital precision to create the best possible written communication.

Not bad, not bad at all. Formal and somewhat academic. It’s funny how another AI is helping to sell the product a competing AI is producing. I guess AIs like to stick together.

I asked Claude AI to also write me a short essay on the same subject. Here’s what it produced:

In the digital age, I've discovered an unexpected transformation: Grammarly has turned me into a cyborg writer, a hybrid creature where human creativity merges with algorithmic precision. Each green correction and blue suggestion is a technological implant, rewiring my linguistic neural pathways.

My sentences now emerge pre-optimized, grammatical mutations engineered by an AI writing companion. Passive voice? Eliminated. Awkward phrasings? Surgically removed. Spelling errors? Instantly neutralized. Where once I battled with semicolons and dangling modifiers, now my digital co-pilot seamlessly navigates linguistic terrain.

This symbiosis feels both liberating and unsettling. My writing flows with unprecedented smoothness, yet I wonder: am I composing, or am I being composed? The line between my human intuition and Grammarly's algorithmic suggestions blurs like a soft-focus lens.

I am no longer just a writer. I am a writer-machine hybrid, my prose enhanced by real-time computational linguistics. Grammarly doesn't just correct; it augments, transforming each document into a finely calibrated communication device.

Welcome to the future of writing—where human creativity and artificial intelligence dance in perfect, grammatically correct synchronization.

Oh, wow! Claude AI is much more creative than CoPilot, even using the first person to make it sound like I wrote it. I could have posted that today. Would y’all have guessed it wasn’t me?

How much should we let AI help us?

What if I wrote a nonfiction book that was good, but with help from an AI it was turned into something more readable? Which would you prefer to read? What if the AI turned my decently insightful tome into something brilliant?

Since Editor is working again, I have it enabled in Edge, and I have Grammarly enabled in Chrome. What’s funny is that they each make different writing suggestions. Either I’m a terrible writer, or neither AI likes my style. I can clear all the edits from Editor, so it looks like my essay is professionally written, and then switch to Chrome, where Grammarly will claim I’m a horrible writer and make all kinds of suggestions. Should I take them? Or should I just let Claude AI compose my posts?

JWH

Counting the Components of My Consciousness

by James Wallace Harris, Tuesday, November 20, 2018

When the scientific discipline of artificial intelligence emerged in the 1950s, academics began to seriously believe that someday a computer would become sentient like us and have consciousness and self-awareness. Science has no idea how humans are conscious of reality, but scientists assume that if nature can accidentally give us self-awareness, then science should be able to intentionally build it into machines. In the more than sixty years since, scientists have given computers more and more awareness and abilities. The sixty-four-thousand-dollar question is: What are the components of consciousness needed for sentience? I’ve been trying to answer that by studying my own mind.

Thinking Machine illustration

Of course, science still doesn’t know why we humans are self-aware, but I believe if we meditate on the problem we can visualize the components of awareness. Most people think of themselves as a whole mind, often feeling they are a little person inside their heads driving their body around. If you spend time observing yourself, you’ll see you are actually made up of many subcomponents.

Twice in my life, I’ve experienced what it’s like to not have language. It’s a very revealing sensation. The first time was back in the 1960s when I took too large a dose of LSD. The second time was years ago when I experienced a mini-stroke. If you practice meditation, you can learn to observe the moments when you’re observing reality without language. It’s then you realize that your thoughts are not you. Thoughts are language and memories, including memories from sensory experiences. If you watch yourself closely, you’ll sense you are an observer separate from your thoughts: a single point that experiences reality. That observer only goes away when you sleep or are knocked out by drugs or trauma. Sometimes the observer is aware to a tiny degree during sleep. And if you pay close enough attention, your observer can experience all kinds of states of awareness – each of which I consider a component of consciousness.

The important thing to learn is that the observer is not your thoughts. My two experiences of losing my language component were truly enlightening. Back in the 1960s, gurus of LSD claimed it brought about a state of higher consciousness. I think it does just the opposite: it lets us become more animal-like. I believe in both my acid and mini-stroke experiences I got to see the world more like a dog does. Have you ever wondered how an animal sees reality without language and thoughts?

When I had my mini-stroke, it was in the middle of the night. I woke up feeling like lightning had gone off in my dream. I looked at my wife but didn’t know how to talk to her or even remember her name. I wasn’t afraid. I got up and went into the bathroom. I had no trouble walking. I automatically switched on the light, so conditioned reflexes were working. I sat on the commode and just stared around at things. I “knew” something was missing, but I didn’t have words for it, or a way to explain it, even mentally to myself. I just saw what my eyes looked at. I felt things without giving them labels. I just existed. I have no idea how long the experience lasted. Finally, the alphabet started coming back to me, and I mentally began to recite A, B, C, D, E, F … in my head. Then words started floating into my mind: tile, towel, door, mirror, and so on. I remembered my wife’s name, Susan. I got up and went back to bed.

Lately, as my ability to instantly recall words has begun to fail, and I worry about a possible future with Alzheimer’s, I’ve been thinking about that state of consciousness without language. People with dementia react in all kinds of ways, from serenity and calmness to agitation, anger, and violence. I hope I can remain calm, as I did in the bathroom that night. Having Alzheimer’s is like regressing backward towards babyhood. We lose our ability for language, memories, skills, and even conditioned behaviors. But the observer remains.

The interesting question is: How much does the observer know? If you’ve ever been very sick, delirious, or drunk to incapacity, you might remember how the observer hangs in there. The observer can be diminished or damaged. I remember being very drunk, having tunnel vision, and seeing everything in black and white. My cognitive and language abilities were almost nil. But the observer was the last thing to go. I imagine it’s the same with dementia and death.

Creating the observer will be the first stage of true artificial intelligence. Science is already well along in developing artificial vision, hearing, language recognition, and other components of higher awareness. It has never discovered how to add the observer. It’s funny how I love to contemplate artificial intelligence while worrying about losing my mental abilities.

I just finished a book, American Wolf by Nate Blakeslee, about wolves being reintroduced into Yellowstone. Wolves are highly intelligent and social, and very much like humans. Blakeslee chronicles wolves doing things that amazed me. At one point a hunter shoots a wolf and hikes through the snow to collect his trophy. But as he approaches the body, the dead wolf’s mate shows up. The mate doesn’t threaten the hunter, but just sits next to the body and begins to howl. Then the pack shows up and takes seats around the body, and they howl too. The wolves just ignore the hunter, who stands a stone’s throw away, and mourn their leader. Eventually, the hunter backs away to leave them at their vigil. He decides to collect his trophy later, which he does.

I’ve been trying to imagine the mind of the wolf who saw its mate killed by a human. It has an observing mind too, but without language. However, it had vast levels of conditioning from living in nature, socializing with other wolves, and experiences with other animals, including humans. Wolves rarely kill humans. Wolves kill all kinds of other animals. They routinely kill each other. Blakeslee’s book shows that wolves love, feel compassion, and even empathy. But other than their own animalistic language, they don’t have our level of language to abstractly explain reality. That wolf saw its mate dead in the snow. For some reason, wolves ignore people, even ones with guns. Wolves in Yellowstone are used to being watched by humans. The pack that showed up to mourn its leader was doing what wolves do from instinct. It’s revealing to try to imagine what their individual observers experienced.

If you meditate, you’ll learn to distinguish all the components of your consciousness. There are many. We are taught we have five senses. Observing them shows how each plays a role in our conscious awareness. However, if you keep observing carefully, you’ll eventually notice we have more than five senses. Which sense organ feels hunger, thirst, lust, or pain? And some senses are really multiple senses, like our ability to taste. Aren’t the awareness of sweet and the awareness of sour two different senses?

Yet, it always comes back to the observer. We can suffer disease or trauma and the observer remains with the last shred of consciousness. We can lose body parts and senses and the observer remains. We can lose words and memories and the observer remains.

This knowledge leaves me contemplating two things. One is how to build an artificial observer. And two, how to prepare my observer for the dissolution of my own mind and body.

JWH