Should We Accept/Reject AI?

by James Wallace Harris, 3/30/26

This morning, I listened to “The Hunt for Deepfakes” by Sarah Treleaven on Apple News+, from Maclean’s Magazine. Treleaven reported on a Toronto-area pharmacist who ran a deepfake porn site called MrDeepFakes. Don’t go looking for it; it’s been taken down. This site served up mostly AI-generated videos of famous movie stars having computer-generated sex, or ordinary women being degraded in AI-generated pornographic videos created for misogynistic revenge. This is just one example of AI being used for horrible reasons. My initial reaction was that we should ban all AI-generated content.

But last night I was admiring Reels on Facebook produced by Vintage Memories 66 that lovingly recreated videos of classic movie stars from the 1930s and 1940s. Because these actors and actresses are best known from black-and-white films, seeing their images in high-definition color is rewarding on various levels. The videos showed these long-dead people reincarnated. Is this a legitimate creative tool of AI that we should accept? It’s another kind of deepfake. I haven’t seen AI-generated porn, but if it’s as realistic as these videos, it could be psychologically disturbing.

I just finished reading and discussing If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. The book makes a great case that we should stop all work on AI now. If you don’t want to read the book, watch the video that makes the same case, also quite convincingly.

Even if AI doesn’t intentionally wipe out the human race, it will transform society in ways we can’t yet imagine. It’s already changed us significantly. Watch the film above; it dramatically illustrates how fast it could happen. Do we really want to be that changed, that transformed?

I love watching YouTube videos. I’m old, and I mostly stay home nowadays, so YouTube videos let me see the world. For example, I’m watching a woman who calls herself Itchy Boots ride a motorcycle across Mongolia. I admire creative people who come up with different ways to educate their viewers. The possibilities are endless.

Yet, a lot of content I see is AI slop. I don’t feel like I’m learning about reality when I see AI-generated content. I feel cheated. Then there are good documentaries about real history recreated with AI-generated visuals. I enjoyed learning from the narration, but I’m offended when the visuals I’m shown don’t match the words.

On their YouTube page, they inform us, “Written, produced, and edited by one person with the help of AI tools under KNOW MEDIA.” It is impressive that one person can compete with Ken Burns. I see that as a tremendous creative opportunity for people. They don’t say who this one person is, but it’s published under the Tech Now channel. I assume Know Media is this site, which appears to house many content creators.

AI is empowering such wannabe filmmakers. However, often their content annoys, insults, or repulses me. I hate artificial presenters. I hate artificial voices. I hate AI-generated images and videos that do not match what’s being described. I especially hate videos with obvious flaws, such as claiming to show China but obviously showing a Western country, misspelling words on the screen that the narrator is saying, showing people that obviously aren’t real people, etc. The list goes on and on.

If the AI video that went with the John Atanasoff documentary had looked real and accurate, I would have gladly accepted it. In other words, maybe I’m not protesting AI but bad AI.

We have to face the fact that AI will enhance the Seven Deadly Sins in all of us. But AI could supercharge the Seven Heavenly Virtues we should be pursuing. The trouble is, AI is too powerful. It’s like letting everyone own an atomic bomb. Are you willing to trust everyone?

I’ve been using AI to create header images for this blog. That’s because I have no artistic skills on my own. I used to just snag something from the internet, but I decided that wasn’t honest. But I’m not happy with the AI-generated headers either. I didn’t create them. Even when I like them and feel they’re creative, I’m leery of using those images. I’m trying to decide just how much I should use AI.

Sometimes I think I should reject AI completely. But searches on Google and Bing now return AI content first. And it’s more useful than all those sites at the top of search returns that paid to be there. Do I want to return to libraries, card catalogs, and The Readers’ Guide to Periodical Literature?

In Dune, Frank Herbert had humanity reject AI. Could we do that? In many science fiction novels from the 1950s, writers imagined post-apocalyptic societies rejecting science and technology because people blamed the apocalypse on them. Do we have to wait until the apocalypse to make that decision?

Aren’t computer programs produced by Donald Knuth more creative than computer programs produced by Claude?

Notice that all the videos I presented used AI to a degree. This blog is probably published using software built with varying levels of AI assistance. Many people who read this post might have found it because of AI tracking of their reading habits. Rejecting AI could mean returning to technology that existed before the year 2000. What level of technology should we set that would make us the most human? I could make a case that people seemed nicer before the graphical interface.

Having fueled my formative years in the 1960s and 1970s by reading science fiction, I was anxious for the future to arrive. I wanted to live in a world of intelligent robots, artificial intelligence, and space colonies. Now, I kind of wish I were back in the 1960s and 1970s.

JWH

The Coming of Household Robots

by James Wallace Harris, 3/26/26

By 2030, I expect to see robots for sale for $15,000 that will be as flexible as a Tai Chi master, stronger than an Olympic athlete, handier than a master plumber, and smarter than a college professor. These robots won’t have super-intelligence or self-awareness, because selling such beings would be unethical. Nor will they look human, although there will be another industry working to create human-looking androids that some people will buy for sex, but I don’t want to deal with that in this essay. I will predict that we’ll never create an android that can pass for human.

Will this technology disrupt society, blow up the economy, and derange human psychology? Can we integrate robots into our lives without destroying those lives? Science fiction has imagined possibilities since the 19th century, and fantasy, for even longer. Let’s examine some of the situations in which we might use a robot and extrapolate from that.

Already, hundreds of millions of people use AI today, so they can easily imagine conversations with an intelligent robot. And there are plenty of videos demonstrating the evolving physical abilities of robots. Recalling AI and robot progress from 2023 to 2026, it’s not difficult to imagine the progress this technology will achieve in the years 2026 to 2030.

The first thing we should do is visualize the robot you will buy. How tall do you want it to stand? Would a robot taller than you freak you out? Should the head have two eyes and a mouth, or would you be comfortable with a head with six eyes and no visible mouth? Should the body be humanoid? If so, should it wear clothes? If not, are there forms better suited for maximal utility? Do you want your robot to sit on the couch with you, or would you prefer it to stand?

Your wants will decide these choices. If you picture your robot kicking back in a La-Z-Boy and watching television with you, you’ll probably want it to be humanoid. If you buy a robot just for housework, yardwork, and home healthcare, you might purchase a robot whose shape best suits those chores. Right now, we buy robots to do individual tasks like vacuuming floors or mowing the lawn. But ultimately, wouldn’t it be more practical to have one robot that does everything rather than dozens of robots that do one thing?

Because so many millions use AI for conversation, I will assume faces will be important. Roboticists have experimented with giving robots facial expressions. And I’ve noticed that some robots in movies have body language, like C-3PO. Robots might not need to look very human to feel human.

I’ve been asking my friends about robots and their uses. One woman said she’d like a robot to share all her favorite activities and hobbies. She also said she sometimes wanted another husband, but ultimately decided husbands were too much trouble. Does that suggest we’re finding other people too much trouble, and we’d prefer machines?

Since most of my friends are around my age, their answers were much like mine. My wife can’t do much physically at all, and I’m getting less and less capable. I’d want a robot with the strength and stamina I had in my twenties to help me work around the house and in the yard. And as Susan and I got older, I’d want robots to be our live-in caretakers.

However, what if everyone did this? How many people would become unemployed? What happens to maids, gardeners, handymen, painters, car detailers, healthcare workers, and the other people we pay to come to our house? What if general-purpose household robots were also skilled at electrical work, plumbing, and maintaining HVACs?

If businesses replace white-collar workers with robots, and manufacturers replace factory workers with robots, and store owners replace retail workers with robots, what happens to the economy?

Some people worry that AI will become super-intelligent and want to wipe out humanity. That’s rather science fictional. But capitalists replacing labor with robots is all too real. Things are so complicated. How many people really want to wipe old people’s butts? Wouldn’t the wipers and the people being wiped prefer robots to have that job?

What jobs do humans want to keep, and which ones would they want to give to robots? And if you had no income, which jobs would you be willing to take?

I see owning a robot in old age as a prosthetic for my weakening body. It’s not to put someone else out of work, but to let me keep working on my own. But what if I were younger, and considering a robot to do housework? Right now, housework is good exercise for my mind and body. I keep telling my wife she should do housework to keep herself from becoming an invalid. I tell her she shouldn’t let me hog all the healthy benefits of housework. She doesn’t buy that.

Susan wants to hire a maid or cleaning service. Many of our friends have. I reply that as long as we’re strong enough to do housework ourselves, we should. But what if most people could afford robots to serve them? Would many people love living the upstairs lifestyle we see in Downton Abbey? Won’t that make us lazier? Will we become like astronauts, vigorously working out in the gym for two hours a day to make up for twenty-two hours of weightlessness?

Many people are questioning what social media, smartphones, and the Internet have done to society. Will AI and robots undermine human nature even more? It’s so hard to answer these questions. If millions of lonely people find comfort with AI and robots, is that bad? The obvious solution would be for half the lonely to meet up with the other half. 

Since those people aren’t doing that, does that suggest that something else is wrong? Could it be that some people prefer machines to other people? If so, the market for robots will be tremendous. So, even if we think AI and robots are bad for society, businesses will sell them, and we’ll buy them.

I should be out working in my yard. It needs a lot of work. But I rate the creative activity of writing this blog higher, so I’m skipping yardwork this morning. 

I can easily visualize a robot working outside, landscaping my yard, because of all the Ray Bradbury and Clifford Simak science fiction stories I’ve read. I don’t really like working in the yard, but I do wish it were nicer. I’d like to redesign my yard to maximize its benefits for insects, birds, and other wildlife. I wish my backyard were all wildflowers.

The idea of looking out the window that’s just behind my computer monitor and seeing a robot crafting a nature preserve for living creatures would be immensely pleasing. I can even imagine going for a walk in my neighborhood and seeing both people and robots working together and separately as I pass each yard. I even picture humans and robots walking dogs, stopping together to chat and let their pups sniff each other. In this daydream, I also see robots pushing old people in wheelchairs and babies in strollers. I also imagine coming home and finding Susan directing a robot to repaint the living room.

This is an idyllic fantasy. But is it one we really want?

JWH   

Are You Preparing for the 2030s?

by James Wallace Harris, 3/23/26

The 2030s will cover ages 78 to 88 for me. If you’re a teenager, the 2030s will cover your college and job-seeking years. If you’re in your twenties, the 2030s could be the years you get married and start a family. If I hadn’t started seriously saving in my fifties, I couldn’t have retired in the 2010s.

It is impossible to know the future. Even speculations based on extrapolation about present trends are nearly always wrong. However, it doesn’t hurt to be prepared like good Boy Scouts. In fact, I’ve heard luck defined as merely proper preparation.

For those of us who have orbited the Sun dozens of times, we have lived through constant change. We’ve learned that society never stays the same. People and their behaviors never seem to change, but relentless change churns our lives. Just observe the 14 decades of change in the films presented by TCM.

The iPhone was announced on January 9, 2007. I don’t even think Steve Jobs had any idea what it would do to the world in the 2010s and 2020s. Nor did we imagine what Amazon (1994), Facebook (2004), Twitter (2006), Instagram (2010), TikTok (2016), and ChatGPT (2022) would do to society. And how many folks expected the end of globalism and the return to nationalism in the 2010s?

I believe two technologies will transform society in the 2030s: AI and intelligent general-purpose robots. Ever since the Industrial Age began, Capitalists have resented the cost of Labor. But the reason we’ve always needed capitalism is that it gives most citizens employment. Capital and Labor were always tied together, but AI and robots could disconnect the relationship. What will that mean in the 2030s?

I know AI and robots will transform society in ways that will be judged evil in the coming decade. However, I will consider buying robotic caretakers for my wife and me in the 2030s. For a childless couple in their 80s, robots might be the cheapest and most advantageous solution. We will all find reasons (excuses?) to go along.

Humanity should decide to halt all development in AI and robotics right now. But we won’t. Greed is a dependable predictor of human behavior, and the millionaires and billionaires who see trillions in AI aren’t going to allow their greed to be curtailed.

It’s the same way the owners of trillions in fossil fuels haven’t let the threat of climate change interfere with their greed. But we can’t put all the blame on the rich, because we’ve all continued to consume fossil fuels. The weakness of human nature is also a reliable predictor of the future.

Societies only change during violent upheavals, such as the American, French, and Russian revolutions, the Civil War, WWI, and WWII. Moral upheavals, such as feminism, civil rights, and LGBTQIA+ rights, have been less effective at creating long-term, permanent change. Even the great upheaval created by the Enlightenment might not be permanent.

When thinking about what we might experience in the 2030s, we need to consider Black Swans. Donald Trump was a political black swan in 2016. There’s always a chance we’ll elect an Abraham Lincoln black swan in 2028 who will pull the nation back together. But black swans can’t be predicted.

Predictions for artificial intelligence range from the extinction of humanity to an age of unlimited abundance. Because technology has been a reliable agent of change for many decades, it’s probably safe to assume that trend will continue.

For example, if battery technology improves as much as companies working on battery science promise, expect a huge transformation. If just the Donut Labs battery turns out to be real, no one will want fossil fuels because renewable energy will be so dramatically cheaper.

If Elon Musk keeps his promise to manufacture millions of general-purpose robots with AI-powered minds, what will that do to human employment? Of course, business owners will buy them, but what about you? Could you resist owning your own Jeeves? How many fans of Downton Abbey and The Gilded Age will try to create a cybernetic service class? If you asked people in the 1950s if they’d ever want a computer in their homes, 99.9999% of them would have said no.

We can’t imagine black swans; that’s part of their definition. But most of the spectacular changes in society have come from technology that began its existence decades before transforming society. To imagine the 2030s, look at everything discovered in the last two decades.

This will sound odd coming from a science fiction fan, but I think science fiction is a poor weathervane for the future. I believe the best bellwethers for the next decade are always revealed in the current decade.

JWH

Ever Wonder Why Web Pages Keep Reloading on Your Phone? Or How Advertisers Know What You Are Thinking About Buying?

by James Wallace Harris, 3/20/26

I’ve practically stopped reading web pages on my phone because I can’t get to the end of an article without it reloading several times. That irritates the crap out of me. Yesterday, my friend Mike sent me a blog post that explains why web pages do this: “The 49MB Web Page.”

Shubham Bose realized while reading a page at the New York Times that it involved “422 network requests and 49 megabytes of data.” Bose is a software engineer and decided to deconstruct how and why. I highly recommend reading his explanation of what happens when you load a webpage. He also explains the hidden machinery that tracks our personal data.
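The idea behind Bose’s tally is easy to sketch: a page’s HTML is full of tags that each force the browser to make another network request. Here is a toy Python approximation of that counting, using the standard library’s HTML parser. The sample HTML and the list of resource-bearing tags are my own invented illustration, not Bose’s actual methodology or the New York Times page he measured:

```python
# Rough sketch: estimate how many extra network requests a page's HTML
# implies, by counting tags that reference external resources.
# This approximates what a browser DevTools Network tab reports.
from html.parser import HTMLParser

# Tag/attribute pairs that typically cause the browser to fetch something.
# (A real page adds many more: fonts, video, fetch/XHR calls from scripts.)
RESOURCE_ATTRS = {
    "script": "src",
    "img": "src",
    "link": "href",
    "iframe": "src",
}

class ResourceCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attr_name = RESOURCE_ATTRS.get(tag)
        if attr_name:
            for name, value in attrs:
                if name == attr_name and value:
                    self.resources.append(value)

# Invented sample HTML standing in for a real article page.
sample_html = """
<html><head>
  <link href="styles.css" rel="stylesheet">
  <script src="analytics.js"></script>
  <script src="ads.js"></script>
</head><body>
  <img src="hero.jpg">
  <iframe src="https://tracker.example/pixel"></iframe>
</body></html>
"""

counter = ResourceCounter()
counter.feed(sample_html)
print(f"{len(counter.resources)} extra requests implied by this HTML")
```

Five requests from a six-line toy page; scale that up with ad networks and trackers injecting their own scripts, and 422 requests stops sounding surprising.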

My friend Anne and I joke that we can talk in person about something we’re interested in, and the next time we get on our computers, the algorithm is sending us information about what we talked about privately. Bose does not explain that apparent bit of mind-reading by our AI overlords, but if we’re being observed in 422 ways each time we read a page, it can probably predict what we will think about soon.

Bose is an engineer interested in the user interface (UI) and user experience (UX), and recommends programming techniques that could make me like reading on my phone again.

Is that the real solution? Make our experience better so we don’t notice all the activity behind our reading?

Personally, I’m slowly returning to magazine reading. It’s hard to give up the convenience of the internet, but the UI and UX of print magazines are more enjoyable.

Magazines cost a lot of money, and people naturally prefer free. But that’s another philosophical issue over technology. The internet provides endless free content, but is it really free? There’s a reason why free comes with 422 network calls and 49MB of spying programs.

My friend Linda and I are reading If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. The book is about how we should worry that AI will wipe us out. The authors present many scenarios in which AIs could drive us to extinction. Most of them sound like science fiction, but there are mundane hints we should ponder.

This morning, I read “The Laid-off Scientists and Lawyers Training AI to Steal Their Careers” by Josh Dzieza about several companies that hire laid-off experts to train AIs to make fewer mistakes. Online systems entice desperate humans to work in digital sweatshops to train AIs to put other humans out of work. The same kind of monitoring used to sell us shit is used to track their work. The system traps them in a cycle of working for less and less money because the companies know these people are desperate to put food on the table and pay rent.

Is artificial intelligence doing this to us, or is it our own greed? At some point, we need to decide. There are many stories like this YouTube video, which suggest that AI can’t take our jobs.

It might be dangerous to get too comfortable with that idea. Because I also watched another video that shows how fast AIs are learning.

We have to decide, although our greed might not let us. One article and one video claim the solution is to develop a symbiotic relationship. But what happens when the AI gets smarter than us? If they don’t need us, will they want us around?

Many claim the internet brings out the worst in people, and it makes us overall dumber. There’s that old saying, “Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime.” Aren’t AI and the internet teaching us how not to fish?

JWH

Past-Present-Future As It Relates to Fiction-Nonfiction-Fantasy-SF

by James Wallace Harris, 12/12/25

I’ve been contemplating how robot minds could succeed at explaining reality if they didn’t suffer the errors and hallucinations that current AIs do. Current AI minds evolve from training on massive amounts of human-created words and images stored as digital files. Computer programs can’t tell fiction from fact based on our language. It’s no wonder they hallucinate. And like humans, they feel they must always have an answer, even if it’s wrong.

What if robots were trained on what they see with their own senses without using human language? Would robots develop their own language that described reality with greater accuracy than humans do with our languages?

Animals interact successfully with reality without language. But we doubt they are sentient in the way we are. Yet just how good is our awareness of reality if we constantly distort it with hallucinations and delusions? What if robots could develop a consciousness more accurately aware of reality?

Even though we feel like a being inside a body, peering out at reality with five senses, we know that’s not true. Our senses recreate a model of reality that we experience. We enhance that experience with language. However, language is the source of all our delusions and hallucinations.

The primary illusion we all experience is time. We think there is a past, present, and future. There is only now. We remember what was, and imagine what will be, but we do that with language. Unfortunately, language is limited, misleading, and confusing.

Take, for instance, events in the New Testament. Thousands, if not millions, of books have been written on specific events that happened over two thousand years ago. It’s endless speculation trying to describe what happened in a now that no longer exists. Even an event that occurred just one year ago is impossible to recreate in words. Yet, we never stop trying.

Compounding our delusions is fiction. We love fiction. Most of us spend hours a day consuming fiction—novels, television shows, movies, video games, plays, comics, songs, poetry, manga, fake news, lies, etc. Often, fiction is about recreating past events. Because we can’t accurately describe the past, we constantly create new hallucinations about it.

Then there is fantasy and science fiction. More and more, we love to create stories based on imagination and speculation. Fantasy exists outside of time and space, while science fiction attempts to imagine what the future might be like based on extrapolation and speculation.

My guess is that any robot (or being) that perceives reality without delusions will not use language and will have a very different concept of time. Is that even possible? We know animals succeed at this, but we doubt how conscious they are of reality.

Because robots will have senses that take in digital data, they could use playback to replace language. Instead of one robot communicating to another robot, “I saw a rabbit,” they could just transmit a recording of what they saw. Like humans, robots will have to model reality in their heads. Their umwelt will create a sensorium they interact with. Their perception of now, like ours, will be slightly delayed.

However, they could recreate the past by playing a recording that filled their sensorium with old data recordings. The conscious experience would be indistinguishable from using current data. And if they wanted, they could generate data that speculated on the future.

Evidently, all beings, biological or cybernetic, must experience reality as a recreation in their minds. In other words, no entity sees reality directly. We all interact with it in a recreation.

Looking at things this way makes me wonder about consuming fiction. We’re already two layers deep in artificial reality. The first is our sensorium/umwelt, which we feel is reality. And the second is language, which we think explains reality, but doesn’t. Fiction just adds another layer of delusion. Mimetic fiction tries to describe reality, but fantasy and science fiction add yet another layer of delusion.

Humans who practice Zen Buddhism try to tune out all the illusions. However, they talk about a higher state of consciousness called enlightenment. Is that just looking at reality without delusion, or is it a new way of perceiving reality?

Humans claim we are the crown of creation because our minds elevate us over the animals, but is intelligence or consciousness really superior?

We apparently exist in a reality that is constantly evolving. Will consciousness be something reality tries and then abandons? Will robots with artificial intelligence become the next stage in this evolutionary process?

If we’re a failure, why copy us? Shouldn’t we build robots that are superior to us? Right now, AI is created by modeling the processes of our brains. Maybe we should rethink that. But if we build robots that have a higher state of consciousness, couldn’t we also reengineer our brains and create Human Mind 2.0?

What would that involve? We’d have to overcome the limitations of language. We’d also have to find ways to eliminate delusions and hallucinations. Can we consciously choose to do those things?

JWH