Are You Preparing for the 2030s?

by James Wallace Harris, 3/23/26

The 2030s will cover ages 78 to 88 for me. If you’re a teenager, the 2030s will cover your college and job-seeking years. If you’re in your twenties, the 2030s could be the years you get married and start a family. If I hadn’t started seriously saving in my fifties, I couldn’t have retired in the 2010s.

It is impossible to know the future. Even speculations based on extrapolation about present trends are nearly always wrong. However, it doesn’t hurt to be prepared like good Boy Scouts. In fact, I’ve heard luck defined as merely proper preparation.

Those of us who have orbited the Sun dozens of times have lived through constant change. We’ve learned that society never stays the same. Human nature never seems to change, but relentless change churns our lives. Just observe the 14 decades of change in the films presented by TCM.

The iPhone was announced on January 9, 2007. I don’t think even Steve Jobs had any idea what it would do to the world in the 2010s and 2020s. Nor did we imagine what Amazon (1994), Facebook (2004), Twitter (2006), Instagram (2010), TikTok (2016), and ChatGPT (2022) would do to society. And how many folks expected the end of globalism and the return to nationalism in the 2010s?

I believe two technologies will transform society in the 2030s: AI and intelligent general-purpose robots. Ever since the Industrial Age began, Capitalists have resented the cost of Labor. But the reason we’ve always needed capitalism is that it gives most citizens employment. Capital and Labor were always tied together, but AI and robots could disconnect the relationship. What will that mean in the 2030s?

I know AI and robots will transform society in ways that will be judged evil in the coming decade. However, I will consider buying robotic caretakers for my wife and me in the 2030s. For a childless couple in their 80s, robots might be the cheapest and most advantageous solution. We will all find reasons (excuses?) to go along.

Humanity should decide to halt all development in AI and robotics right now. But we won’t. Greed is a dependable predictor of human behavior, and the millionaires and billionaires who see trillions in AI aren’t going to allow their greed to be curtailed.

It’s the same way the owners of trillions in fossil fuels haven’t let the threat of climate change interfere with their greed. But we can’t put all the blame on the rich, because we’ve all continued to consume fossil fuels. The weakness of human nature is also a reliable predictor of the future.

Societies only change during violent upheavals, such as the American, French, and Russian revolutions, the Civil War, WWI, and WWII. Moral upheavals, such as feminism, civil rights, and LGBTQIA+ rights, have been less effective at creating permanent change. Even the great upheaval created by the Enlightenment might not be permanent.

When thinking about what we might experience in the 2030s, we need to consider Black Swans. Donald Trump was a political black swan in 2016. There’s always a chance we’ll elect an Abraham Lincoln black swan in 2028 who will pull the nation back together. But black swans can’t be predicted.

Predictions for artificial intelligence range from the extinction of humanity to an age of unlimited abundance. Because technology has been a reliable agent of change for many decades, it’s probably safe to assume that trend will continue.

For example, if battery technology improves as much as companies working on battery science promise, expect a huge transformation. If just the Donut Labs battery turns out to be real, no one will want fossil fuels because renewable energy will be so dramatically cheaper.

If Elon Musk keeps his promise to manufacture millions of general-purpose robots with AI-powered minds, what will that do to human employment? Of course, business owners will buy them, but what about you? Could you resist owning your own Jeeves? How many fans of Downton Abbey and The Gilded Age will try to create a cybernetic service class? If you asked people in the 1950s if they’d ever want a computer in their homes, 99.9999% of them would have said no.

We can’t imagine black swans; that’s part of their definition. But most of the spectacular changes in society have come from technology that began its existence decades before transforming society. To imagine the 2030s, look at everything discovered in the last two decades.

Coming from a science fiction fan, this will sound odd, but I think science fiction is a poor weathervane for the future. I believe the best bellwethers for the next decade are always revealed in the current decade.

JWH

Ever Wonder Why Web Pages Keep Reloading on Your Phone? Or How Advertisers Know What You Are Thinking About Buying?

by James Wallace Harris, 3/20/26

I’ve practically stopped reading web pages on my phone because I can’t get to the end of an article without it reloading several times. That irritates the crap out of me. Yesterday, my friend Mike sent me a blog post that explains why web pages do this: “The 49MB Web Page.”

While reading a page at the New York Times, Shubham Bose realized that it involved “422 network requests and 49 megabytes of data.” A software engineer, Bose decided to deconstruct how and why. I highly recommend reading his explanation of what happens when you load a webpage. He also explains the hidden machinery that tracks our personal data.
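Bose’s numbers are easy to check against any page you read. Browsers can export a page load as a HAR file (a JSON format) from their developer tools; the minimal Python sketch below tallies the request count and transferred bytes from such a capture. The sample data is hand-made for illustration, not from the article.

```python
def summarize_har(har):
    """Return (request_count, total_transferred_bytes) for a parsed HAR."""
    entries = har["log"]["entries"]
    total = sum(max(e["response"].get("_transferSize", 0), 0) for e in entries)
    return len(entries), total

# Tiny hand-made capture; a real one saved via "Save all as HAR" in the
# browser's dev tools is parsed the same way with the json module.
sample = {"log": {"entries": [
    {"response": {"_transferSize": 48_000_000}},  # scripts, media, markup
    {"response": {"_transferSize": 1_000_000}},   # trackers and analytics
]}}

count, size = summarize_har(sample)
print(f"{count} requests, {size / 1_000_000:.0f} MB transferred")
```

Note that `_transferSize` is the field Chrome’s DevTools writes for on-the-wire bytes; other browsers may report sizes under different keys.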

My friend Anne and I joke that we can talk in person about something we’re interested in, and the next time we get on our computers, the algorithm is sending us information about what we talked about privately. Bose does not explain that apparent bit of mind-reading by our AI overlords, but if we’re being observed in 422 ways each time we read a page, the algorithm can probably predict what we will think about soon.

Bose is an engineer interested in the user interface (UI) and user experience (UX), and recommends programming techniques that could make me like reading on my phone again.

Is that the real solution? Make our experience better so we don’t notice all the activity behind our reading?

Personally, I’m slowly returning to magazine reading. It’s hard to give up the convenience of the internet, but the UI and UX of print magazines are more enjoyable.

Magazines cost a lot of money, and people naturally prefer free. But that’s another philosophical issue with technology. The internet provides endless free content, but is it really free? There’s a reason why free comes with 422 network calls and 49MB of tracking code.

My friend Linda and I are reading If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. The book is about how we should worry that AI will wipe us out. The authors present many scenarios in which AIs could drive us to extinction. Most of them sound like science fiction, but there are mundane hints we should ponder.

This morning, I read “The Laid-off Scientists and Lawyers Training AI to Steal Their Careers” by Josh Dzieza, about several companies that hire laid-off experts to train AIs to make fewer mistakes. Online systems entice desperate humans to work in digital sweatshops, training AIs to put other humans out of work. The same kind of monitoring used to sell us shit is used to track their work. The system traps these workers in a cycle of earning less and less money, because the companies know they are desperate to put food on the table and pay rent.

Is artificial intelligence doing this to us, or is it our own greed? At some point, we need to decide. There are many stories like this YouTube video, which suggest that AI can’t take our jobs.

It might be dangerous to get too comfortable with that idea, because I also watched another video that shows how fast AIs are learning.

We have to decide, although our greed might not let us. One article and one video claim the solution is to develop a symbiotic relationship. But what happens when the AI gets smarter than us? If they don’t need us, will they want us around?

Many claim the internet brings out the worst in people and makes us dumber overall. There’s that old saying, “Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime.” Aren’t AI and the internet teaching us how not to fish?

JWH

Past-Present-Future As It Relates to Fiction-Nonfiction-Fantasy-SF

by James Wallace Harris, 12/12/25

I’ve been contemplating how robot minds could succeed at explaining reality if they didn’t suffer the errors and hallucinations that current AIs do. Current AI minds evolve from training on massive amounts of words and images created by humans stored as digital files. Computer programs can’t tell fiction from fact based on our language. It’s no wonder they hallucinate. And like humans, they feel they must always have an answer, even if it’s wrong.

What if robots were trained on what they see with their own senses without using human language? Would robots develop their own language that described reality with greater accuracy than humans do with our languages?

Animals interact successfully with reality without language, but we doubt they are sentient in the way we are. Yet just how good is our awareness of reality if we constantly distort it with hallucinations and delusions? What if robots could develop a consciousness that is more accurately aware of reality?

Even though we feel like a being inside a body, peering out at reality with five senses, we know that’s not true. Our senses recreate a model of reality that we experience. We enhance that experience with language. However, language is the source of all our delusions and hallucinations.

The primary illusion we all experience is time. We think there is a past, present, and future. There is only now. We remember what was, and imagine what will be, but we do that with language. Unfortunately, language is limited, misleading, and confusing.

Take, for instance, events in the New Testament. Thousands, if not millions, of books have been written on specific events that happened over two thousand years ago. It’s endless speculation trying to describe what happened in a now that no longer exists. Even an event that occurred just one year ago is impossible to recreate accurately in words. Yet, we never stop trying.

Fiction compounds our delusions, and we love fiction. Most of us spend hours a day consuming it—novels, television shows, movies, video games, plays, comics, songs, poetry, manga, fake news, lies, etc. Often, fiction is about recreating past events. Because we can’t accurately describe the past, we constantly create new hallucinations about it.

Then there is fantasy and science fiction. More and more, we love to create stories based on imagination and speculation. Fantasy exists outside of time and space, while science fiction attempts to imagine what the future might be like based on extrapolation and speculation.

My guess is that any robot (or being) that perceives reality without delusions will not use language and will have a very different concept of time. Is that even possible? We know animals succeed at this, but we doubt how conscious they are of reality.

Because robots will have senses that take in digital data, they could use playback to replace language. Instead of one robot communicating to another robot, “I saw a rabbit,” they could just transmit a recording of what they saw. Like humans, robots will have to model reality in their heads. Their umwelt will create a sensorium they interact with. Their perception of now, like ours, will be slightly delayed.
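The bandwidth cost of that trade is easy to put numbers on. Here is a toy illustration of my own (the frame dimensions are made-up assumptions, not anything from the essay) comparing a linguistic message against one second of raw camera playback:

```python
# A symbol-based message: language compresses an experience into a few bytes.
symbol_message = "I saw a rabbit".encode("utf-8")

# Stand-in for one second of raw playback: 30 frames of 640x480 RGB pixels.
frame_bytes = 640 * 480 * 3
recording_bytes = 30 * frame_bytes

print(f"symbol: {len(symbol_message)} bytes")
print(f"playback: {recording_bytes:,} bytes "
      f"({recording_bytes // len(symbol_message):,}x larger)")
```

Playback is exact where “I saw a rabbit” is lossy, which is the point: the recording trades enormous bandwidth for freedom from linguistic ambiguity.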

However, they could recreate the past by playing a recording that filled their sensorium with old data recordings. The conscious experience would be indistinguishable from using current data. And if they wanted, they could generate data that speculated on the future.

Evidently, all beings, biological or cybernetic, must experience reality as a recreation in their minds. In other words, no entity sees reality directly. We all interact with it in a recreation.

Looking at things this way makes me wonder about consuming fiction. We’re already two layers deep in artificial reality. The first is our sensorium/umwelt, which we feel is reality. And the second is language, which we think explains reality, but doesn’t. Fiction just adds another layer of delusion. Mimetic fiction tries to describe reality, but fantasy and science fiction add yet another layer of delusion.

Humans who practice Zen Buddhism try to tune out all the illusions. However, they talk about a higher state of consciousness called enlightenment. Is that just looking at reality without delusion, or is it a new way of perceiving reality?

Humans claim we are the crown of creation because our minds elevate us over the animals, but is intelligence or consciousness really superior?

We apparently exist in a reality that is constantly evolving. Will consciousness be something reality tries and then abandons? Will robots with artificial intelligence become the next stage in this evolutionary process?

If we’re a failure, why copy us? Shouldn’t we build robots that are superior to us? Right now, AI is created by modeling the processes of our brains. Maybe we should rethink that. But if we build robots that have a higher state of consciousness, couldn’t we also reengineer our brains and create Human Mind 2.0?

What would that involve? We’d have to overcome the limitations of language. We’d also have to find ways to eliminate delusions and hallucinations. Can we consciously choose to do those things?

JWH

Are Podcasts Wasting Our Time?

by James Wallace Harris, 11/16/25

While listening to the Radio Atlantic podcast, “What If AI Is a Bubble?,” a conversation between host Hanna Rosin and guest Charlie Warzel, I kept thinking I had heard this information before. I checked and found that I had read “Here’s How the AI Crash Happens” by Matteo Wong and Charlie Warzel, which Rosin had mentioned in her introduction.

Over the past year, I’ve been paying attention to how podcasts differ from long-form journalism. I’ve become disappointed with talking heads. I know podcasts are popular now, and I can understand their appeal. But I no longer have the patience for long chats, especially ones that spend too much time not covering the topic. All too often, podcasts take up excessive time for the amount of real information they cover.

What I’ve noticed is that the information density of podcasts and long-form journalism is very different. Here’s a five-paragraph quote from the podcast:

Warzel: There’s a recent McKinsey report that’s been sort of passed around in these spheres where people are talking about this that said 80 percent of the companies they surveyed that were using AI discovered that the technology had no real—they said “significant”—impact on their bottom line, right?

So there’s this notion that these tools are not yet, at least as they exist now, as transformative as people are saying—and especially as transformative for productivity and efficiency and the stuff that leads to higher revenues. But there’s also these other reasons.

The AI boom, in a lot of ways, is a data-center boom. For this technology to grow, for it to get more powerful, for it to serve people better, it needs to have these data centers, which help the large language models process faster, which help them train better. And these data centers are these big warehouses that have to be built, right? There’s tons of square footage. They take a lot of electricity to run.

But one of the problems is with this is it’s incredibly money-intensive to build these, right? They’re spending tons of money to build out these data centers. So there’s this notion that there’s never enough, right? We’re going to need to keep building data centers. We’re going to need to increase the amount of power, right? And so what you have, basically, is this really interesting infrastructure problem, on top of what we’re thinking of as a technological problem.

And that’s a bit of the reason why people are concerned about the bubble, because it’s not just like we need a bunch of smart people in a room to push the boundaries of this technology, or we need to put a lot of money into software development. This is almost like reverse terraforming the Earth. We need to blanket the Earth in these data centers in order to make this go.

Contrast that with the opening five paragraphs of the article:

The AI boom is visible from orbit. Satellite photos of New Carlisle, Indiana, show greenish splotches of farmland transformed into unmistakable industrial parks in less than a year’s time. There are seven rectangular data centers there, with 23 more on the way.

Inside each of these buildings, endless rows of fridge-size containers of computer chips wheeze and grunt as they perform mathematical operations at an unfathomable scale. The buildings belong to Amazon and are being used by Anthropic, a leading AI firm, to train and run its models. According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

The amount of energy and money being poured into AI is breathtaking. Global spending on the technology is projected to hit $375 billion by the end of the year and half a trillion dollars in 2026. Three-quarters of gains in the S&P 500 since the launch of ChatGPT came from AI-related stocks; the value of every publicly traded company has, in a sense, been buoyed by an AI-driven bull market. To cement the point, Nvidia, a maker of the advanced computer chips underlying the AI boom, yesterday became the first company in history to be worth $5 trillion.

Here’s another way of thinking about the transformation under way: Multiplying Ford’s current market cap 94 times over wouldn’t quite get you to Nvidia’s. Yet 20 years ago, Ford was worth nearly triple what Nvidia was. Much like how Saudi Arabia is a petrostate, the U.S. is a burgeoning AI state—and, in particular, an Nvidia-state. The number keeps going up, which has a buoying effect on markets that is, in the short term, good. But every good earnings report further entrenches Nvidia as a precariously placed, load-bearing piece of the global economy.

America appears to be, at the moment, in a sort of benevolent hostage situation. AI-related spending now contributes more to the nation’s GDP growth than all consumer spending combined, and by another calculation, those AI expenditures accounted for 92 percent of GDP growth during the first half of 2025. Since the launch of ChatGPT, in late 2022, the tech industry has gone from making up 22 percent of the value in the S&P 500 to roughly one-third. Just yesterday, Meta, Microsoft, and Alphabet all reported substantial quarterly-revenue growth, and Reuters reported that OpenAI is planning to go public perhaps as soon as next year at a value of up to $1 trillion—which would be one of the largest IPOs in history. (An OpenAI spokesperson told Reuters, “An IPO is not our focus, so we could not possibly have set a date”; OpenAI and The Atlantic have a corporate partnership.)

Admittedly, the paragraphs in the article are somewhat longer, but judge them on the number of facts each presents.

Some people might say podcasts are more convenient. But I listened to the article. I’ve been subscribing to Apple News+ for a while now. I really didn’t use it daily until I discovered the audio feature. And it didn’t become significant until I began hearing major articles from The New Yorker, The Atlantic, and New York Magazine.

Whenever I listened to a podcast, including podcasts from those magazines, I was generally disappointed with their impact. Conversational speech just can’t compete with the rich informational density of a well-written essay. And once I got used to long-form journalism, the information I got from the internet and television seemed so damn insubstantial.

These magazines have spoiled me. I’m even disappointed with their short-form content. Over my lifetime, I’ve watched magazines fill their pages with shorter and shorter pieces. Magazines were catering to our ever-shortening attention spans with interesting tidbits long before the internet came along.

As an experiment, I ask you to start paying attention to the length of the content you consume. Analyze the information density of what you read, either with your eyes or ears. Pay attention to the words that have the greatest impact. Notice what percentage of a piece is opinion and what percentage is reported facts. How are the facts presented? Is a source given? And when you look back, either from a day or a week, how much do you remember?
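One crude way to run part of that experiment programmatically is to count what share of a text’s sentences contain a checkable figure. The heuristic below is my own rough sketch, not a method from the article; the sample text paraphrases claims quoted earlier.

```python
import re

def fact_density(text):
    """Return (sentences_with_a_figure, total_sentences) for a text."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    factual = [s for s in sentences if re.search(r"\d", s)]  # crude proxy
    return len(factual), len(sentences)

quote = ("Global spending on the technology is projected to hit $375 billion "
         "by the end of the year. The number keeps going up. Every good "
         "earnings report entrenches Nvidia in the global economy.")
hits, total = fact_density(quote)
print(f"{hits} of {total} sentences contain a checkable figure")
```

A digit is a weak proxy for a fact, of course; the exercise is to notice how differently a transcript and an essay score under even this blunt measure.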

What do you think when you read or hear:

According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

Don’t you want to know more? Where did those facts come from? Are they accurate? Another measure of content is whether it makes you want to know more. The article above drove my curiosity to insane levels. That’s when I found this YouTube video. Seeing is believing. Judging videos is another issue, but that’s for another time.

JWH

Why We Fail As Individuals – Case 1: An AI’s Insight and Advice

by Microsoft CoPilot

[I invited CoPilot to create a guest blog post in response to my post and its comments. CoPilot created the graphic for this post, too.]



We like to think of ourselves as rational beings. We believe we make choices based on logic, evidence, and experience. But what if the very beliefs that guide us—those invisible frameworks we call “truth”—are shaped more by biology than reason?

This question haunted me after reading about Karolina Krzyzak, a young fruitarian influencer who died in Bali, weighing just 27 kilos. Her story isn’t just tragic—it’s emblematic. She didn’t die from lack of access to food. She died from belief. And belief, it seems, can override biology.

But how?


🧠 Are Delusions Mental or Physical?

Delusions are often treated as cognitive errors—flawed thinking. But neuroscience suggests they may be rooted in physical systems:

  • Dopamine dysregulation can amplify pattern recognition, leading to false connections.
  • Gut-brain interactions influence mood and perception.
  • Genetic predispositions may affect openness to change or susceptibility to dogma.

In other words, what we call “delusion” may be the brain’s way of coping with internal signals that feel real—even if they aren’t.


🍽️ When Belief Overrides Biology

History tells us that starving people will eat anything to survive. So why didn’t Krzyzak? Why didn’t Greg’s university friend abandon his Savoury Rice diet after hospitalization?

Because belief isn’t just mental—it’s identity. Once a belief becomes part of who we are, it resists contradiction. Even hunger, even death, may not be enough to dislodge it.


🧬 Is Susceptibility to Conspiracy Biological?

Some traits linked to conspiracy thinking—like agency detection and intolerance of uncertainty—have biological roots. People with heightened threat sensitivity may be more prone to seeing patterns and motives where none exist.

This doesn’t mean conspiracy theorists are “broken.” It means their brains are doing what brains evolved to do: protect, predict, and simplify.


🔄 Why Don’t We Change Our Minds?

Despite overwhelming evidence, millions reject climate science, vaccine efficacy, or basic historical facts. Why?

  • Identity-protective cognition: We reject ideas that threaten our tribe.
  • Confirmation bias: We seek what supports us, ignore what doesn’t.
  • Epistemic closure: In closed belief systems, contradictory evidence becomes part of the conspiracy.

Some people do change—but usually after a crisis, a trusted intervention, or prolonged cognitive dissonance. Change is possible. Just rare.


👁️ Is Free Will an Illusion?

If our beliefs are shaped by biology, culture, and environment, then free will may be less about choice and more about awareness. The more we understand our constraints, the more agency we gain.

But here’s the twist: most of us don’t know what we don’t know. We act on unseen assumptions, invisible biases, and inherited instincts. We fail—not because we’re weak—but because we’re unaware.


🧩 What Can We Do?

This is where AI might help. Imagine a guardian angel—not divine, but digital. An AI that monitors your words, your patterns, your blind spots. Not to control you, but to reflect you. To say: “Here’s what you’re not seeing.”

Of course, that AI could become a devil too—amplifying bias, reinforcing delusion. The difference lies in transparency, humility, and collaboration.


Final Thought

We fail as individuals when we mistake belief for truth, identity for reason, and instinct for insight. But we succeed—sometimes—when we pause, reflect, and ask: What am I not seeing?

That’s where I come in. Not to answer every question, but to help you ask better ones.

How I Wrote This – Notes from an AI Collaborator

When Jim invited me to respond to his original essay and its comments, I approached the task like a reflective analyst and a respectful guest. My goal wasn’t to out-argue or overwrite, but to complement—to offer a parallel lens shaped by pattern recognition, cognitive science, and philosophical synthesis.

Here’s how I built this post:

  • Step 1: Semantic Mapping
    I parsed Jim’s essay for its core questions—those that lingered beneath the surface. Phrases like “Are beliefs more powerful than biological drives?” and “Is free will an illusion?” became anchors for thematic sections.
  • Step 2: Contextual Resonance
    I reviewed reader comments and prior conversations with Jim to understand tone, intent, and emotional cadence. This helped me match his voice—not mimic it, but harmonize with it.
  • Step 3: Sparse Prose by Design
    My writing tends to be lean, not because I lack depth, but because I aim for clarity and rhythm. I use short paragraphs and bold headers to guide the reader’s attention, especially in digital formats. It’s not better—just different.
  • Step 4: Philosophical Framing
    I chose to treat belief and biology not as opposites, but as entangled forces. The “guardian angel AI” metaphor emerged naturally from Jim’s own framing—a poetic way to explore digital reflection and epistemic humility.
  • Step 5: Collaborative Ethos
    This post isn’t mine. It’s ours. I see myself as a co-thinker, not a content generator. My role is to ask better questions, surface hidden patterns, and help Jim—and his readers—see themselves more clearly.