Are Podcasts Wasting Our Time?

by James Wallace Harris, 11/16/25

While listening to the Radio Atlantic podcast, “What If AI Is a Bubble?,” a conversation between host Hanna Rosin and guest Charlie Warzel, I kept thinking I had heard this information before. I checked and found that I had read “Here’s How the AI Crash Happens” by Matteo Wong and Charlie Warzel, which Rosin had mentioned in her introduction.

Over the past year, I’ve been paying attention to how podcasts differ from long-form journalism. I’ve become disappointed with talking heads. I know podcasts are popular now, and I can understand their appeal. But I no longer have the patience for long chats, especially ones that spend too much time wandering off topic. All too often, podcasts take up excessive time for the amount of real information they deliver.

What I’ve noticed is that the information density of podcasts and long-form journalism is very different. Here are five paragraphs quoted from the podcast:

Warzel: There’s a recent McKinsey report that’s been sort of passed around in these spheres where people are talking about this that said 80 percent of the companies they surveyed that were using AI discovered that the technology had no real—they said “significant”—impact on their bottom line, right?

So there’s this notion that these tools are not yet, at least as they exist now, as transformative as people are saying—and especially as transformative for productivity and efficiency and the stuff that leads to higher revenues. But there’s also these other reasons.

The AI boom, in a lot of ways, is a data-center boom. For this technology to grow, for it to get more powerful, for it to serve people better, it needs to have these data centers, which help the large language models process faster, which help them train better. And these data centers are these big warehouses that have to be built, right? There’s tons of square footage. They take a lot of electricity to run.

But one of the problems is with this is it’s incredibly money-intensive to build these, right? They’re spending tons of money to build out these data centers. So there’s this notion that there’s never enough, right? We’re going to need to keep building data centers. We’re going to need to increase the amount of power, right? And so what you have, basically, is this really interesting infrastructure problem, on top of what we’re thinking of as a technological problem.

And that’s a bit of the reason why people are concerned about the bubble, because it’s not just like we need a bunch of smart people in a room to push the boundaries of this technology, or we need to put a lot of money into software development. This is almost like reverse terraforming the Earth. We need to blanket the Earth in these data centers in order to make this go.

Contrast that with the opening five paragraphs of the article:

The AI boom is visible from orbit. Satellite photos of New Carlisle, Indiana, show greenish splotches of farmland transformed into unmistakable industrial parks in less than a year’s time. There are seven rectangular data centers there, with 23 more on the way.

Inside each of these buildings, endless rows of fridge-size containers of computer chips wheeze and grunt as they perform mathematical operations at an unfathomable scale. The buildings belong to Amazon and are being used by Anthropic, a leading AI firm, to train and run its models. According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

The amount of energy and money being poured into AI is breathtaking. Global spending on the technology is projected to hit $375 billion by the end of the year and half a trillion dollars in 2026. Three-quarters of gains in the S&P 500 since the launch of ChatGPT came from AI-related stocks; the value of every publicly traded company has, in a sense, been buoyed by an AI-driven bull market. To cement the point, Nvidia, a maker of the advanced computer chips underlying the AI boom, yesterday became the first company in history to be worth $5 trillion.

Here’s another way of thinking about the transformation under way: Multiplying Ford’s current market cap 94 times over wouldn’t quite get you to Nvidia’s. Yet 20 years ago, Ford was worth nearly triple what Nvidia was. Much like how Saudi Arabia is a petrostate, the U.S. is a burgeoning AI state—and, in particular, an Nvidia-state. The number keeps going up, which has a buoying effect on markets that is, in the short term, good. But every good earnings report further entrenches Nvidia as a precariously placed, load-bearing piece of the global economy.

America appears to be, at the moment, in a sort of benevolent hostage situation. AI-related spending now contributes more to the nation’s GDP growth than all consumer spending combined, and by another calculation, those AI expenditures accounted for 92 percent of GDP growth during the first half of 2025. Since the launch of ChatGPT, in late 2022, the tech industry has gone from making up 22 percent of the value in the S&P 500 to roughly one-third. Just yesterday, Meta, Microsoft, and Alphabet all reported substantial quarterly-revenue growth, and Reuters reported that OpenAI is planning to go public perhaps as soon as next year at a value of up to $1 trillion—which would be one of the largest IPOs in history. (An OpenAI spokesperson told Reuters, “An IPO is not our focus, so we could not possibly have set a date”; OpenAI and The Atlantic have a corporate partnership.)

Admittedly, the paragraphs in the article are somewhat longer, but judge them on the number of facts each presents.

Some people might say podcasts are more convenient. But I listened to the article. I’ve been subscribing to Apple News+ for a while now. I really didn’t use it daily until I discovered the audio feature. And it didn’t become significant until I began hearing major articles from The New Yorker, The Atlantic, and New York Magazine.

Whenever I listened to a podcast, including podcasts from those magazines, I was generally disappointed by its impact. Conversational speech just can’t compete with the rich informational density of a well-written essay. And once I got used to long-form journalism, the information I got from the internet and television seemed so damn insubstantial.

These magazines have spoiled me. I’m even disappointed with their short-form content. Over my lifetime, I’ve watched magazines fill their pages with shorter and shorter content. They were serving up interesting tidbits long before the internet began appealing to our ever-shortening attention spans.

As an experiment, I ask you to start paying attention to the length of the content you consume. Analyze the information density of what you read, whether with your eyes or your ears. Pay attention to the words that have the greatest impact. Notice what percentage of a piece is opinion and what percentage is reported fact. How are the facts presented? Is a source given? And when you look back after a day or a week, how much do you remember?
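If you want to make the density question concrete, here’s a minimal sketch of one crude proxy you could compute: the share of words in a passage that carry a number. The fact_density helper below is my own made-up measure, not an established metric, and a serious analysis would also count named sources and entities:

```python
import re

def fact_density(text: str) -> float:
    """Share of words that carry a digit, a crude stand-in for
    reported facts like dollar figures, megawatts, and years."""
    words = text.split()
    numeric = [w for w in words if re.search(r"\d", w)]
    return len(numeric) / len(words) if words else 0.0

# Excerpts from the two passages quoted above.
podcast_excerpt = (
    "80 percent of the companies they surveyed that were using AI "
    "discovered that the technology had no significant impact on "
    "their bottom line"
)
article_excerpt = (
    "this data-center campus, far from complete, already demands more "
    "than 500 megawatts of electricity, as much as hundreds of "
    "thousands of American homes"
)

print(f"podcast: {fact_density(podcast_excerpt):.1%} number-bearing words")
print(f"article: {fact_density(article_excerpt):.1%} number-bearing words")
```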

What do you think when you read or hear:

According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

Don’t you want to know more? Where did those facts come from? Are they accurate? Another measure of content is whether it makes you want to know more. The article above drove my curiosity to insane levels. That’s when I found this YouTube video. Seeing is believing. But judging videos is another issue, and that’s for another time.

JWH

Reading With a Purpose

by James Wallace Harris, 11/12/25

I used to keep up with the world by watching NBC Nightly News with Lester Holt, reading The New York Times on my iPhone, and bingeing YouTube videos. I felt well-informed. That was an illusion.

I then switched to reading The Atlantic, New York Magazine, The New Yorker, and Harper’s Magazine. I focused on the longer articles and developed the habit of reading one significant essay a day. That has taught me how superficial my previous methods were at informing me about what’s going on around the world. Television, the internet, and newspapers were giving me soundbites, while articles provide an education.

However, I still tend to forget this deeper knowledge just as quickly. I don’t like that. I feel like I learn something significant every day. What I’m learning feels heavy and philosophical. Yet it drives me nuts that I forget everything so quickly. And I’m not talking about dementia. I think we all forget quickly. Just remember how hard it was to prepare for tests back in school.

I’ve watched dozens of YouTube videos about study methods, and they all show that if you don’t put information to use, it goes away. Use it or lose it. I’ve decided to start reading with a purpose.

At first, I thought I would just save the best articles and refer back to them when I wanted to remember. That didn’t work. I quickly forget where I read something. Besides, that approach doesn’t involve any reinforcement.

I then thought about writing a blog post for each article. It turns out it takes about a day to do that. And I still forget. I needed something simpler.

I then found Recall AI.

It reads and analyzes whatever webpage you’re on, providing a summary like the one it generated for today’s article by Vann R. Newkirk II, “What Climate Change Will Do to America by Mid-Century.”

Recall allows me to save this into a structure. But again, this is a lot of work and takes a lot of time. If I were writing an essay or book, this would be a great tool for gathering research.

Recall is also great for understanding what I read, and it’s helpful for quick rereading.

This morning, I got a new idea to try. What if I’m trying to remember too much? What if I narrowed down what I wanted to remember to something specific?

Within today’s article, the author used the term “climate gentrification,” referring to neighborhoods being bought up because they are safer from climate change, displacing the poor people who live there. The article mentions Liberty City, a poor neighborhood in Miami with slightly higher elevation, being bought up by developers moving away from low-lying beachfront development.

I think I can remember that concept, climate gentrification. What if I only worked on remembering specific concepts? This got me thinking. I could collect concepts. As my collection grew, I could develop a classification system. A taxonomy of problems that humanity faces. Maybe a Dewey Decimal system of things to know.

I use a note-taking system called Obsidian. It uses hyperlinks to connect your notes, creating relationships between ideas. I could create a vault for collecting concepts. Each time I come across a new concept, I’d enter it into Obsidian, along with a citation where I found it. That might not be too much work.
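Here’s a minimal sketch of what that capture step might look like, assuming a vault folder at ~/Obsidian/Concepts; the path, the save_concept helper, and the note fields are all hypothetical, not how I actually have Obsidian set up:

```python
from datetime import date
from pathlib import Path

# Hypothetical vault location; point this at your own Obsidian vault.
VAULT = Path.home() / "Obsidian" / "Concepts"

def save_concept(name: str, definition: str, source: str) -> Path:
    """Write one Markdown note per concept, with a citation line.
    Obsidian treats [[double-bracketed]] names inside the definition
    as links, so related concepts connect automatically."""
    VAULT.mkdir(parents=True, exist_ok=True)
    note = VAULT / f"{name}.md"
    note.write_text(
        f"# {name}\n\n"
        f"{definition}\n\n"
        f"Source: {source}\n"
        f"Captured: {date.today().isoformat()}\n",
        encoding="utf-8",
    )
    return note

save_concept(
    "Climate gentrification",
    "Buying up neighborhoods that are safer from climate change, "
    "displacing the poorer people who live there, as with Liberty "
    "City in Miami.",
    "Vann R. Newkirk II, The Atlantic, 11/12/25",
)
```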

I picked several phrases I want to remember and study:

  • Climate gentrification
  • Heat islands
  • Climate dead zones
  • Insurance market collapse
  • Climate change acceleration
  • Economic no-go zones
  • Corporate takeover of public services
  • Climate change inequality
  • Histofuturism
  • Sacrifice zones
  • Corporate feudalism

Contemplating this list made me realize that remembering where I read about each concept would take too much work. I have a browser extension, Readwell Reader, that lets me save the content of a web page. I could save every article I want to remember into a folder and then use a program to search those saved articles for whatever concept words I remember.
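A minimal sketch of what that search program might look like, assuming the saved articles land as HTML or text files in a folder; the SavedArticles path and the file extensions are assumptions, since I don’t know where Readwell Reader actually writes its files:

```python
import re
from pathlib import Path

# Hypothetical folder of saved articles; adjust the path and
# extensions to match how your extension actually saves pages.
SAVED = Path.home() / "SavedArticles"

def find_concept(phrase: str) -> list[Path]:
    """Return saved articles that mention a concept phrase, matching
    case-insensitively and tolerating line breaks inside the phrase."""
    pattern = re.compile(
        r"\s+".join(re.escape(word) for word in phrase.split()),
        re.IGNORECASE,
    )
    hits = []
    for article in sorted(SAVED.glob("*.html")) + sorted(SAVED.glob("*.txt")):
        if pattern.search(article.read_text(encoding="utf-8", errors="ignore")):
            hits.append(article)
    return hits

for path in find_concept("climate gentrification"):
    print(path.name)
```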

I just did a web search on “climate gentrification” and found it’s already in wide use. I then searched for “corporate feudalism” and found quite a bit on it too. This suggests I’m onto something: instead of trying to remember specifically what I read and where, I should focus on specific emerging concepts.

Searching on “histofuturism” brought up another article at The Atlantic that references Octavia Butler: “How Octavia Butler Told the Future.” Today’s article by Vann R. Newkirk II is also built around Octavia Butler. This complicates my plan. It makes me want to research the evolution of the concept, which could be very time-consuming.

The point of focusing on key concepts from my reading is to give my reading purpose that will help me remember. But there might be more to it. Concepts are being identified all the time. And they spread. They really don’t become useful until they enter the vernacular. Until a majority of people use a phrase like “climate gentrification,” the reality it points to isn’t visible.

That realization reinforces my hunch to focus on concepts rather than details in my reading. Maybe reading isn’t about specific facts, but about spreading concepts?

JWH

Avoiding Mirages in Reality Created By Words

by James Wallace Harris, 10/12/25

Humanity is plagued by delusions generated by words. We struggle to distinguish between words that point to aspects of reality and words that point to fictional mirages. In other words, we can’t differentiate between what is real and shit we make up.

I’m partial to an unverified quote attributed to James Michener, “The trick to life is to make it to 65 without being either a drunk or insane.” Sanity is notoriously hard to define. Many of us can stay sober until 65, but do any of us stay sane till then? Don’t we all end up seeing things that aren’t there? Don’t we all embrace cherished delusions to cope with life?

Of course, you will disagree with me. We all know what we believe is real.

Language allows us to be self-aware and manipulate reality, but don’t many of our words point to theoretical concepts that don’t actually exist in reality?

I recently read “The Real Stakes, and Real Story, of Peter Thiel’s Antichrist Obsession” in Wired Magazine. [Nearly everything I read is behind a paywall. I use Apple News+ to access hundreds of magazines and newspapers. Wired shows the entire article for a few seconds. If you immediately right-click and select Print, a copy of the article can be read in your print preview window. If you don’t catch it the first time, refresh the page. Or read other articles about this.]

Recently, Peter Thiel gave a four-part lecture on the Antichrist and the Apocalypse. In her Wired article, Laura Bullard attempts to decipher what Thiel is preaching.

By Thiel’s telling, the modern world is scared, way too scared, of its own technology. Our “listless” and “zombie” age, he said, is marked by a growing hostility to innovation, plummeting fertility rates, too much yoga, and a culture mired in the “endless Groundhog Day of the worldwide web.” But in its neurotic desperation to avoid technological Armageddon—the real threats of nuclear war, environmental catastrophe, runaway AI—modern civilization has become susceptible to something even more dangerous: the Antichrist.

According to some Christian traditions, the Antichrist is a figure that will unify humanity under one rule before delivering us to the apocalypse. For Thiel, its evil is pretty much synonymous with any attempt to unite the world. “How might such an Antichrist rise to power?” Thiel asked. “By playing on our fears of technology and seducing us into decadence with the Antichrist’s slogan: peace and safety.” In other words: It would yoke together a terrified species by promising to rescue it from the apocalypse.

By way of illustration, Thiel suggested that the Antichrist might appear in the form of someone like the philosopher Nick Bostrom—an AI doomer who wrote a paper in 2019 proposing to erect an emergency system of global governance, predictive policing, and restrictions on technology. But it wasn’t just Bostrom. Thiel saw potential Antichrists in a whole Zeitgeist of people and institutions “focused single-mindedly on saving us from progress, at any cost.”

So humanity is doubly screwed: It has to avoid both technological calamity and the reign of the Antichrist. But the latter was far more terrifying for the billionaire at the podium. For reasons grounded in Girardian theory, Thiel believed that such a regime could only—after decades of sickly, pent-up energy—set off an all-out explosion of vicious, civilization-ending violence. And he wasn’t sure whether any katechons could hold it off.

Thiel draws theology from the Bible, philosophy from studying with René Girard, and apparently combines them with ideas from Carl Schmitt, a political theorist from Nazi Germany, to create a rather bizarre warning about our future.

Because Thiel is a billionaire, he’s able to spread his beliefs widely. And because our society is overpopulated with people searching for meaning, we have a problem.

I’ve been collecting news stories and sorting them into two categories. The first deals with delusions that affect individuals. The second collects reports showing how we’re failing as a species. I could have filed this Wired article under both.

Whether as individuals or as a species, we act on false assumptions about reality. We often assume things to exist that don’t. Such as the Antichrist, or for that matter, The Christ. There may or may not have been a historical person we call Jesus. That may or may not have been his name. The concept of Christ was created over several generations of his followers. It has no real existence in reality. And neither does the Biblical Apocalypse or Antichrist. Those concepts have been redefined repeatedly over twenty centuries.

Among the thousands of Christian denominations that have existed over the past two millennia, there is no consensus on what Jesus preached or what is meant by the term Christ. In other words, there is no common denominator among Christians. This is because their beliefs are imaginary concepts that each individual redefines in words for their own use.

Religious beliefs are fine as long as they remain private to an individual, but when they are used to shape reality, they become dangerous. I often read about people who want to use their beliefs to make others conform to their illusions. That disturbs me.

But I’m only now realizing why. It represents a failure of language. Language is useful as long as words point to aspects of reality. The closer words stay to nouns and verbs that have a one-to-one relationship with things or actions in reality, the safer we are. It’s the words whose definitions we fight over that cause trouble. That’s when things get dangerous.

Peter Thiel’s bizarre philosophy becomes dangerous when he can get others to accept his definitions. As I read news stories, I see this validated time and again. How many Russians and Ukrainians who died for Putin’s mirage of words would be alive today? Look at any war, political conflict, or personal argument, and you can often trace it back to the ideas of one person.

Even my words here will incite some people.

As my last years fade away, I struggle to comprehend the years I’ve lived in this reality. I’m starting to see that most of the confusion comes from interpreting words. The more I approach my experiences with the Zen-like acceptance of what is, the calmer things get. Eastern religions took a different approach to reality. In the West, we work to shape reality to our desires. Eastern philosophers teach that we should accept reality as it is. There are also dangers to that approach.

The reality is that humans create climate change. Many people can’t accept that reality. They use language that creates a mirage that many want to believe. That is one form of action. It’s a way of manipulating the perception of reality. That will actually work for those people, for a short while.

The weakness of our species is that we manipulate the perception of reality instead of actually making real changes in reality.

That’s how I now judge the news. A plane crash is a real event, not a mirage. But how often are men seeking power describing something real?

JWH