Am I Too Old To Start A Second Brain?

by James Wallace Harris, 12/8/25

For years now, I’ve been reading about people who create a second brain to record what they want to remember. Most of these second brain systems use software, but not all. Many base their ideas on the Zettelkasten system, which was originally stored on note cards.

Over the years, I’ve tried different methods and software applications. I’m currently learning Obsidian. I’ve used note cards, notebooks, Google Docs, Evernote, OneNote, Instapaper, Recall, and others. I love reading – taking information in – but I don’t like taking notes.

The trouble is, information goes through my brain like a sieve. When I want to tell someone about what I’ve learned, or think I’ve learned, I can’t cite my source, or, for that matter, clearly state what I think I know. And I seldom think about how I’ve come to believe what I believe.

I’m currently reading False by Joe Pierre, MD, about how we all live with delusions. This book makes me want to rededicate myself to creating a second brain for two reasons. First, I want to take precise notes on this book because it offers dozens of insights about how we deceive ourselves, and about how other people are deceived and are deceiving. Second, the book inspires me to start tracking what I think I learn every day and study where that knowledge comes from.

One of the main ways we fool ourselves is with confirmation bias. Pierre says:

In real estate, it’s said that the most important guide to follow when buying a house and trying to understand home values is “location, location, location.” If I were asked about the most important guide to understand the psychology of believing strongly in things that aren’t true, I would similarly answer, “confirmation bias, confirmation bias, confirmation bias.”

Pierre explains how the Internet, Google, AIs, social media, and various algorithms reinforce our natural tendency toward confirmation bias.

Pierre claims there are almost 200 defined cognitive biases. Wikipedia has a nice listing of them. Wikipedia also has an equally nice, long list of fallacies. Look at those two lists; they are what Pierre is describing in his book.

Between these two lists, there are hundreds of ways we fool ourselves. They are part of our psychology. They explain how we interact with people and reality. However, everything is magnified by polarized politics, the Internet, social media, and now AI.

I’d like to create a second brain that would help me become aware of my own biases and fallacies. It would have been more useful if I had started this project when I was young. And I may be too old to overcome a lifetime of delusional thinking.

I do change the way I think sometimes. For example, most of my life, I’ve believed that it was important for humanity to go to Mars. Like Elon Musk, I thought it vital that we create a backup home for our species. I no longer believe either.

Why would I even think about Mars in the first place? I got those beliefs from reading dozens of nonfiction and fictional books about Mars. Why have I changed my mind? Because I have read dozens of articles that debunk those beliefs. In other words, my ideas came from other people.

I would like to create a second brain that tracks how my beliefs develop and change. Could maintaining a second brain help reveal my biases and thinking fallacies? I don’t know, but it might.

Doing the same thing and expecting different results is a common fallacy. Most of my friends are depressed and cynical about current events. Humanity seems to be in an immense Groundhog Day loop of history. Doesn’t it seem like liberals have always wanted to escape this loop, and conservatives wanted to embrace it?

If we have innate mental systems that are consistently faulty, how do we reprogram ourselves? I know my life has been one of repeatable behaviors. Like Phil Connors, I’m looking for a way out of the loop.

Stoicism seems to be the answer in old age. Is it delusional to think enlightenment might be possible?

JWH

Create and Control Your Own Algorithm

by James Wallace Harris, 12/6/25

If you get your news from social media sites, they will feed you what they learn you want to hear. Each site has its own algorithm to help you find the information you prefer. Such algorithms create echo chambers that play to your confirmation bias. It becomes a kind of digital mental masturbation.

Getting information from the internet is like drinking from a firehose. I hate to use such a clichéd phrase, but it’s so true. Over the past decade, I’ve tried many ways to manage this flow of information. I’ve used RSS feed readers, news aggregators, social media sites, browser extensions, and smartphone apps. I’m always overwhelmed, and eventually, their algorithms feed me the same shitty content that thrills my baser self.

I’ve recently tried to reduce my information flow by subscribing to just four print magazines: Harper’s, The Atlantic, The New Yorker, and New York Magazine. I’m still deluged with news. However, I’m hoping the magazine editors will intelligently curate the news for me and keep me out of my own echo chamber.

I’ve even tried to limit my news intake to just one significant essay a day. For example, “The Chatbot-Delusion Crisis” by Matteo Wong from The Atlantic was yesterday’s read. Even while trying to control my own algorithm, I’ve been drawn to similar stories lately — about the dangers of social media and AI.

Today’s article, “When Chatbots Break Our Minds,” by Charlie Warzel, features an interview with Kashmir Hill. In the interview, Hill refers to her article in The New York Times, “Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.”

If I could program my own algorithm for news reading, one of its main features would dazzle me with news about important things I knew nothing about. I’d call that feature Black Swan Reporting.

Another essential feature I’d want in my algorithm, I’d call You’re Full of Shit. This subroutine would look for essays that show me how wrong or delusional I am. For example, we liberals were deluded in thinking our cherished ideals made most Americans happy.

Another useful feature would be Significant News Outside the United States. For example, I listened to a long news story in one of my magazines about how Australia will soon enact a law that bans children under 16 from having social media accounts. This is a significant social experiment I hadn’t heard about, and one that other countries will try in 2026. None of my social media feeds let me know, but then maybe they want to keep such experiments secret.

Mostly, I’d want my algorithm to show me Important Things I Don’t Know, which is the exact opposite of what social media algorithms do.
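If I could sketch that algorithm, even as a toy, it might rank stories by how little they overlap with what I’ve already read, the opposite of what engagement algorithms do. Here is a minimal, hypothetical Python sketch; all the names are mine, and a real system would need something much smarter than word overlap:

```python
# A toy sketch of "Important Things I Don't Know": rank incoming stories
# by how DISSIMILAR they are to past reads, instead of how similar.

def words(text):
    """Lowercase a text into a set of words."""
    return set(text.lower().split())

def novelty(story, history):
    """Score a story against past reads: 1.0 = entirely new, 0.0 = familiar."""
    story_words = words(story)
    if not history:
        return 1.0
    overlaps = []
    for past in history:
        past_words = words(past)
        union = story_words | past_words
        overlaps.append(len(story_words & past_words) / len(union) if union else 0.0)
    return 1.0 - max(overlaps)  # the most similar past read dominates

def rank_feed(stories, history):
    """Return candidate stories ordered most-novel first."""
    return sorted(stories, key=lambda s: novelty(s, history), reverse=True)

history = ["AI chatbots and social media delusion",
           "social media algorithms and confirmation bias"]
feed = ["new AI chatbot delusion study",
        "Australia bans social media for children under 16"]
print(rank_feed(feed, history))
```

Given a reading history full of chatbot stories, the Australia story ranks first, which is exactly the behavior I want and exactly what no commercial feed would give me.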

However, I might need to go beyond one article a day to keep up with current events. That risks turning the feed back up to firehose velocity. How much news do we really need? I’m willing to give up an hour a day to one significant news story that’s educational and enlightening. I might be willing to give up another hour for several lighter but useful stories about reality.

I hate to admit it, but I doomscroll YouTube and Facebook one to two hours a day, in idle moments like resting after working in the yard or waking up in the middle of the night. And their algorithms have zeroed in on my favorite distractions, ones so shallow that I’m embarrassed to admit what they are.

The whole idea of creating a news algorithm driven by self-awareness is rather daunting. But I think we need to try. I’m reading too many stories about how we’re all damned by social media and AI.

I’m anxious to hear what kids in Australia do. Will they go outside and play, or will they find other things on their smartphones to occupy their time? What if the Australian government is forcing a generation to just play video games and look at porn?

JWH

Are Podcasts Wasting Our Time?

by James Wallace Harris, 11/16/25

While listening to the Radio Atlantic podcast, “What If AI Is a Bubble?,” a conversation between host Hanna Rosin and guest Charlie Warzel, I kept thinking I had heard this information before. I checked and found that I had read “Here’s How the AI Crash Happens” by Matteo Wong and Charlie Warzel, which Rosin had mentioned in her introduction.

Over the past year, I’ve been paying attention to how podcasts differ from long-form journalism. I’ve become disappointed with talking heads. I know podcasts are popular now, and I can understand their appeal. But I no longer have the patience for long chats, especially ones that spend too much time not covering the topic. All too often, podcasts take up excessive time for the amount of real information they cover.

What I’ve noticed is that the information density between podcasts and long-form journalism is very different. Here’s a quote, five paragraphs from the podcast:

Warzel: There’s a recent McKinsey report that’s been sort of passed around in these spheres where people are talking about this that said 80 percent of the companies they surveyed that were using AI discovered that the technology had no real—they said “significant”—impact on their bottom line, right?

So there’s this notion that these tools are not yet, at least as they exist now, as transformative as people are saying—and especially as transformative for productivity and efficiency and the stuff that leads to higher revenues. But there’s also these other reasons.

The AI boom, in a lot of ways, is a data-center boom. For this technology to grow, for it to get more powerful, for it to serve people better, it needs to have these data centers, which help the large language models process faster, which help them train better. And these data centers are these big warehouses that have to be built, right? There’s tons of square footage. They take a lot of electricity to run.

But one of the problems is with this is it’s incredibly money-intensive to build these, right? They’re spending tons of money to build out these data centers. So there’s this notion that there’s never enough, right? We’re going to need to keep building data centers. We’re going to need to increase the amount of power, right? And so what you have, basically, is this really interesting infrastructure problem, on top of what we’re thinking of as a technological problem.

And that’s a bit of the reason why people are concerned about the bubble, because it’s not just like we need a bunch of smart people in a room to push the boundaries of this technology, or we need to put a lot of money into software development. This is almost like reverse terraforming the Earth. We need to blanket the Earth in these data centers in order to make this go.

Contrast that with the opening five paragraphs of the article:

The AI boom is visible from orbit. Satellite photos of New Carlisle, Indiana, show greenish splotches of farmland transformed into unmistakable industrial parks in less than a year’s time. There are seven rectangular data centers there, with 23 more on the way.

Inside each of these buildings, endless rows of fridge-size containers of computer chips wheeze and grunt as they perform mathematical operations at an unfathomable scale. The buildings belong to Amazon and are being used by Anthropic, a leading AI firm, to train and run its models. According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

The amount of energy and money being poured into AI is breathtaking. Global spending on the technology is projected to hit $375 billion by the end of the year and half a trillion dollars in 2026. Three-quarters of gains in the S&P 500 since the launch of ChatGPT came from AI-related stocks; the value of every publicly traded company has, in a sense, been buoyed by an AI-driven bull market. To cement the point, Nvidia, a maker of the advanced computer chips underlying the AI boom, yesterday became the first company in history to be worth $5 trillion.

Here’s another way of thinking about the transformation under way: Multiplying Ford’s current market cap 94 times over wouldn’t quite get you to Nvidia’s. Yet 20 years ago, Ford was worth nearly triple what Nvidia was. Much like how Saudi Arabia is a petrostate, the U.S. is a burgeoning AI state—and, in particular, an Nvidia-state. The number keeps going up, which has a buoying effect on markets that is, in the short term, good. But every good earnings report further entrenches Nvidia as a precariously placed, load-bearing piece of the global economy.

America appears to be, at the moment, in a sort of benevolent hostage situation. AI-related spending now contributes more to the nation’s GDP growth than all consumer spending combined, and by another calculation, those AI expenditures accounted for 92 percent of GDP growth during the first half of 2025. Since the launch of ChatGPT, in late 2022, the tech industry has gone from making up 22 percent of the value in the S&P 500 to roughly one-third. Just yesterday, Meta, Microsoft, and Alphabet all reported substantial quarterly-revenue growth, and Reuters reported that OpenAI is planning to go public perhaps as soon as next year at a value of up to $1 trillion—which would be one of the largest IPOs in history. (An OpenAI spokesperson told Reuters, “An IPO is not our focus, so we could not possibly have set a date”; OpenAI and The Atlantic have a corporate partnership.)

Admittedly, the paragraphs in the article are somewhat longer, but judge them by the number of facts each presents.

Some people might say podcasts are more convenient. But I listened to the article. I’ve been subscribing to Apple News+ for a while now. I really didn’t use it daily until I discovered the audio feature. And it didn’t become significant until I began hearing major articles from The New Yorker, The Atlantic, and New York Magazine.

Whenever I listened to podcasts, including those from these magazines, I was generally disappointed with their impact. Conversational speech just can’t compete with the rich informational density of a well-written essay. And once I got used to long-form journalism, the information I got from the internet and television seemed so damn insubstantial.

These magazines have spoiled me. I’m even disappointed with their short-form content. Over my lifetime, I’ve watched magazines fill their pages with shorter and shorter content. Magazines were trading depth for interesting tidbits long before the internet began appealing to our ever-shortening attention spans.

As an experiment, I ask you to start paying attention to the length of the content you consume. Analyze the information density of what you read, either with your eyes or ears. Pay attention to the words that have the greatest impact. Notice what percentage of a piece is opinion and what percentage is reported facts. How are the facts presented? Is a source given? And when you look back, either from a day or a week, how much do you remember?

What do you think when you read or hear:

According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

Don’t you want to know more? Where did those facts come from? Are they accurate? Another measure of content is whether it makes you want to know more. The article above drove my curiosity to insane levels. That’s when I found this YouTube video. Seeing is believing. But judging videos is another issue, one for another time.

JWH

Reading With a Purpose

by James Wallace Harris, 11/12/25

I used to keep up with the world by watching NBC Nightly News with Lester Holt, reading The New York Times on my iPhone, and bingeing YouTube videos. I felt well-informed. That was an illusion.

I then switched to reading The Atlantic, New York Magazine, The New Yorker, and Harper’s Magazine. I focused on the longer articles and developed the habit of reading one significant essay a day. That has taught me how superficial my previous methods were at informing me about what’s going on around the world. Television, the internet, and newspapers were giving me soundbites, while articles provide an education.

However, I still tend to forget this deeper knowledge just as quickly. I don’t like that. I feel like I learn something significant every day. What I’m learning feels heavy and philosophical. However, it drives me nuts that I forget everything so quickly. And I’m not talking about dementia. I think we all forget quickly. Just remember how hard it was to prepare for tests back in school.

I’ve watched dozens of YouTube videos about study methods, and they all show that if you don’t put information to use, it goes away. Use it or lose it. I’ve decided to start reading with a purpose.

At first, I thought I would just save the best articles and refer to them when I wanted to remember. That didn’t work. I quickly forget where I read something. Besides, that approach doesn’t apply any reinforcing methods.

I then thought about writing a blog post for each article. It turns out it takes about a day to do that. And I still forget. I needed something simpler.

I then found Recall AI.

It reads and analyzes whatever webpage you’re on, producing a structured summary. It did so for today’s article by Vann R. Newkirk II, “What Climate Change Will Do to America by Mid-Century.”

Recall lets me save that summary into a structure. But again, this is a lot of work and takes a lot of time. If I were writing an essay or book, it would be a great tool for gathering research.

Recall is also great for understanding what I read, and helpful for quick rereading.

This morning, I got a new idea to try. What if I’m trying to remember too much? What if I narrowed down what I wanted to remember to something specific?

Within today’s article, the author used the term “climate gentrification,” referring to neighborhoods bought up because they are safer from climate change, displacing the poor people who live there. The article mentions Liberty City, a poor neighborhood in Miami with a slightly higher elevation, bought up by developers moving away from low-lying beachfront development.

I think I can remember that concept, climate gentrification. What if I only worked on remembering specific concepts? This got me thinking. I could collect concepts. As my collection grew, I could develop a classification system. A taxonomy of problems that humanity faces. Maybe a Dewey Decimal system of things to know.

I use a note-taking system called Obsidian. It uses hyperlinks to connect your notes, creating relationships between ideas. I could create a vault for collecting concepts. Each time I come across a new concept, I’d enter it into Obsidian, along with a citation where I found it. That might not be too much work.
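If I go this route, a concept note might look something like this: a minimal sketch using Obsidian’s [[wiki-link]] syntax to connect related concepts, with a layout that is just one possibility among many:

```markdown
# Climate gentrification

Developers buy up poorer neighborhoods that sit on safer ground,
displacing the people who live there.

Source: Vann R. Newkirk II, "What Climate Change Will Do to America
by Mid-Century," The Atlantic

Related: [[Heat islands]], [[Sacrifice zones]], [[Climate change inequality]]
```

Each [[wiki-link]] becomes a node in Obsidian’s graph, so the taxonomy would emerge from the links rather than needing to be designed up front.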

I picked several phrases I want to remember and study:

  • Climate gentrification
  • Heat islands
  • Climate dead zones
  • Insurance market collapse
  • Climate change acceleration
  • Economic no-go zones
  • Corporate takeover of public services
  • Climate change inequality
  • Histofuturism
  • Sacrifice zones
  • Corporate feudalism

Contemplating this list made me realize that remembering where I read about each concept will take too much work. I have a browser extension, Readwell Reader, that lets me save the content of a web page. I could save every article I want to remember into a folder, and then search those files for the concept words I remember.

I just did a web search on “climate gentrification” and found it’s already in wide use. I then searched for “corporate feudalism,” and found quite a bit on it too. This suggests I’m onto something: instead of trying to remember specifically what I read and where, I should focus on specific emerging concepts.

Searching on “histofuturism” brought up another article at The Atlantic that references Octavia Butler: “How Octavia Butler Told the Future.” Today’s article by Vann R. Newkirk II is also built around Octavia Butler. This complicates my plan. It makes me want to research the evolution of the concept, which could be very time-consuming.

The point of focusing on key concepts from my reading is to give my reading purpose that will help me remember. But there might be more to it. Concepts are being identified all the time. And they spread. They really don’t become useful until they enter the vernacular. Until a majority of people use a phrase like “climate gentrification,” the reality it points to isn’t visible.

That realization reinforces my hunch to focus on concepts rather than details in my reading. Maybe reading isn’t about specific facts, but about spreading concepts?

JWH

I Find This Very Disturbing

by James Wallace Harris, 11/7/25

I watched two YouTube videos yesterday that disturbed me. Both were about the impact of AI. The first was “How What I Do Is Threatened by AI” by Leo Notenboom. Leo has a website where he answers technical questions. He also makes videos about each problem. His traffic is down because many people are turning to AI to answer their technical questions. Leo eloquently discusses what this means for his business in this video. Asking AIs for help will impact many online companies, including Google. I already prefer to ask Copilot to look something up for me rather than Googling it.

The next video was even more depressing. Julia McCoy reports “AI Just Killed Video Production: The 60-Second Revolution Nobody Saw Coming.” New tools allow anyone to produce videos featuring computer-generated people or cartoon characters who talk with computer-generated lip-synced voices. I’m already seeing tons of these, and I hate them. McCoy points out that old methods of video production required the skills of many different people, taking days, weeks, or months to produce. Those jobs are lost.

I love seeing little short videos on the web. I’ve always admired how ordinary people can be so creative. I saw YouTube and other sites giving millions of people opportunities to create a money-making business.

I’m often dazzled by computer-generated content. It is creative. But I don’t care about giving computers jobs. I admire people for their creative efforts.

Technology allowed millions of people to produce creative content. That was already overwhelming. I’m not sure if the world needs hundreds of millions of people with minimal ability producing zillions of creative works.

For example, I admire a handful of guys who review audiophile equipment. That handful does high-quality work. Then dozens of audiophiles produce so-so videos. I sometimes watch them, but usually not. Now, YouTube is flooded with videos reviewing audiophile equipment by computer-generated hosts with computer-generated voices, scraping information off the web, and using stock video for visuals. It sucks. It’s a perfect example of enshittification and AI slop.

I’m not completely against AI. I ask AIs for help. I’m glad when AI does significant work. For example, one video I watched showed 15 mind-blowing examples of AI successes.

However, we need to set limits. I love funny cat videos. But I don’t want to see funny cat videos generated by AI. I want to see real cats. If I watch a video of a pretty woman or a beautiful nature scene, I want to believe I’m seeing something that exists in reality. If I’m watching a funny cartoon, I want to know a human thought up the words and drew the pictures. Prompt engineering is creative, maybe even an emerging art form, but computers can generate infinities. Real art is often defined by limitations.

I admire people. I admire nature. Sure, I also admire stuff computers create, but when I do, I want to know that it was a work of a computer. Don’t fool with my sense of reality.

Sometimes, I do love doomscrolling. And I love watching YouTube videos for hours at a time. And I love wasting time on Facebook before going to bed. But all the AI slop is spoiling it for me. What I really loved about that content was admiring human creativity and natural beauty. I want to see the best, even when a computer produced it. What spoils the experience is the slop: AI-generated filler, poorly produced human content, and too much computer-produced content.

That’s why I’m returning to magazines. A magazine is limited in scope. Editors must curate what is put into the limited number of pages. Facebook, YouTube, etc., would be much better if they had editors.

I enjoyed TV the most in my life when there were only three channels. The best time in my life for music was when there were only two Top 40 AM channels. I liked reading science fiction far better when I just subscribed to The Magazine of Fantasy & Science Fiction, Galaxy, and Analog.

I’m not against AI. I’m just against too much of it.

JWH