Can We Fight Back Against Enshittification?

by James Wallace Harris, 2/9/26

“Enshittification” is the trendy catchword of the moment. Cory Doctorow coined this handy term and describes what it means in his latest book, Enshittification: Why Everything Suddenly Got Worse and What to Do About It. However, I don’t think you need to read the book to get the idea. At a minimum, just listen to the interview with Doctorow and Tim Wu below on the Ezra Klein Show, titled “We Didn’t Ask for This Internet”:

Tim Wu covers similar ground in his book The Age of Extraction.

For my purposes, I use both terms to point to a specific kind of corporate greed that’s making our lives miserable. We could use both terms in this sentence: The relentless extraction of wealth is leading to the enshittification of society.

Cory Doctorow uses the Internet to illustrate the process. Every program, app, or site begins life doing something wonderful for users. Often, their creators promise to always keep their users’ best interests at the core of their business model. But as time goes on and they need to keep making more money, they forget that promise. Eventually, they will do anything to get more users and more money.

Tim Wu models his term on the evils of private equity and similar practices. In the interview, he gives this chilling example:

In America, hospitals preferentially hire nurses through apps. And they do so as contractors. Hiring contractors means that you can avoid the unionization of nurses. And when a nurse signs on to get a shift through one of these apps, the app is able to buy the nurse’s credit history.

The reason for that is that the U.S. government has not passed a new federal consumer privacy law since 1988, when Ronald Reagan signed a law that made it illegal for video store clerks to disclose your VHS rental habits.

Every other form of privacy invasion of your consumer rights is lawful under federal law. So among the things that data brokers will sell to anyone who shows up with a credit card is how much credit card debt any other person is carrying, and how delinquent it is.

Based on that, the nurses are charged a kind of desperation premium. The more debt they’re carrying, the more overdue that debt is, the lower the wage that they’re offered, on the grounds that nurses who are facing economic privation and desperation will accept a lower wage to do the same job.

Now this is not a novel insight. Paying more desperate workers less money is a thing that you can find in, like, Tennessee Ernie Ford songs about 19th-century coal bosses. The difference is that if you’re a 19th-century coal boss who wants to figure out how much the lowest wage each coal miner you’re hiring is willing to take, you have to have an army of Pinkertons who are figuring out the economic situation of every coal miner, and you have to have another army of guys in green eye shades who are making annotations to the ledger where you’re calculating their pay packet. It’s just not practical. So automation makes this possible.

Doesn’t that sound like a cross between Nineteen Eighty-Four and the way China monitors its citizens? Wu sees the extraction of wealth doing something just as evil, and we could call it enshittification too.

Here’s another example, this time from my New York Magazine subscription: “Body Cam Hustle” is about how people are making money off police videos of drunk drivers. States enacted laws requiring police to wear body cameras to gather evidence and protect the innocent. Now the Internet has gone from promoting cute cat videos to scenes of personal shame. And as a sign that audiences are just as corrupt, viewers prefer to watch women being arrested.

I doubt I need to give any more examples; we all instantly recognize the genius of coining the word enshittification.

Cory Doctorow and Ezra Klein recall fond memories and hopes the Internet gave them when they were young. But it seems the Internet turns everything to shit eventually.

Does every sucky thing that depresses us most today connect to the Internet?

And more importantly, can we fight enshittification?

One area where I noticed people fighting back is with subscriptions. Tim Wu says subscriptions are the new, and more efficient, method of extraction. People are switching to Linux, free and open source software, unsubscribing from cloud storage, and going back to DVDs, CDs, and LPs.

Other people are taking up analog hobbies like sewing, gardening, woodworking, cooking, and handicrafts. Young people feel they are embracing the hobbies their grandparents pursued.

And other people are buying local rather than ordering online.

On the other hand, millions are adopting AI and racing full steam ahead into a dark Blade Runner-like cyberpunk future.

Does running from the clutches of Microsoft or Apple into the arms of Linux really help us escape enshittification? If Facebook and X are evil, does accessing them from Fedora with the Brave browser make them any less evil? (I’m writing this post from Linux, and it’s been a struggle to give up all my favorite software tools on Windows.)

Would we be happier if we shut off the Internet and went back to televisions with antennas? I’ve contemplated what that would be like. My initial fear is that it would be lonelier. I don’t know why. I have many friends I see regularly. I guess the hive mind feels more connected.

I think we like to share. To communicate with like-minded people regarding our specific interests. Before the Internet, I was involved with science fiction fandom. I published fanzines, belonged to Amateur Press Associations (APAs), was part of a local science fiction club, and went to conventions.

I suppose I could regress.

But do people do that? Shouldn’t we figure out how to move forward and solve our enshittification problems? But how?

What if we split the Internet into two segments? We keep the existing Internet and create a new one that requires identity verification. Getting a login would require visiting an agency in person and providing proof of your identity, like when we got Real IDs, but also connecting that identity to three types of biometric data. The login to the new Internet would have to be absolutely foolproof; otherwise, people wouldn’t trust it.

I know this sounds scary and dangerous, but we’re already doing this piecemeal. Both corporations and criminals already know who we are.

Would people behave better on the Internet if they knew everyone knew exactly who they were? I assume that with such tracking of real identities, it would be almost impossible to rip people off since all activity would have a well-documented trail.

For this to work, corporations would have to be just as open and upfront. They would have to make all their log files public. So any individual could examine all the ways they are being tracked.

Is much of enshittification due to anonymity and hidden corporate practices?

What if everything we did on the Internet was out in full sunlight?

I have no idea if this would help. It could make things much worse. But isn’t everything already getting much worse?

JWH

What I Learned After Buying a UGreen DXP2800 NAS

by James Wallace Harris, 1/7/26

Don’t bother reading this essay unless you’re considering the following:

  • Canceling your subscription to a cloud storage site
  • Managing terabytes of data
  • Converting your old movies on discs to Jellyfin or Plex
  • Running Linux programs via Docker

For the past few years, I’ve been watching YouTubers promote NASes (Network Attached Storage). Last year, I just couldn’t help myself and bought a UGreen DXP2800. I’m not sure I needed a NAS; Dropbox has been serving me well for over fifteen years.

[My DXP2800 is pictured above on top of a bookcase. It’s connected to a UPS and a mesh router. It’s a little noisy, but not bad.]

Actually, I loved Dropbox until I figured it was the reason my computers ran warm and noisy. I assume that was because it routinely checked tens of thousands of my files to keep them indexed, copied, and up-to-date on my three computers, two tablets, and an iPhone.

Lesson #1. If you desire simplicity, stay with the cloud. My old system was to use Dropbox and let it keep copies of my files locally on my Windows, Mac, and Linux machines. I figured that was three copies and an off-site backup. That was an easy-to-live-with, simple backup solution. However, I only had 2TB of files, which Dropbox charged me $137 a year to maintain.

Moving to the UGreen DXP2800 meant accessing all my files from the single NAS drive. It’s cooler and quieter on my computers. However, I had to purchase two large external drives for my Mac and Windows machines that I use to automatically back up the NAS drive daily.

Thus, my initial cost to leave Dropbox was the cost of the DXP2800 and two 12TB Seagate drives for a RAID array ($850), plus $269 for a 20TB external drive. I already had an old 8TB external drive for the other backup. And if I want an off-site backup, I need to physically take one of my drives to a friend’s house or pay a backup company $100-200 a year.
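As a back-of-the-envelope sketch using the figures above (the $150 offsite number is just the midpoint of my $100-200 estimate, and I’m ignoring drive failures and electricity), here’s how long the NAS takes to pay for itself versus Dropbox:

```python
# Rough break-even estimate for leaving Dropbox, using the figures from the post.
NAS_AND_RAID = 850       # DXP2800 plus the two Seagate drives
EXTERNAL_DRIVE = 269     # 20TB backup drive (the old 8TB drive was already paid for)
DROPBOX_PER_YEAR = 137   # what Dropbox charged for 2TB
OFFSITE_PER_YEAR = 150   # assumed midpoint of the $100-200 offsite backup estimate

upfront = NAS_AND_RAID + EXTERNAL_DRIVE
breakeven_years = upfront / DROPBOX_PER_YEAR

print(f"Upfront cost: ${upfront}")
print(f"Break-even vs Dropbox alone: {breakeven_years:.1f} years")

# If you also pay for an offsite backup service, the yearly saving vanishes.
saving = DROPBOX_PER_YEAR - OFFSITE_PER_YEAR
if saving <= 0:
    print("With a paid offsite service, the NAS never breaks even on cost alone.")
else:
    print(f"With offsite backup: {upfront / saving:.1f} years")
```

At roughly eight years to break even against Dropbox alone, the savings argument is weaker than it first looks.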

And I have more to back up now. I was running Plex on my Mac using a 4TB SSD. Basically, I ripped a movie when needed. Since I got the UGreen DXP2800 and 12TB of space, I’ve been ripping all my movies and TV shows that I own on DVD and Blu-ray. I’ve ripped about half of them, and I figure I’ll use up 8-10TB of my RAID drive space.

I’ve been working for weeks ripping discs. I had no idea we had accumulated so many old movies and TV shows over the last thirty years. Susan and I had gotten tired of using a DVR/BD player, so we shelved all those discs on a neglected bookcase and subscribed to several streaming services.

When I bought the UGreen DXP2800, I thought we could cancel some of our subscriptions. We are viewing our collection via Jellyfin, but we haven’t canceled any streaming services.

I should finish the disc ripping in another couple of weeks. At least I hope. It’s a tedious process. My fantasy is having this wonderful digital library of movies and television shows we love, and we’ll rewatch them for the rest of our lives. I even fantasized about quitting all our streaming services. But I don’t think that will happen.

Looking at what TV shows Susan and I watched during 2025, none were from our library. Susan has started rewatching her old favorite movies. She especially loves to watch her favorite Christmas movies every year. And I have talked her into watching two old TV shows I bought on disc years ago, The Fugitive and Mr. Novak. Both shows premiered in 1963, and neither is on a streaming service.

Lesson #2. It would have taken much less effort to just watch the shows on disc. And when I’ve converted them all, I will have 10TB of data that I must protect. It’s a huge burden that hangs over my head.

Lesson #3. I tried to save money by using the free MakeMKV program. It works great, but it creates large files and is somewhat slow. I eventually spent $40 for WinX DVD Ripper for Mac. It’s faster and creates smaller .mp4 files. However, it doesn’t rip Blu-ray discs. I found another Mac program that will, but it will cost another $49. I bought a $39 program for the PC to rip Blu-ray discs, but it was painfully slow. They claimed to have a 90-day money-back guarantee, yet the company ignored my request to return my money. It pisses me off that there are several appealing ripping programs I’d like to try, but they all want their money up front. Most offer a trial that will run a 2-minute test. That’s not enough. I’m happy with WinX DVD Ripper for Mac; I just wish it ripped Blu-rays.

Even then, files that are ripped from Blu-ray movies are huge and take much longer to rip. I’m not sure Blu-ray is worth it.

I tend to feel movies and TV shows look better on streaming services, though most people won’t notice. My wife doesn’t see the difference between DVD and Blu-ray. For ripping, I prefer DVDs.

Lesson #4. I bought the UGreen NAS even though I wanted a Synology NAS. UGreen just had better hardware. I thought I wanted to get into Docker containers, and UGreen had the hardware for that at the price I wanted to pay. However, setting up Docker containers requires a significant amount of Linux savvy.

I kind of wish I had gotten Synology. It runs many programs natively, so you don’t have to mess with Docker. I hope UGreen will do more of that in the future. I spent days trying to get the YACReader server running. I never succeeded. That was frustrating because I really want it.

There are many services I’d like to run, but I just don’t have the Docker and Linux skills.

Final Thoughts

I’m not sure I would buy a NAS, knowing what I know now. However, if I could figure out how to run programs via Docker, I might go whole hog on NASes. In which case, I would regret getting the 2-drive DXP2800. At first, I thought I’d be good getting two 8TB drives to put into RAID. But I spent more for two 12TB drives, just in case. If I really get into having a home lab, I should have bought the 4-drive DXP4800 Plus.

There are many features I wish UGreen would offer for its software. If all the programs I wanted to run ran natively on the UGreen OS and were easy to use, I think I would love having a NAS.

Setting up file sharing was easy. I got it working on my Mac, Windows, Linux, Android, iPad, and iPhone. However, it’s hard to open files using the UGreen app on iOS and Android. I don’t know why UGreen just can’t make an all-purpose file viewer. Dropbox can open several file types on my iPhone. UGreen expects me to save the file to my iPhone and then view it with an iPhone app. However, I can’t get my iPhone apps to find where the UGreen app saved the file.

That’s why I want the YACReaderLibrary Server running on the DXP2800. I have YACReader running on every device. It can read .pdf, .cbr, .cbz, .jpg, .png, .tiff, and more. Too bad it doesn’t read Word and Excel files too. I think other Linux server apps can handle even more file types. I want my NAS to be a document server.

I’m moving forward with my NAS. If I fail, I’ll regret buying the NAS. Or, I might create a server full of useful apps that I can’t live without. That sounds fun, but it also sounds like it could become a lifelong burden.

JWH

Past-Present-Future As It Relates to Fiction-Nonfiction-Fantasy-SF

by James Wallace Harris, 12/12/25

I’ve been contemplating how robot minds could succeed at explaining reality if they didn’t suffer the errors and hallucinations that current AIs do. Current AI minds evolve from training on massive amounts of words and images created by humans stored as digital files. Computer programs can’t tell fiction from fact based on our language. It’s no wonder they hallucinate. And like humans, they feel they must always have an answer, even if it’s wrong.

What if robots were trained on what they see with their own senses without using human language? Would robots develop their own language that described reality with greater accuracy than humans do with our languages?

Animals interact successfully with reality without language. But we doubt they are sentient in the way we are. But just how good is our awareness of reality if we constantly distort it with hallucinations and delusions? What if robots could develop consciousness that is more accurately self-aware of reality?

Even though we feel like a being inside a body, peering out at reality with five senses, we know that’s not true. Our senses recreate a model of reality that we experience. We enhance that experience with language. However, language is the source of all our delusions and hallucinations.

The primary illusion we all experience is time. We think there is a past, present, and future. There is only now. We remember what was, and imagine what will be, but we do that with language. Unfortunately, language is limited, misleading, and confusing.

Take, for instance, events in the New Testament. Thousands, if not millions, of books have been written on specific events that happened over two thousand years ago. It’s endless speculation trying to describe what happened in a now that no longer exists. Even describing an event that occurred just one year ago is impossible to recreate in words. Yet, we never stop trying.

Compounding our delusions further is fiction. We love fiction. Most of us spend hours a day consuming it—novels, television shows, movies, video games, plays, comics, songs, poetry, manga, fake news, lies, etc. Often, fiction is about recreating past events. Because we can’t accurately describe the past, we constantly create new hallucinations about it.

Then there is fantasy and science fiction. More and more, we love to create stories based on imagination and speculation. Fantasy exists outside of time and space, while science fiction attempts to imagine what the future might be like based on extrapolation and speculation.

My guess is that any robot (or being) that perceives reality without delusions will not use language and have a very different concept of time. Is that even possible? We know animals succeed at this, but we doubt how conscious they are of reality.

Because robots will have senses that take in digital data, they could use playback to replace language. Instead of one robot communicating to another robot, “I saw a rabbit,” they could just transmit a recording of what they saw. Like humans, robots will have to model reality in their heads. Their umwelt will create a sensorium they interact with. Their perception of now, like ours, will be slightly delayed.

However, they could recreate the past by playing a recording that filled their sensorium with old data recordings. The conscious experience would be indistinguishable from using current data. And if they wanted, they could generate data that speculated on the future.

Evidently, all beings, biological or cybernetic, must experience reality as a recreation in their minds. In other words, no entity sees reality directly. We all interact with it in a recreation.

Looking at things this way makes me wonder about consuming fiction. We’re already two layers deep in artificial reality. The first is our sensorium/umwelt, which we feel is reality. And the second is language, which we think explains reality, but doesn’t. Fiction just adds another layer of delusion. Mimetic fiction tries to describe reality, but fantasy and science fiction add yet another layer of delusion.

Humans who practice Zen Buddhism try to tune out all the illusions. However, they talk about a higher state of consciousness called enlightenment. Is that just looking at reality without delusion, or is it a new way of perceiving reality?

Humans claim we are the crown of creation because our minds elevate us over the animals, but is intelligence or consciousness really superior?

We apparently exist in a reality that is constantly evolving. Will consciousness be something reality tries and then abandons? Will robots with artificial intelligence become the next stage in this evolutionary process?

If we’re a failure, why copy us? Shouldn’t we build robots that are superior to us? Right now, AI is created by modeling the processes of our brains. Maybe we should rethink that. But if we build robots that have a higher state of consciousness, couldn’t we also reengineer our brains and create Human Mind 2.0?

What would that involve? We’d have to overcome the limitations of language. We’d also have to find ways to eliminate delusions and hallucinations. Can we consciously choose to do those things?

JWH

Create and Control Your Own Algorithm

by James Wallace Harris, 12/6/25

If you get your news from social media sites, they will feed you what they learn you want to hear. Each site has its own algorithm to help you find the information you prefer. Such algorithms create echo chambers that play to your confirmation bias. It becomes a kind of digital mental masturbation.

Getting information from the internet is like drinking from a firehose. I hate to use such a clichéd phrase, but it’s so true. Over the past decade, I’ve tried many ways to manage this flow of information. I’ve used RSS feed readers, news aggregators, social media sites, browser extensions, and smartphone apps. I’m always overwhelmed, and eventually, their algorithms feed me the same shitty content that thrills my baser self.

I’ve recently tried to reduce my information flow by subscribing to just four print magazines: Harper’s, The Atlantic, The New Yorker, and New York Magazine. I’m still deluged with news. However, I’m hoping the magazine editors will intelligently curate the news for me and keep me out of my own echo chamber.

I’ve even tried to limit my news intake to just one significant essay a day. For example, “The Chatbot-Delusion Crisis” by Matteo Wong from The Atlantic was yesterday’s read. Even while trying to control my own algorithm, I’ve been drawn to similar stories lately — about the dangers of social media and AI.

Today’s article, “When Chatbots Break Our Minds,” by Charlie Warzel, features an interview with Kashmir Hill. In the interview, Hill refers to her article in The New York Times, “Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.”

If I could program my own algorithm for news reading, one of the main features I’d hope to create is dazzling myself with news about important things I knew nothing about. I’d call such a feature Black Swan Reporting.

Another essential feature I’d want in my algorithm I’d call You’re Full of Shit. This subroutine would look for essays that show me how wrong or delusional I am. For example, we liberals were deluded in thinking our cherished ideals made most Americans happy.

Another useful feature would be Significant News Outside the United States. For example, I listened to a long news story in one of my magazines about how Australia will soon enact a law that bans children under 16 from having social media accounts. This is a significant social experiment I hadn’t heard about, and one that other countries will try in 2026. None of my social media feeds let me know, but then maybe they want to keep such experiments secret.

Mostly, I’d want my algorithm to show me Important Things I Don’t Know, which is the exact opposite of what social media algorithms do.
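The core of such an algorithm would be inverting what social media does: rank stories by novelty instead of similarity. Here’s a toy Python sketch of that inversion — everything in it is hypothetical (made-up article snippets and a crude Jaccard word-overlap measure, not a real recommender):

```python
# Toy "Important Things I Don't Know" scorer. Social media boosts articles
# similar to what you already read; this does the opposite: the LESS a
# candidate article overlaps with your recent reading, the higher it scores.

def words(text: str) -> set[str]:
    return set(text.lower().split())

def novelty(candidate: str, recent_reads: list[str]) -> float:
    """1.0 = nothing in common with recent reading; 0.0 = pure echo chamber."""
    cand = words(candidate)
    overlaps = [len(cand & words(r)) / len(cand | words(r))  # Jaccard similarity
                for r in recent_reads]
    return 1.0 - max(overlaps, default=0.0)

# Hypothetical reading history and candidate headlines.
recent = ["ai chatbots cause delusional spirals",
          "social media algorithms and echo chambers"]
candidates = ["australia bans social media for children under 16",
              "ai chatbots and delusional users"]

# Rank the most novel story first.
ranked = sorted(candidates, key=lambda c: novelty(c, recent), reverse=True)
```

A real version would need far better similarity measures than word overlap, but the design choice is the point: the sort key rewards distance from your history rather than closeness to it.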

However, I might need to go beyond one article a day to keep up with current events. That risks turning up the feed to fire hose velocity. How much news do we really need? I’m willing to give up an hour a day to one significant news story that’s educational and enlightening. I might be willing to give up another hour for several lighter but useful stories about reality.

I hate to admit it, but I doom scroll YouTube and Facebook one to two hours a day because of idle moments like resting after working in the yard or waking up in the middle of the night. And their algorithms have zeroed in on my favorite distractions, ones that are so shallow that I’m embarrassed to admit what they are.

The whole idea of creating a news algorithm driven by self-awareness is rather daunting. But I think we need to try. I’m reading too many stories about how we’re all damned by social media and AI.

I’m anxious to hear what kids in Australia do. Will they go outside and play, or will they find other things on their smartphones to occupy their time? What if the Australian government is forcing a generation to just play video games and look at porn?

JWH

Are Podcasts Wasting Our Time?

by James Wallace Harris, 11/16/25

While listening to the Radio Atlantic podcast, “What If AI Is a Bubble?,” a conversation between host Hanna Rosin and guest Charlie Warzel, I kept thinking I had heard this information before. I checked and found that I had read “Here’s How the AI Crash Happens” by Matteo Wong and Charlie Warzel, which Rosin had mentioned in her introduction.

Over the past year, I’ve been paying attention to how podcasts differ from long-form journalism. I’ve become disappointed with talking heads. I know podcasts are popular now, and I can understand their appeal. But I no longer have the patience for long chats, especially ones that spend too much time not covering the topic. All too often, podcasts take up excessive time for the amount of real information they cover.

What I’ve noticed is that the information density of podcasts and long-form journalism is very different. Here’s a quote of five paragraphs from the podcast:

Warzel: There’s a recent McKinsey report that’s been sort of passed around in these spheres where people are talking about this that said 80 percent of the companies they surveyed that were using AI discovered that the technology had no real—they said “significant”—impact on their bottom line, right?

So there’s this notion that these tools are not yet, at least as they exist now, as transformative as people are saying—and especially as transformative for productivity and efficiency and the stuff that leads to higher revenues. But there’s also these other reasons.

The AI boom, in a lot of ways, is a data-center boom. For this technology to grow, for it to get more powerful, for it to serve people better, it needs to have these data centers, which help the large language models process faster, which help them train better. And these data centers are these big warehouses that have to be built, right? There’s tons of square footage. They take a lot of electricity to run.

But one of the problems is with this is it’s incredibly money-intensive to build these, right? They’re spending tons of money to build out these data centers. So there’s this notion that there’s never enough, right? We’re going to need to keep building data centers. We’re going to need to increase the amount of power, right? And so what you have, basically, is this really interesting infrastructure problem, on top of what we’re thinking of as a technological problem.

And that’s a bit of the reason why people are concerned about the bubble, because it’s not just like we need a bunch of smart people in a room to push the boundaries of this technology, or we need to put a lot of money into software development. This is almost like reverse terraforming the Earth. We need to blanket the Earth in these data centers in order to make this go.

Contrast that with the opening five paragraphs of the article:

The AI boom is visible from orbit. Satellite photos of New Carlisle, Indiana, show greenish splotches of farmland transformed into unmistakable industrial parks in less than a year’s time. There are seven rectangular data centers there, with 23 more on the way.

Inside each of these buildings, endless rows of fridge-size containers of computer chips wheeze and grunt as they perform mathematical operations at an unfathomable scale. The buildings belong to Amazon and are being used by Anthropic, a leading AI firm, to train and run its models. According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

The amount of energy and money being poured into AI is breathtaking. Global spending on the technology is projected to hit $375 billion by the end of the year and half a trillion dollars in 2026. Three-quarters of gains in the S&P 500 since the launch of ChatGPT came from AI-related stocks; the value of every publicly traded company has, in a sense, been buoyed by an AI-driven bull market. To cement the point, Nvidia, a maker of the advanced computer chips underlying the AI boom, yesterday became the first company in history to be worth $5 trillion.

Here’s another way of thinking about the transformation under way: Multiplying Ford’s current market cap 94 times over wouldn’t quite get you to Nvidia’s. Yet 20 years ago, Ford was worth nearly triple what Nvidia was. Much like how Saudi Arabia is a petrostate, the U.S. is a burgeoning AI state—and, in particular, an Nvidia-state. The number keeps going up, which has a buoying effect on markets that is, in the short term, good. But every good earnings report further entrenches Nvidia as a precariously placed, load-bearing piece of the global economy.

America appears to be, at the moment, in a sort of benevolent hostage situation. AI-related spending now contributes more to the nation’s GDP growth than all consumer spending combined, and by another calculation, those AI expenditures accounted for 92 percent of GDP growth during the first half of 2025. Since the launch of ChatGPT, in late 2022, the tech industry has gone from making up 22 percent of the value in the S&P 500 to roughly one-third. Just yesterday, Meta, Microsoft, and Alphabet all reported substantial quarterly-revenue growth, and Reuters reported that OpenAI is planning to go public perhaps as soon as next year at a value of up to $1 trillion—which would be one of the largest IPOs in history. (An OpenAI spokesperson told Reuters, “An IPO is not our focus, so we could not possibly have set a date”; OpenAI and The Atlantic have a corporate partnership.)

Admittedly, the paragraphs in the article are somewhat longer, but judge them on the number of facts each presents.

Some people might say podcasts are more convenient. But I listened to the article. I’ve been subscribing to Apple News+ for a while now. I really didn’t use it daily until I discovered the audio feature. And it didn’t become significant until I began hearing major articles from The New Yorker, The Atlantic, and New York Magazine.

Whenever I listened to a podcast, including podcasts from those magazines, I was generally disappointed with their impact. Conversational speech just can’t compete with the rich informational density of a well-written essay. And once I got used to long-form journalism, the information I got from the internet and television seemed so damn insubstantial.

These magazines have spoiled me. I’m even disappointed with their short-form content. Over my lifetime, I’ve watched magazines fill their pages with shorter and shorter pieces. Magazines were printing interesting tidbits long before the internet began appealing to our ever-shortening attention spans.

As an experiment, I ask you to start paying attention to the length of the content you consume. Analyze the information density of what you read, either with your eyes or ears. Pay attention to the words that have the greatest impact. Notice what percentage of a piece is opinion and what percentage is reported facts. How are the facts presented? Is a source given? And when you look back, either from a day or a week, how much do you remember?
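One crude way to run that experiment with code: count the fraction of sentences that contain a number, as a rough stand-in for reported facts. This metric is my own invention for illustration, not a standard measure, and the two sample passages below are abridged from the quotes earlier in this post:

```python
import re

def fact_density(text: str) -> float:
    """Fraction of sentences containing at least one digit (a crude fact proxy)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    with_numbers = sum(1 for s in sentences if re.search(r"\d", s))
    return with_numbers / len(sentences)

article = ("Global spending on AI is projected to hit $375 billion. "
           "Nvidia became the first company worth $5 trillion. "
           "The number keeps going up.")
podcast = ("There's this notion that these tools are not transformative. "
           "We need to keep building data centers. "
           "It's incredibly money-intensive to build these.")

print(fact_density(article), fact_density(podcast))
```

Even this blunt instrument separates the two: the article paragraphs score high because nearly every sentence carries a figure, while the conversational paragraphs score near zero.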

What do you think when you read or hear:

According to one estimate, this data-center campus, far from complete, already demands more than 500 megawatts of electricity to power these calculations—as much as hundreds of thousands of American homes. When all the data centers in New Carlisle are built, they will demand more power than two Atlantas.

Don’t you want to know more? Where did those facts come from? Are they accurate? Another measure of content is whether it makes you want to know more. The article above drove my curiosity to insane levels. That’s when I found this YouTube video. Seeing is believing. But judging videos is another issue, and that’s for another time.

JWH