Why I Deleted Facebook and Twenty Other Apps from My iPhone

by James Wallace Harris, 4/21/24

Lately, I’ve been encountering numerous warnings about the dangers of the internet and smartphones. Jonathan Haidt is promoting his new book The Anxious Generation. Even though it’s about the increase in mental illness among young girls who use smartphones, I think it might tangentially apply to an old guy like me too.

Haidt was inspired to write his book because of reports about the sharp rise in mental illness in young people since 2010. That was just after the invention of the iPhone and the beginnings of social media apps. Recent studies show a correlation between the use of social media on smartphones and increased reports of mental illness in young girls. I’m not part of Haidt’s anxious generation, but I do wonder if the internet, social media, and smartphones are affecting us old folks too.

Johann Hari’s book, Stolen Focus, is about losing our ability to pay attention, which does affect me. I know I have a focusing problem. I can’t apply myself like I used to. For years, I’ve been thinking it was because I was getting old. Now I wonder if it’s not the internet and smartphones. Give me an iPhone and a La-Z-Boy and I’m a happy geezer but not a productive one.

So, I’ve decided to test myself. I deleted Facebook and about twenty other apps from my iPhone. All the ones that keep me playing on my phone rather than doing something else. I didn’t quit Facebook, or other social media accounts, just deleted the apps off my phone. I figure if I need to use them, I’ll have to get my fat ass out of my La-Z-Boy and go sit upright at my desktop computer.

This little experiment has had an immediate impact — withdrawal symptoms. Without Facebook, YouTube, and all the other apps I kept playing with all day long, I sit in my La-Z-Boy thinking, “What can I do?” I rationalized that reading the news is good, but then I realized that I had way too many news apps. With some trepidation, I deleted The Washington Post, Ground News, Feedly, Reddit, Instapaper, and other apps, except for The New York Times and Apple News+.

I had already deleted Flipboard because it was one huge clickbait trap, but couldn’t that also be true of other news apps? They all demand our attention. When does keeping current turn into a news addiction? What is the minimum daily requirement of news to stay healthy and informed? What amount constitutes news obesity?

I keep picking up my iPhone wanting to do something with it, but there’s less and less to do. I kept The New York Times games app. I play Mini Crossword, Wordle, Connections, and Sudoku every morning. For now, I’m rationalizing that playing those games is exercise for my brain. They only take about 20-30 minutes total. And I can’t think of any non-computer alternatives.

I still use my iPhone for texting, phoning, music streaming, audiobooks, checking the weather, looking up facts, reading Kindle books, etc. The iPhone has become the greatest Swiss Army knife of useful tools ever invented. I don’t think I could ever give it up. Whenever the power goes out, Susan and I go through withdrawal anxiety. Sure, we miss electricity, heating, and cooling, but what we miss the most is streaming TV and the internet. We’ve experienced several three-day outages, and it bugs us more than I think it should.

One of the insights Jonathan Haidt provides is his story about asking groups of parents two questions:

  1. At what age were you allowed to go off alone unsupervised as a child?
  2. At what age did you let your children go off unsupervised?

The parents would generally say 5-7 for themselves, and 10-12 for their children. Kids today are overprotected, and smartphones let them retreat from the world even further. Which makes me ask: Am I retreating from the world when I use my smartphone or computer? Has the iPhone become like a helicopter parent that keeps me tied to its apron strings?

That’s a hard question to answer. Isn’t retiring a kind of retreat from the world? Doesn’t getting old make us pull back too? My sister offered a funny observation about life years ago, “We start off life in a bed in a room by ourselves with someone taking care of us, and we end up in bed in a room by ourselves with someone taking care of us.” Isn’t screen addiction only hurrying us towards that end? And will we die with our smartphones clutched tightly in our gnarled old fingers?

Is reading a hardback book any less real than reading the same book on my iPhone screen, or listening to it with earbuds and an iPhone? With the earbuds I can walk, work in the yard, or wash dishes while reading. Is reading The Atlantic from a printed magazine a superior experience to reading it on my iPhone with Apple News+?

Is looking at funny videos less of a life experience than playing with my cat or walking in the botanic gardens?

Haidt ends up advising parents to allow children under sixteen only a flip phone. He would prefer kids wait even longer to get a smartphone, until they complete normal adolescent development, but he doesn’t think that will happen. I don’t think kids will ever go back to flip phones. The other day I noticed that one of the apps I had was rated for ages 4+ in the App Store.

Are retired folks missing any kind of elder years of psychological development because we use smartphones? As a bookworm with a lifelong addiction to television and recorded music, how can I even know what a normal life would be like? I’m obviously not a hunter and gatherer human, or an agrarian human, or even a human adapted to industrialization. Is white collar work the new natural? Didn’t we live in nature too long ago for it to be natural anymore?

Aren’t we quickly adapting to a new hivemind way of living? Are the warnings pundits give about smartphones just identifying the side effects of evolving into a new human social structure? Is cyberization the new phase of humanity?

There were people who protested industrialization, but we didn’t reject it. Should we have? Now that there are people rejecting the hivemind, should we reject it too? Or jump in faster?

For days now I’ve been restless without my apps. I have been more active. I seeded my front lawn with mini clover and have been watering and watching it come in. I contracted to have our old bathtub replaced with a shower so it will be safer for Susan. I’ve been working with a bookseller to sell my old science fiction magazines. And I’ve been trying to walk more. However, I’ve yet to do the things I hoped to do when I decided to give up my apps.

It’s hard to tell the cause of doing less later in life. Is it aging? Is it endless distractions? Is it losing the discipline of work after retiring? Before giving up all my apps, I would recline in my La-Z-Boy and play on my iPhone regretting I wasn’t doing anything constructive. Now I sit in my La-Z-Boy doing nothing and wonder why I’m not doing anything constructive. I guess it’s taken a long time to get this lazy, so it might take just as long to overcome that laziness.

JWH

A Painful Challenge to My Ego

James Wallace Harris, 2/26/24

I’m hitting a new cognitive barrier that stops me cold. It’s making me doubt myself. I’ve been watching several YouTubers report on the latest news in artificial intelligence and I’ve been amazed by their ability to understand and summarize a great amount of complex information. I want to understand the same information and summarize it too, but I can’t. Struggling to do so wounds my ego.

This experience is forcing me to contemplate my decaying cognitive abilities. I had a similar shock ten years ago when I retired. I was sixty-two and training a woman in her twenties to take over my job. She blew my mind by absorbing the information I gave her as fast as I could tell her. One reason I chose to retire early is because I couldn’t learn the new programming language, framework, and IDE that our IT department was making standard. That young woman was learning my servers and old programs in a language she didn’t know at a speed that shocked and awed me. My ego figured something was up, even then, when it was obvious this young woman could think several times faster than I could. I realized that’s what getting old meant.

I feel like a little aquarium fish that keeps bumping into an invisible barrier. My Zen realization is I’ve been put in a smaller tank. I need to map the territory and learn how to live with my new limitations. Of course, my ego still wants to maximize what I can do within those limits.

I remember as my mother got older, my sister and I had to decide when and where she could drive because she wouldn’t limit herself for her own safety. Eventually, my sister and I had to take her car away. I’m starting to realize that I can’t write about certain ideas because I can’t comprehend them. Will I always have the self-awareness to know what I can comprehend and what I can’t?

This makes me think of Joe Biden and Donald Trump. Both are older than I am. Does Biden realize what he’s forgotten? Does Trump even understand he can’t possibly know everything he thinks he knows? Neither guy wants to give up because of their egos.

So, what am I not seeing about myself? I’m reminded of Charlie Gordon in the story “Flowers for Algernon,” when Charlie was in his intellectual decline phase.

Are there tools we could use to measure our own decline? Well, that’s a topic for another essay, but I believe blogging might be one such tool.

JWH

ChatGPT Isn’t an Artificial Intelligence (AI) But an Artificial Unconsciousness (AU)

by James Wallace Harris, 2/12/24

This essay is for anyone who wants to understand themselves and how creativity works. What I’m about to say will make more sense if you’ve played with ChatGPT or have some understanding of recent AI programs in the news. Those programs appear to be amazingly creative by answering ordinary questions, passing tests that lawyers, mathematicians, and doctors take, generating poems and pictures, and even creating music and videos. They often appear to have human intelligence even though they are criticized for making stupid mistakes — but then so do humans.

We generally think of our unconscious minds as mental processes occurring automatically below the surface of our conscious minds, out of our control. We believe our unconscious minds are neural functions that influence thought, feelings, desires, skills, perceptions, and reactions. Personally, I assume feelings, emotions, and desires come from an even deeper place and are based on hormones and are unrelated to unconscious intelligence.

It occurred to me that ChatGPT and other large language models are analogs for the unconscious mind, and this made me observe my own thoughts more closely. I don’t believe in free will. I don’t even believe I’m writing this essay. The keyword here is “I” and how we use it. If we use “I” to refer to our whole mind and body, then I’m writing the essay. But if we think of the “I” as the observer of reality that comes into being when I’m awake, then probably not. You might object to this strongly because our sense of I-ness feels obviously in full control of the whole shebang.

But what if our unconscious minds are like AI programs, what would that mean? Those AI programs train on billions of pieces of data, taking a long time to learn. But then, don’t children do something similar? The AI programs work by being prompted with a question. If you play a game of Wordle, aren’t you prompting your unconscious mind? Could you write a step-by-step flow chart of how you solve a Wordle game consciously? Don’t your hunches just pop into your mind?

If our unconscious minds are like ChatGPT, then we can improve them by feeding in more data and giving them better prompts. Isn’t that what we do when studying and taking tests? Computer scientists are working hard to improve their AI models. They give their models more data and refine their prompts. If they want their model to write computer programs, they train their models in more computer languages and programs. If we want to become an architect, we train our minds with data related to architecture. (I must wonder about my unconscious mind; it’s been trained on decades of reading science fiction.)

This would also explain why you can’t easily change another person’s mind. Training takes a long time. The unconscious mind doesn’t respond to immediate logic. If you’ve trained your mental model all your life on The Bible or investing money, it won’t be influenced immediately by new facts regarding science or economics.

We live by the illusion that we’re teaching the “I” function of our mind, the observer, the watcher, but what we’re really doing is training our unconscious mind like computer scientists train their AI models. We might even fool ourselves that free will exists because we believe the “I” is choosing the data and prompts. But is that true? What if the unconscious mind tells the “I” what to study? What to create? If the observer exists separate from intelligence, then we don’t have free will. But how could ChatGPT have free will? Humans created it, deciding on the training data, and the prompts. Are our unconscious minds creating artificial unconscious minds? Maybe nothing has free will, and everything is interrelated.

If you’ve ever practiced meditation, you’ll know that you can watch your thoughts. Proof that the observer is separate from thinking. Twice in my life I’ve lost the ability to use words and language, once in 1970 because of a large dose of LSD, and about a decade ago with a TIA. In both events I observed the world around me without words coming to mind. I just looked at things and acted on conditioned reflexes. That let me experience a state of consciousness with low intelligence, one like animals know. I now wonder if I was cut off from my unconscious mind. And if that’s true, it implies language and thoughts come from the unconscious mind, and not from what we call conscious awareness. That the observer and intelligence are separate functions of the mind.

We can get ChatGPT to write an essay for us, and it has no awareness of its actions. We use our senses to create a virtual reality in our head, an umwelt, which gives us a sensation that we’re observing reality and interacting with it, but we’re really interacting with a model of reality. I call this function that observes our model of reality the watcher. But what if our thoughts are separate from this viewer, this watcher?

If we think of large language models as analogs for the unconscious mind, then everything we do in daily life is training for our mental model. Then does the conscious mind stand in for the prompt creator? I’m on the fence about this. Sometimes the unconscious mind generates its own prompts, sometimes prompts are pushed onto us from everyday life, but maybe, just maybe, we occasionally prompt our unconscious mind consciously. Would that be free will?

When I write an essay, I have a brain function that works like ChatGPT. It generates text but as it comes into my conscious mind it feels like I, the viewer, created it. That’s an illusion. The watcher takes credit.

Over the past year or two I’ve noticed that my dreams are acquiring the elements of fiction writing. I think that’s because I’ve been working harder at understanding fiction. Like ChatGPT, we’re always training our mental model.

Last night I dreamed a murder mystery involving killing someone with nitrogen. For years I’ve heard about people committing suicide with nitrogen, and then a few weeks ago Alabama executed a man using nitrogen. My wife and I have been watching two episodes of Perry Mason each evening before bed. I think the ChatGPT feature in my brain took all that in and generated that dream.

I have a condition called aphantasia, which means I don’t consciously create mental pictures. However, I do create imagery in dreams, and sometimes when I’m drowsy, imagery and even dream fragments float into my conscious mind. It’s like my unconscious mind is leaking into the conscious mind. I know these images and thoughts aren’t part of conscious thinking. But the watcher can observe them.

If you’ve ever played with the AI program Midjourney that creates artistic images, you know that it often creates weirdness, like three-armed people, or hands with seven fingers. Dreams often have such mistakes.

When AIs produce fictional results, the computer scientists say the AI is hallucinating. If you pay close attention to people, you’ll know we all live by many delusions. I believe programs like ChatGPT mimic humans in more ways than we expected.

I don’t think science is anywhere close to explaining how the brain produces the observer, that sense of I-ness, but science is getting much closer to understanding how intelligence works. Computer scientists say they aren’t there yet, and plan for AGI, or artificial general intelligence. They keep moving the goalposts. What they really want are computers much smarter than humans that don’t make mistakes or hallucinate. I don’t know if computer scientists care if computers have awareness like our internal watchers, that sense of I-ness. Sentient computers are something different.

I think what they’ve discovered is intelligence isn’t conscious. If you talk to famous artists, writers, and musicians, they will often talk about their muses. They’ve known for centuries their creativity isn’t conscious.

All this makes me think about changing how I train my model. What if I stopped reading science fiction and only read nonfiction? What if I cut out all forms of fiction including television and movies? Would it change my personality? Would I choose different prompts seeking different forms of output? If I do, wouldn’t that be my unconscious mind prompting me to do so?

This makes me ask: If I watched only Fox News would I become a Trump supporter? How long would it take? Back in the Sixties there was a catch phrase, “You are what you eat.” Then I learned a computer acronym, GIGO — “Garbage In, Garbage Out.” Could we say free will exists if we control the data we use to train our unconscious minds?

JWH

I’m Too Dumb to Use Artificial Intelligence

by James Wallace Harris, 1/19/24

I haven’t done any programming since I retired. Before I retired, I assumed I’d do programming for fun, but I never found a reason to write a program over the last ten years. Then, this week, I saw a YouTube video about PrivateGPT that would allow me to train an AI to read my own documents (.pdf, docx, txt, epub). At the time I was researching Philip K. Dick, and I was overwhelmed by the amount of content I was finding about the writer. So, a light bulb went off in my head: why not use AI to help me read and research Philip K. Dick? I really wanted to feed the six volumes of collected letters of PKD to the AI so I could query it.

PrivateGPT is free. All I had to do was install it. I’ve spent days trying to install the dang program. The common wisdom is Python is the easiest programming language to learn right now. That might be true. But installing a Python program with all its libraries and dependencies is a nightmare. What I quickly learned is that distributing and installing a Python program is an endless dumpster fire. I have Anaconda, Python 3.11, Visual Studio Code, Git, Docker, and Pip installed on three computers, Windows, Mac, and Linux, and I’ve yet to get anything to work consistently. I haven’t even gotten to the part where I’d need the Poetry tool. I can run Python code under plain Python and Anaconda and set up virtual environments on each. But I can’t get VS Code to recognize those virtual environments no matter what I do.

Now I don’t need VS Code at all, but it’s so nice and universal that I felt I must get it going. VS Code is so cool looking, and it feels like it could control a jumbo jet. I’ve spent hours trying to get it working with the custom environments Conda created. There’s just some conceptual configuration I’m missing. I’ve tried it on Windows, Mac, and Linux just in case it’s a messed-up configuration on a particular machine. But they all fail in the same way.

I decided I needed to give up on using VS Code with Conda commands. If I continue, I’ll just use the Anaconda prompt terminal on Windows, or the terminal on Mac or Linux.
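If anyone else is fighting the same battle, one simple diagnostic helps frame the problem: ask Python itself which interpreter and environment it is actually running. This is only a sketch of a sanity check, not a fix, and the environment name in the comment is hypothetical, but running it in VS Code’s integrated terminal and again in the Anaconda prompt shows whether the two are launching the same Python.

```python
# Sanity check: which Python is actually running? Compare these paths
# against the Conda environment you think is active, e.g.
# ~/anaconda3/envs/privategpt (the name here is only an example).
import os
import sys

print("Interpreter:", sys.executable)  # full path to the python binary
print("Environment:", sys.prefix)      # root of the active environment
print("Conda env:", os.environ.get("CONDA_DEFAULT_ENV", "<not set>"))
```

If sys.prefix points at the base Anaconda install instead of the environment you created, then VS Code is launching the wrong interpreter, which is what its “Python: Select Interpreter” command is for.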

However, days of banging my head against a wall trying to use AI might have taught me something. Whenever I think of creating a program, I think of something that will help me organize my thoughts and research what I read. I might end up spending a year just to get PrivateGPT trained on reading and understanding articles and dissertations on Philip K. Dick. Maybe it would be easier if I just read and processed the documents myself. I thought an AI would save me time, but it requires learning a whole new specialization. And if I did that, I might just end up becoming a programmer again, rather than an essayist.

This got me thinking about a minimalistic programming paradigm. This was partly inspired by seeing the video “The Unreasonable Effectiveness of Plain Text.”

Basically, this video advocates doing everything in plain text, using the Markdown format. That’s the default format of Obsidian, a note-taking program.

It might save me a lot of time to just read the six volumes of PKD’s letters and take notes, rather than try to teach a computer how to read those volumes and understand my queries. I’m not even sure I could train PrivateGPT to become a literary researcher.
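To show how little machinery the plain-text approach needs, here is roughly what querying a folder of notes looks like without any AI. The folder, filenames, and note text below are invented for illustration; the point is that a few lines of ordinary Python can search a vault of Markdown notes.

```python
# Sketch: search a folder of Markdown notes for a phrase, the no-AI
# alternative to training PrivateGPT. The vault and notes here are
# hypothetical stand-ins created just for the demo.
from pathlib import Path
import tempfile

# Stand-in for an Obsidian vault: two throwaway notes in a temp folder.
vault = Path(tempfile.mkdtemp())
(vault / "pkd-letters-v1.md").write_text(
    "# PKD Letters, Vol. 1\nNotes on the early letters of Philip K. Dick.\n")
(vault / "reading-log.md").write_text(
    "# Reading Log\nFinished a biography this week.\n")

def search_notes(folder: Path, phrase: str) -> list[str]:
    """Return the names of Markdown files whose text contains phrase."""
    return sorted(p.name for p in folder.glob("*.md")
                  if phrase.lower() in p.read_text().lower())

print(search_notes(vault, "philip k. dick"))  # -> ['pkd-letters-v1.md']
```

It’s crude compared to a language model, but there’s nothing to install, nothing to train, and nothing that hallucinates.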

Visual Studio Code is loved because it does so much for the programmer. It’s full of artificial intelligence. And more AI is being added every day. Plus, it’s supposed to work with other brilliant programming tools. But using those tools and getting them to cooperate with each other is befuddling my brain.

This frustrating week has shown me I’m not smart enough to use smart tools. This reminds me of a classic science fiction short story by Poul Anderson, “The Man Who Came Early.” It’s about a 20th century man who is thrown back in time to the Vikings, around the year 1000 AD. He thinks he will be useful to the people of that time because he can invent all kinds of marvels. What he learns is that he doesn’t even know how to make the tools to make the tools that made the tools he was used to in the 20th century.

I can use a basic text editor and compiler, but my aging brain just can’t handle more advanced modern programming tools, especially if they’re full of AI.

I need to solve my data processing needs with basic tools. But I also realized something else. My real goal was to process information about Philip K. Dick and write a summarizing essay. Even if I took a year and wrote an AI essay writing program, it would only teach me a whole lot about programming, and not about Philip K. Dick or writing essays.

What I really want is for me to be more intelligent, not my computer.

JWH

Are Computers Making It Too Easy for Us?

by James Wallace Harris, 11/24/23

Last night I watched two videos on YouTube that reviewed the Seestar S50 “smart telescope.” It’s an amazing $499 go-to telescope that does astrophotography automatically. It works in conjunction with your smartphone. You take the telescope outside and set it up level, then use your smartphone to tell it what astronomical object to photograph, and it does everything else. You can go back inside and monitor the Seestar S50 by smartphone.

But does it make astrophotography too easy? The reviewer mentions that question and says no. But I know if I bought the Seestar S50 I would play with it a couple of times and then leave it in a closet. (Unless I felt challenged to find ways to push the device to its limits.)

A couple of decades ago I wanted to get into digital astrophotography. I even bought a $60 how-to book. At the time, it was both too expensive and too difficult for me. The learning curve was extremely high. I had a cheap 120mm refractor that was fun to look through, but a bitch to carry around and set up. And it didn’t have the mount to handle photography. And except for the Moon, Venus, Mars, Jupiter, and Saturn, I needed to drive an hour out of town to the astronomy club’s viewing site to see deep sky stuff. I eventually gave my telescope to a lady who wanted to get into astronomy. I lost interest in what I could see with just my eyes. The next step was photography, and it was too big a step to take at the time.

After retiring and getting older, I’ve thought about getting another telescope, but a smaller one. After having hernia surgery, I don’t want to risk picking up heavy stuff. The Seestar S50 would be light enough, and cheap enough. And it takes better photographs than what I fantasized about doing twenty years ago.

Astronomy is a deceptive hobby. You see the great astrophotography in Sky & Telescope and think that’s what you’ll see when you look through a telescope. It’s not. Even with expensive scopes, deep sky objects are just patches of fuzzy gray blobs of light in the eyepiece. Cameras, both film and digital, gather greater amounts of light by making time exposures, sometimes hours long. What the Seestar S50 does is take a series of ten second exposures that build up the image over time. The longer you spend photographing an object the better it looks. Watch both videos to see what I mean.
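The stacking trick can be sketched in a few lines of Python. This is a toy simulation with made-up numbers, not real image processing (actual stacking software also aligns and calibrates the frames), but it shows why averaging many noisy ten-second exposures recovers a signal that any single frame buries in noise.

```python
# Toy stacking demo: average many noisy short exposures.
# All numbers are invented; real stacking software also aligns
# and calibrates frames, which this sketch ignores.
import random

random.seed(42)              # make the demo repeatable

TRUE_BRIGHTNESS = 100.0      # the signal we are trying to recover
NOISE = 20.0                 # per-frame random noise (standard deviation)

def take_frame():
    """One simulated ten-second exposure: signal plus random noise."""
    return TRUE_BRIGHTNESS + random.gauss(0, NOISE)

def stack(n_frames):
    """Average n short exposures, as a live-stacking telescope does."""
    return sum(take_frame() for _ in range(n_frames)) / n_frames

print(f"single frame: {take_frame():.1f}")  # can be far from 100
print(f"stacked 360:  {stack(360):.1f}")    # lands close to 100
```

The noise in the average shrinks roughly with the square root of the number of frames, which is why the picture keeps improving the longer the Seestar keeps shooting.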

What you see on your smartphone using the Seestar S50 is way more than what you see looking through an eyepiece. And real astronomers seldom look through eyepieces. However, is looking at your iPhone really what you want?

In this second video, we see how traditional digital astrophotography is done. It involves a lot of equipment and software. It’s a skill that takes time to master, but the results are stunning.

The 80mm APO looks so good I can’t help but think the guy is fooling us with a photo from the Hubble telescope.

What is the goal here? To have a photograph of something in the sky you claim to have taken? The Seestar S50 will do that. But what did you do? Paid $499. Isn’t the real goal to learn how to take an astrophotograph by learning how it’s done? Doesn’t it also involve the desire to know how to find objects in the night sky? Isn’t what we really want to know how to do something, and do it well?

Computers are starting to do everything for us. And by adding AI, it will soon be possible to do a lot of complex tasks by just asking a computer. People now create beautiful digital art by assembling keywords into a prompt.

I know it’s impossible to turn back progress. I wouldn’t want to give up computers, but I’m not sure I want computers to do everything for me. Of course, everyone is different. Some people will be happy to have a computer do the entire job, while other people will take pleasure in doing something entirely by themselves. I don’t mind using a computer with word processing to write an essay, but I wouldn’t want the computer to write the essay for me.

I’m already seeing people give up their smartphones for dumb phones. I know people who have taken up drawing, painting, or water coloring by hand rather than use a computer art program.

I wonder if society will eventually reject computers. AI might push us over the limit. We could draw the limit at AI. Or we could draw the limit at an earlier stage of computer development. What if we gave up the internet too? Or set the clock back to 1983 before the Macintosh made graphical interfaces what everyone wanted. What if we limited computer technology to IBM AT personal computers, IBM 370 mainframes, and VAX 11 minicomputers? Humans had to work harder and know more to use that level of technology. But wasn’t using those old machines a lot more fun?

I don’t think we will turn off technological progress. In a few years, I expect a Seestar S80 that does everything that guy could do with his $5,000 rig, costs $399, and is even easier to use. And in ten years people will have robots with eyes like telescopes, and if you want a photograph of M31, you’ll just say to your robot, “Robbie, go take a picture of M31 for me.”

JWH