I Gleaned Two Useful Bits of Wisdom from YouTube This Morning

by James Wallace Harris, 3/18/24

The first insight applies to internet addiction. I constantly check several apps on my iPhone all day, and regularly browse YouTube on my television. It’s gotten to be a terrible habit, even though it’s so satisfying.

The first video drew an analogy between rats and internet use. If you give a caged rat a button that dispenses a food pellet every time it's pressed, the rat will eat its fill and then stop pressing. But if the button dispenses a pellet only at random, the rat will press it constantly. We check the internet the same way: hoping for a reward, and because we don't always find one, we keep checking. I believe that describes my internet habit.

I’m going to take his advice and set aside a limited time to enjoy browsing. The rest of the time, I’ll only use the internet when I know I want something specific.

The second piece of advice is about To-Do lists. The guy on the video said if your To-Do list is too long, you’ll avoid using it. And that’s true for me. I use the same To-Do list app he uses, Todoist. So, I went and rescheduled most of my tasks for the future, and just left five on the main page. I might even reduce it to three. Or even one. I want to try extremely hard to get more things done, even if it’s only one thing a day.

It’s ironic that I found these two insights, both perfect for me, by browsing. I think it’s important to do some internet browsing, but I was like a rat in a cage always pushing the button hoping that I’d get a reward. There just aren’t that many truly significant rewards to be had on the internet every day.

I hope I can apply these two insights and stick to using them. I might even add them to my habit tracker. Since I started using it, I’ve been doing seven core habits for 151 days straight.

JWH

A Painful Challenge to My Ego

James Wallace Harris, 2/26/24

I’m hitting a new cognitive barrier that stops me cold. It’s making me doubt myself. I’ve been watching several YouTubers report on the latest news in artificial intelligence, and I’ve been amazed by their ability to understand and summarize a great amount of complex information. I want to understand the same information and summarize it too, but I can’t. Struggling to do so wounds my ego.

This experience is forcing me to contemplate my decaying cognitive abilities. I had a similar shock ten years ago when I retired. I was sixty-two and training a woman in her twenties to take over my job. She blew my mind by absorbing the information I gave her as fast as I could tell her. One reason I chose to retire early is that I couldn’t learn the new programming language, framework, and IDE that our IT department was making standard. That young woman was learning my servers and old programs, in a language she didn’t know, at a speed that shocked and awed me. Even then, my ego sensed something was up: it was obvious this young woman could think several times faster than I could. I realized that’s what getting old meant.

I feel like a little aquarium fish that keeps bumping into an invisible barrier. My Zen realization is I’ve been put in a smaller tank. I need to map the territory and learn how to live with my new limitations. Of course, my ego still wants to maximize what I can do within those limits.

I remember as my mother got older, my sister and I had to decide when and where she could drive because she wouldn’t limit herself for her own safety. Eventually, my sister and I had to take her car away. I’m starting to realize that I can’t write about certain ideas because I can’t comprehend them. Will I always have the self-awareness to know what I can comprehend and what I can’t?

This makes me think of Joe Biden and Donald Trump. Both are older than I am. Does Biden realize what he’s forgotten? Does Trump even understand he can’t possibly know everything he thinks he knows? Neither guy wants to give up because of their egos.

So, what am I not seeing about myself? I’m reminded of Charlie Gordon in the story “Flowers for Algernon,” when Charlie was in his intellectual decline phase.

Are there tools we could use to measure our own decline? Well, that’s a topic for another essay, but I believe blogging might be one such tool.

JWH

ChatGPT Isn’t an Artificial Intelligence (AI) But an Artificial Unconsciousness (AU)

by James Wallace Harris, 2/12/24

This essay is for anyone who wants to understand themselves and how creativity works. What I’m about to say will make more sense if you’ve played with ChatGPT or have some understanding of recent AI programs in the news. Those programs appear to be amazingly creative by answering ordinary questions, passing tests that lawyers, mathematicians, and doctors take, generating poems and pictures, and even creating music and videos. They often appear to have human intelligence even though they are criticized for making stupid mistakes — but then so do humans.

We generally think of our unconscious minds as mental processes occurring automatically below the surface of our conscious minds, out of our control. We believe our unconscious minds are neural functions that influence thought, feelings, desires, skills, perceptions, and reactions. Personally, I assume feelings, emotions, and desires come from an even deeper place, are based on hormones, and are unrelated to unconscious intelligence.

It occurred to me that ChatGPT and other large language models are analogs for the unconscious mind, and this made me observe my own thoughts more closely. I don’t believe in free will. I don’t even believe I’m writing this essay. The keyword here is “I” and how we use it. If we use “I” to refer to our whole mind and body, then I’m writing the essay. But if we think of the “I” as the observer of reality that comes into being when I’m awake, then probably not. You might object to this strongly because our sense of I-ness feels obviously in full control of the whole shebang.

But what if our unconscious minds are like AI programs, what would that mean? Those AI programs train on billions of pieces of data, taking a long time to learn. But then, don’t children do something similar? You work an AI program by prompting it with a question. If you play a game of Wordle, aren’t you prompting your unconscious mind? Could you write a step-by-step flow chart of how you solve a Wordle game consciously? Don’t your hunches just pop into your mind?

If our unconscious minds are like ChatGPT, then we can improve them by feeding them more data and giving them better prompts. Isn’t that what we do when studying and taking tests? Computer scientists are working hard to improve their AI models. They give their models more data and refine their prompts. If they want their model to write computer programs, they train their models on more computer languages and programs. If we want to become an architect, we train our minds with data related to architecture. (I must wonder about my unconscious mind; it’s been trained on decades of reading science fiction.)

This would also explain why you can’t easily change another person’s mind. Training takes a long time. The unconscious mind doesn’t respond to immediate logic. If you’ve trained your mental model all your life on The Bible or investing money, it won’t be influenced immediately by new facts regarding science or economics.

We live by the illusion that we’re teaching the “I” function of our mind, the observer, the watcher, but what we’re really doing is training our unconscious mind like computer scientists train their AI models. We might even fool ourselves that free will exists because we believe the “I” is choosing the data and prompts. But is that true? What if the unconscious mind tells the “I” what to study? What to create? If the observer exists separate from intelligence, then we don’t have free will. But how could ChatGPT have free will? Humans created it, deciding on the training data, and the prompts. Are our unconscious minds creating artificial unconscious minds? Maybe nothing has free will, and everything is interrelated.

If you’ve ever practiced meditation, you’ll know that you can watch your thoughts. That’s proof the observer is separate from thinking. Twice in my life I’ve lost the ability to use words and language, once in 1970 because of a large dose of LSD, and about a decade ago with a TIA. In both events I observed the world around me without words coming to mind. I just looked at things and acted on conditioned reflexes. That let me experience a state of consciousness with low intelligence, one like animals know. I now wonder if I was cut off from my unconscious mind. And if that’s true, it implies language and thoughts come from the unconscious mind, not from what we call conscious awareness. That the observer and intelligence are separate functions of the mind.

We can get ChatGPT to write an essay for us, and it has no awareness of its actions. We use our senses to create a virtual reality in our head, an umwelt, which gives us a sensation that we’re observing reality and interacting with it, but we’re really interacting with a model of reality. I call this function that observes our model of reality the watcher. But what if our thoughts are separate from this viewer, this watcher?

If we think of large language models as analogs for the unconscious mind, then everything we do in daily life is training for our mental model. Then does the conscious mind stand in for the prompt creator? I’m on the fence about this. Sometimes the unconscious mind generates its own prompts, sometimes prompts are pushed onto us from everyday life, but maybe, just maybe, we occasionally prompt our unconscious mind consciously. Would that be free will?

When I write an essay, I have a brain function that works like ChatGPT. It generates text but as it comes into my conscious mind it feels like I, the viewer, created it. That’s an illusion. The watcher takes credit.

Over the past year or two I’ve noticed that my dreams are acquiring the elements of fiction writing. I think that’s because I’ve been working harder at understanding fiction. Like ChatGPT, we’re always training our mental model.

Last night I dreamed a murder mystery involving killing someone with nitrogen. For years I’ve heard about people committing suicide with nitrogen, and then a few weeks ago Alabama executed a man using nitrogen. My wife and I have been watching two episodes of Perry Mason each evening before bed. I think the ChatGPT feature in my brain took all that in and generated that dream.

I have a condition called aphantasia, which means I don’t consciously create mental pictures. However, I do create imagery in dreams, and sometimes when I’m drowsy, imagery, and even dream fragments, float into my conscious mind. It’s like my unconscious mind is leaking into the conscious mind. I know these images and thoughts aren’t part of conscious thinking. But the watcher can observe them.

If you’ve ever played with the AI program Midjourney that creates artistic images, you know that it often creates weirdness, like three-armed people, or hands with seven fingers. Dreams often have such mistakes.

When AIs produce fictional results, the computer scientists say the AI is hallucinating. If you pay close attention to people, you’ll know we all live by many delusions. I believe programs like ChatGPT mimic humans in more ways than we expected.

I don’t think science is anywhere close to explaining how the brain produces the observer, that sense of I-ness, but science is getting much closer to understanding how intelligence works. Computer scientists say they aren’t there yet, and are still aiming for AGI, or artificial general intelligence. They keep moving the goal. What they really want are computers much smarter than humans that don’t make mistakes, which don’t hallucinate. I don’t know if computer scientists care whether computers have awareness like our internal watchers, that sense of I-ness. Sentient computers are something different.

I think what they’ve discovered is intelligence isn’t conscious. If you talk to famous artists, writers, and musicians, they will often talk about their muses. They’ve known for centuries their creativity isn’t conscious.

All this makes me think about changing how I train my model. What if I stopped reading science fiction and only read nonfiction? What if I cut out all forms of fiction including television and movies? Would it change my personality? Would I choose different prompts seeking different forms of output? If I do, wouldn’t that be my unconscious mind prompting me to do so?

This makes me ask: If I watched only Fox News would I become a Trump supporter? How long would it take? Back in the Sixties there was a catch phrase, “You are what you eat.” Then I learned a computer acronym, GIGO — “Garbage In, Garbage Out.” Could we say free will exists if we control the data we use to train our unconscious minds?

JWH

How Anne Got Phished and What We Should Learn from Her Experience

by James Wallace Harris, 2/2/24

My friend Anne called me the other day terribly upset. Her bank had just called her to say her account had been hacked. She was worried that her computer was the tool of the hackers and wanted me to look at it. Anne was freaked out and called me because I’m her computer guy.

The first thing I asked her was, “How do you know it was the bank who called you?” She said the bank’s name and phone number came up on her phone. I told her she needed to call her bank and confirm that. I told her the bad guys can pretend to be anyone. Anne said she would do that immediately.

When I didn’t hear from her for a couple of hours, I called her. A man answered. I didn’t think it was her husband but asked for Anne. A woman got on the line I didn’t know. I again asked for Anne. She said she was Anne. I was suspicious, so I asked this time giving Anne’s full name. She said, “Yes, that’s me.” I said, “No, you’re not.” and hung up.

I couldn’t call Anne, but I thought a text might get through. I texted “Call me right now.” The real Anne called me. I told her what happened. She said she’d been on the phone with her bank for hours and she had been phished. They stole $3500. I told her she needed to call her phone company immediately. “Tell them your calls are being redirected.” A couple of hours later, she called back to say her call forwarding had been set to another number and the phone company had turned that off. Anne said the phone company couldn’t help her anymore and would notify their security people, but it would take a few days.

My guess is the phishers had gotten ahold of hacked data from Anne’s bank, so they knew a lot about her, enough to convince her they were the bank. The phishers then conned Anne into giving them more information. Then they rigged her phone to forward her calls to them, which let them answer the bank’s verification calls and confirm any transfer requests themselves. That’s very clever.

Anne knew she had been duped, and it made her feel stupid. Anne is no dummy. She has two undergraduate and two master’s degrees. But we want to trust people, especially banks. We trust banks with our money, so we want to believe they’re dependable.

Anne brought over her laptop for me to check. It didn’t seem to have any malware or viruses, but Google would not work in Chrome. I couldn’t change anything on the computer because an IT department had it locked down. I don’t know whether it was a coincidence that Google stopped working, or whether the phishers had somehow jammed Chrome and Google without needing administrative rights. They wouldn’t have needed to hack her computer to steal the money, but it might have helped them by keeping Anne from searching for help.

Anne was still upset, frequently crying, and embarrassed by this event. Her bank had immediately replaced the money, but Anne was still afraid something else was going to happen. She’s so afraid that she’s changed banks and is doing everything she can to protect herself. She cried off and on for days. At first, she didn’t want me to tell anyone because she was embarrassed about being conned. But I said she should tell everyone she knew to help other people avoid getting phished too. That’s when she said I could blog about her.

Now I’m worried. I’m thinking about all the things people should do to protect their identity and money. Once I started thinking about it, I realized the problem is immense. What should I do to be more proactive? We generally think of “identity theft” in terms of people, but phishers also steal the identity of banks. Solving phishing would require perfect identification of both people and corporations. But since nearly everything happens online today, it’s easy to spoof both kinds of identity.

An antivirus program won’t protect you from this kind of theft, although the best ones try. Norton has a nice tips page, “How to protect against phishing: 18 tips for spotting a scam.” Its focus is on phishing emails because that’s what their software can deal with, but you also need to consider phone calls or even people coming to your door.

There are also all kinds of anti-fraud services for credit cards, but I don’t know enough about them yet. AARP has a whole website devoted to “Scams & Fraud.” It even has an article, “Bank Impersonation Is the Most Common Text Scam.” It makes me want to join AARP, but I wonder about trusting a company that has so many ads and popups.

Remember the old days when you had to go to the bank in person? And the bank was a big, impressive building? The digital world is both insubstantial and so damn shady. Since I read a lot of science fiction, I think of what the future might bring to solve phishing and identity theft.

The core problem is verification of identity. Right now, thieves can be you with just a username and password. Hell, my iPhone needs facial identification before it will talk to me, so why don’t banks want better verification? You’d think banks would want two or three kinds of biometric proof of your identity before they transfer any of your money. But then, how do you verify your bank is your bank?

Another thing that worries me is the number of companies that have my credit card on file, or my bank account routing number. I hear about big companies getting hacked all the time. Maybe there should be a law against storing such financial information, or even personal information. It would be a pain in the ass if I had to fill out all my information every time I ordered an ebook from Amazon, but it might be worth it. PayPal is one solution to hide credit card information.

Just a bit of searching the internet on how to protect myself from fraud reveals it could be a subject worthy of a college major. Right now, banks and stores cover digital theft, but will that always be true? Insurance companies that insure homes are going out of business in some states because of too many natural disasters. Some retail chains are closing stores in areas where there’s too much “shrinkage” in their inventories. So, I can imagine banks going bankrupt or refusing some types of customers.

Right now, banks are making more money by laying off human tellers and using online systems. They probably save enough money downsizing buildings to web servers even with the cost of covering phishing theft. But at some point, they will decide that the cost is too high. I think the reason many people want to elect Donald Trump again is because they secretly want more of a police state. They’re tired of all the crime and cons. One way to solve it is to use computers and the internet. Americans never wanted national identity cards, but what will they think of being chipped like a dog? Things could get very weird in the future. If we really knew the absolute identity of every person and their location, it would solve a lot of crimes, but what would it mean to personal freedom?

Anne just called me. She’s learning. She got a phishing attempt in her email. She called to see if I thought it was the bank phishers. I didn’t think so. I told her that The International Guild of Phishers kept a dummies list on the Dark Web to share with each other.

At least she laughed at that.

JWH

I’m Too Dumb to Use Artificial Intelligence

by James Wallace Harris, 1/19/24

I haven’t done any programming since I retired. Before I retired, I assumed I’d do programming for fun, but I never found a reason to write a program over the last ten years. Then, this week, I saw a YouTube video about PrivateGPT that would allow me to train an AI to read my own documents (.pdf, .docx, .txt, .epub). At the time I was researching Philip K. Dick, and I was overwhelmed by the amount of content I was finding about the writer. So, this light bulb went off in my head. Why not use AI to help me read and research Philip K. Dick? I really wanted to feed the six volumes of collected letters of PKD to the AI so I could query it.

PrivateGPT is free. All I had to do was install it. I’ve spent days trying to install the dang program. The common wisdom is Python is the easiest programming language to learn right now. That might be true. But installing a Python program with all its libraries and dependencies is a nightmare. What I quickly learned is that distributing and installing a Python program is an endless dumpster fire. I have Anaconda, Python 3.11, Visual Studio Code, Git, Docker, and Pip installed on three computers — Windows, Mac, and Linux — and I’ve yet to get anything to work consistently. I haven’t even gotten to the part where I’d need the Poetry tool. I can run Python code under plain Python and Anaconda and set up virtual environments on each. But I can’t get VS Code to recognize those virtual environments no matter what I do.

Now I don’t need VS Code at all, but it’s so nice and universal that I felt I must get it going. VS Code is so cool looking, and it feels like it could control a jumbo jet. I’ve spent hours trying to get it working with the custom environments Conda created. There’s just some conceptual configuration I’m missing. I’ve tried it on Windows, Mac, and Linux just in case it’s a messed-up configuration on a particular machine. But they all fail in the same way.

I decided I needed to give up on using VS Code with Conda commands. If I continue, I’ll just use the Anaconda prompt terminal on Windows, or the terminal on Mac or Linux.
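Working from a plain terminal looks roughly like this. A minimal sketch using Python’s built-in venv module as a stand-in for conda (the environment name pkd-env is my invention; with Anaconda the equivalent would be `conda create -n pkd-env python=3.11` and `conda activate pkd-env`):

```shell
# Create an isolated environment in the current folder
python3 -m venv pkd-env

# Activate it (on Windows: pkd-env\Scripts\activate)
. pkd-env/bin/activate

# `python` now resolves to the environment's own interpreter --
# this path is what VS Code needs when you run "Python: Select Interpreter"
which python
```

The general point is the same either way: the editor doesn’t magically see an environment; it has to be pointed at the environment’s own interpreter.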

However, the days of banging my head against a wall trying to use AI might have taught me something. Whenever I think of creating a program, I think of something that will help me organize my thoughts and research what I read. I might end up spending a year just getting PrivateGPT trained on reading and understanding articles and dissertations on Philip K. Dick. Maybe it would be easier if I just read and processed the documents myself. I thought an AI would save me time, but it requires learning a whole new specialization. And if I did that, I might just end up becoming a programmer again, rather than an essayist.

This got me thinking about a minimalistic programming paradigm. This was partly inspired by seeing the video “The Unreasonable Effectiveness of Plain Text.”

Basically, this video advocates doing everything in plain text, using the Markdown format. That’s the default format of Obsidian, a note-taking program.

It might save me a lot of time to just read the six volumes of PKD’s letters and take notes, rather than trying to teach a computer how to read those volumes and understand my queries. I’m not even sure I could train PrivateGPT to become a literary researcher.
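The plain-text approach is also easy to prototype with basic tools. Here’s a minimal sketch, with made-up note files standing in for reading notes on PKD’s letters, of the kind of keyword search that a folder of Markdown notes supports with no AI at all:

```python
from pathlib import Path

def search_notes(folder: str, keyword: str) -> list[tuple[str, str]]:
    """Return (note name, matching line) pairs for every .md note
    in `folder` whose text mentions `keyword`, case-insensitively."""
    hits = []
    for note in sorted(Path(folder).glob("*.md")):
        for line in note.read_text(encoding="utf-8").splitlines():
            if keyword.lower() in line.lower():
                hits.append((note.name, line.strip()))
    return hits

# Hypothetical notes -- the file names and contents are invented for illustration
Path("notes").mkdir(exist_ok=True)
Path("notes/letters-vol1.md").write_text(
    "1972: PKD writes about the break-in at his house.\n", encoding="utf-8")
Path("notes/letters-vol2.md").write_text(
    "1974: the 2-3-74 visionary experiences begin.\n", encoding="utf-8")

print(search_notes("notes", "pkd"))
# → [('letters-vol1.md', '1972: PKD writes about the break-in at his house.')]
```

A dozen lines of standard-library Python won’t answer questions the way a trained model might, but it never needs installing, and the notes stay readable forever.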

Visual Studio Code is loved because it does so much for the programmer. It’s full of artificial intelligence. And more AI is being added every day. Plus, it’s supposed to work with other brilliant programming tools. But using those tools and getting them to cooperate with each other is befuddling my brain.

This frustrating week has shown me I’m not smart enough to use smart tools. This reminds me of a classic science fiction short story by Poul Anderson, “The Man Who Came Early.” It’s about a 20th century man who is thrown back in time to the Vikings, around the year 1000 AD. He thinks he will be useful to the people of that time because he can invent all kinds of marvels. What he learns is that he doesn’t even know how to make the tools needed to make the tools that made the things he was used to in the 20th century.

I can use a basic text editor and compiler, but my aging brain just can’t handle more advanced modern programming tools, especially if they’re full of AI.

I need to solve my data processing needs with basic tools. But I also realized something else. My real goal was to process information about Philip K. Dick and write a summarizing essay. Even if I took a year and wrote an AI essay writing program, it would only teach me a whole lot about programming, and not about Philip K. Dick or writing essays.

What I really want is for me to be more intelligent, not my computer.

JWH