Do We Really Need AGI and ASI? Isn’t AI Good Enough?

by James Wallace Harris, 4/18/26

Tech giants are spending hundreds of billions of dollars in a race to be the first to achieve Artificial General Intelligence (AGI), while also hoping to reach Artificial Superintelligence (ASI) soon after. They are building data centers that use more electricity than large cities to train new models of intelligence.

But do we need machines with more intelligence than all of humanity?

Let’s assume we do want machines to solve our greatest problems. Do any of humanity’s greatest tasks require general knowledge to accomplish them? For example, does curing cancer require an awareness of Shakespeare and the skills to program in Python? Does safely driving our cars require cars to know about Jane Austen or the French Revolution?

Couldn’t we save billions of dollars and terawatt-hours of electricity by building models to solve specific problems? Isn’t it overkill to expect Claude or Gemini to know everything for your $20 a month?

Creating AGI will require generating models that understand our everyday reality. Won’t that lead to self-awareness? And if machines have self-awareness, can we own them? Wouldn’t that be slavery? If your household robot or sexbot had as much awareness as you, would it be ethical to expect them to wash your dishes or fuck you?

Isn’t the drive towards AGI and ASI kind of like playing God? I don’t believe in God, nor do I believe we should become one or create one. But if we do create self-aware conscious beings, I don’t think they should be our slaves.

AI models are benchmarked against an array of tests and skills. Many models now surpass humans on various standardized tests, as well as on tests that measure specialized knowledge in academic fields. Generating models like ChatGPT, Gemini, or Claude requires massive resources, resources that are straining the economy and infrastructure.

Are these efforts really needed, or is it just ego and greed run amok? Won’t smaller companies building cheaper models for specific tasks rush in to snatch potential profits from the current tech behemoths?

And once we generate the models that do what we need, will we still need all those giant data centers that generated them? For example, if we generate AI models that read medical scans better than all the radiologists in the world, models that can be installed on a $50,000 standalone machine, who will garner the profits? Will it be OpenAI or Anthropic?

Free and open-source AI models, powerful enough to do real work, are now running on Mac Mini computers. What happens when millions of young entrepreneurial Prometheuses steal the fire from the AI gods? I don’t think they will need AGI to succeed.

Isn’t the race to AGI an insane distraction? Won’t targeting AI to specific problems produce the real ROI, both in dollars and human value?

JWH

ABUNDANCE by Ezra Klein and Derek Thompson

by James Wallace Harris, 4/17/26

When I bought Abundance by Ezra Klein and Derek Thompson, I assumed it would be about creating a post-scarcity society. Instead, it’s about supply-side progressivism. A post-scarcity society was a concept created by futurists and embraced by science fiction writers. It’s based on the idea that technology could produce such a surplus of everything that it would invalidate capitalism. It turns out supply-side progressivism (or the abundance movement) is related, but it’s a smaller subset of post-scarcity thinking.

The book Abundance originated with an essay by Klein in The New York Times and an essay by Thompson in The Atlantic. Before buying the book, I suggest reading those two essays and the Wikipedia entry. If you still feel the need to dive deeper into this subject, the book is where to go. 40% of my Kindle edition is references and index. Klein and Thompson have done a massive amount of research.

Basically, Klein and Thompson are liberals attacking the government for too much regulation, and telling liberals that some of those laws designed to help people for liberal reasons are now hurting people that liberals also want to help.

The two cases Klein and Thompson focus on are finding homes for the homeless and for people who can’t afford one, and making healthcare more affordable. They go into great detail about how zoning laws are keeping us from solving the housing problem. The second focus is on how the federal government is now stifling innovation.

I agree that zoning laws keep us from solving housing problems, but I don’t think undoing those laws is possible or the full solution. I thought San Francisco was the wrong city to analyze, and considered Houston an unfair counter-example. San Francisco’s growth is limited by geography, and Houston has endless sprawl, so zoning may not be the defining factor.

I believe wealth and greed control zoning laws, and that’s not going to change. The American tech oligarchs have no trouble quickly building giant data centers, even when they face significant protests. I don’t think asking average Americans who are NIMBYs to become YIMBYs is a fair request. Or one that will bring about change.

I found their story of Katalin Karikó far more fascinating. I especially recommend chapters 4 and 5, “Invent” and “Deploy.”

Karikó spent years submitting research proposals to study mRNA, which were routinely rejected because the people who decided who received research grants didn’t think mRNA was worth studying. Yet, years later, her research allowed governments and pharmaceutical companies to develop Covid vaccines within one year, even though it normally takes many years to develop a new vaccine.

Klein and Thompson praise the quick development of the mRNA vaccine under the Trump administration and wonder why Trump never took credit for it. They guess that Trump didn’t want to promote a huge success for big government, or a success for vaccines, to his anti-government, anti-vax followers. They do recommend the book Warp Speed: Inside the Operation That Beat COVID, the Critics, and the Odds by Paul Mango. It shows how government can create abundance when the need arises.

Klein and Thompson show how the federal government wastes huge amounts of money on scientific research through its current procedures and often backs the wrong research. They give a history of how the federal government was successful in the past but is now confined by policies and regulations.

Modern liberal politics is made possible by invention. Almost every product or service that liberals seek to make universal today depends on technology that did not exist three lifetimes ago—or, in some cases, half a lifetime ago. Medicare and Medicaid guarantee the elderly and poor access to modern hospitals, where many essential technologies—such as plastic IV bags, MRI and CT scan machines, and pulse oximeters—are inventions of the last sixty years. It is tempting to say that, with these essentials already in existence, it is time for society to focus at last only on the fair distribution of existing resources rather than the creation of new ideas. But this would be worse than a failure of imagination; it would be a kind of generational theft. When we claim the world cannot improve, we are stealing from the future something invaluable, which is the possibility of progress. Without that possibility, progressive politics is dead. Politics itself becomes a mere smash-and-grab war over scarce goods, where one man’s win implies another man’s loss.

The world is filled with problems we cannot solve without more invention. In the fight against climate change, the clean energy revolution will require building out the renewable energy that we have already developed. But decarbonization will also require technology that doesn’t exist yet at scale: clean jet fuel, less carbon-intensive ways to manufacture cement, and machines to remove millions of tons of carbon from the atmosphere.

In health care, the last few centuries of invention have turned a death planet—where disease ran rampant and, before 1850, one in two babies perished before their sixteenth birthday—into a world where people can look forward to generation-over-generation increases in life expectancy. But there are still so many mysteries that require fresh breakthroughs. We’ve made disappointingly little progress with many cancers. Complex diseases like Alzheimer’s and schizophrenia elude treatment or even basic comprehension. The cellular process of aging is a deep mystery. We still don’t have effective vaccines for adult tuberculosis or hepatitis C, or vaccine platforms that we can immediately scale up in the event of a new pandemic. Decades from now, our children may gawk in horror that people with chronic pain or lingering illness in the early twenty-first century couldn’t take a simple all-purpose saliva or blood test to answer the basic question Why do I feel sick? If disease is a universe of mysteries, we have scarcely explored one minor solar system of its cosmos.

Inventions that may seem outlandish today may soon feel essential to our lives. Streets filled with electric self-driving cars that give us mobility without emissions and free us from the vast number of deaths caused by faulty human reflexes or judgment. Gigantic desalination facilities that transform our oceans into drinkable tap water. An economy with robots that build our houses and machines that take on our most dangerous and soul-draining work. Wearable devices to scan our bodies for diseases. Vaccines that we can rub on our skin rather than inject at the end of a needle. As unrealistic, or even ludicrous, as some of these ideas might seem, they are not much more ludicrous than a rejected, ignored, and unfunded mRNA theory that came out of nowhere to save millions of lives in a pandemic. To make these things possible and useful in our lifetime requires a political movement that takes invention more seriously.16

So, where is that movement? Invention rarely plays a central role in American politics. In health care, for example, Democrats have spent decades fighting for universal insurance, while Republicans have consistently fought its expansion. But while the dominant fight in Washington is typically about how we buy health care, we rarely talk about the health care that exists to be bought. After all, in the future, progressives don’t just want everyone to have an insurance card; they want that card to provide access to a world of treatments that liberates patients from unnecessary disease and debilitating pain. Technology expands the value of universalist policies.

If progressives underrate the centrality of invention in their politics, conservatives often underrate the necessity of government policy in invention. “The government has outlawed technology,” the investor and entrepreneur Peter Thiel said in a debate with Google CEO Eric Schmidt in 2014, echoing a popular view among techno-optimists and libertarians that government laws mostly block innovation. But many of Silicon Valley’s most important achievements have relied on government largesse. Elon Musk is now a vociferous critic of progressive policy. But he has also been a beneficiary of it. In 2010, when Tesla needed cash to launch its first family-friendly sedan, the Model S, the company received a $465 million loan from the Obama administration Department of Energy.17 His rocket-launching company, SpaceX, has received billions of dollars from NASA under Democratic and Republican administrations. Musk has become a lightning rod in debates over whether technological progress comes from public policy or private ingenuity. But he is a walking advertisement for what public will and private genius can unlock when they work together.

Beyond merely regulating technology, the state is often a key actor in its creation. An American who microwaves food for breakfast before using a smartphone to order a car to take them to the airport is engaging with a sequence of technologies and systems—the microwave, the smartphone, the highway, the modern jetliner—in which government policies played a starring role in their invention or development. Federal science spending is so fundamental to the overall economy that a 2023 study found that government-funded research and development have been responsible for 25 percent of productivity growth in the US since the end of World War II.18 “There is widespread agreement that scientific research and invention are the key driver of economic growth and improvements in human well-being,” the Dartmouth economist Heidi Williams said. “But I think researchers do a poor job of communicating its importance to lawmakers, and lawmakers do a poor job of making science policy a major focus.”19

The pandemic proved the necessity of invention yet again. The mRNA COVID vaccines saved millions of lives and spared the US more than $1 trillion in medical costs.20 But they might have never existed if it weren’t for Karikó’s force of will—and the cosmic luck of an extremely well-placed Xerox machine.

Klein, Ezra; Thompson, Derek. Abundance (pp. 134-137). Avid Reader Press / Simon & Schuster. Kindle Edition.

Ultimately, Abundance brings little hope. I think the book showed too many examples of how we fail to create abundance and why. It thoroughly convinced me that our current politics is evolving in the wrong direction.

Yes, Katalin Karikó and mRNA are shining examples of what’s possible, but one great example does not prove that change will happen. All the other examples Klein and Thompson used were from history, suggesting that Americans will step up to the plate when they face a great challenge, but not in ordinary times.

AI and data centers are a major challenge, and we aren’t stepping up. Please read “How the American Oligarchy Went Hyperscale” by Tim Murphy. Greed drives us. Klein and Thompson even give examples of how monetary prizes have been used to solve problems.

The Tech Bro Oligarchy promises a post-scarcity society with AI, which is the kind I was expecting the book Abundance to be about. But I don’t believe in that kind either. At 74, I doubt the pie-in-the-sky dreams science fiction promises. Just because we live in science-fictional times doesn’t mean they’ll lead to science-fictional futures.

AI-generated abundance will ruin us. Old-fashioned human-generated abundance is possible, but greed will always keep the wealthy from sharing it.

p.s.

This essay was not written with any help from AI. All the ideas are my own. But are they? My ideas come from reading books and magazines. I train my mind on information just like AIs are trained. I’ve cancelled my AI subscriptions. I’m putting that money into buying more books and magazines. Reading Abundance did my mind more good than reading what AI has to say about it. Gemini produced excellent summaries, but they didn’t stick in my mind.

Grinding through the book word by word will not help me remember everything, but I do think it helps me remember more than reading AI summaries. But in the long run, what’s important to remember is that we could live in a saner, more compassionate society.

JWH

Do I Have Any Real Uses For AI?

by James Wallace Harris, 4/16/26

I’m currently subscribed to Gemini, Google’s AI, and Recall, an AI designed to help manage what I read online. I have subscribed to ChatGPT and Midjourney in the past. Both Gemini and Recall have free accounts, if you want to try them. But I’m currently spending $28 a month to get the full features of both. However, I’m not sure I need all this AI power.

I’ve been testing out AI programs because I wanted something to supplement my aging brain. Both Gemini and Recall help me digest and remember what I read. At least that was the hope. Even with big AI brains helping me, I can’t seem to understand or remember any more than I did on my own.

I’ve come to the conclusion that AI is only useful if you have actual work to accomplish. I’m retired. I’m just trying to keep up with current events for personal enrichment. I thought using an AI would help me learn more, but that hasn’t worked out.

Having access to AI is like owning a Ferrari. I can feel all that power at my fingertips, and it’s exciting. The trouble is, my AI needs are so modest that it’s like owning a race car just to drive on neighborhood streets.

I just read Abundance by Ezra Klein and Derek Thompson. I want to review it here, but before I started writing my thoughts, I gathered fifteen reviews and gave them to Recall and Google’s NotebookLM. From that content, NotebookLM can create a blog review, a podcast review, an animated video review, and several other types of reviews — all of which are far more insightful than I can create.

The trouble is, I don’t learn anything using NotebookLM. Writing a review is hard work. Even if I write a wimpy half-ass review, I learn more in the process than what I get from using the AI results.

I’m also trying to write a book about science fiction. Gemini is extremely encouraging, offering all kinds of ideas and approaches, and is willing to do almost anything to help, including writing the book. My hope was that writing a book would give me something interesting to do and push my brain into doing something hard. I thought Gemini and Recall would be tools to organize my research.

And they can organize the chaos out of any amount of data. They are quite impressive. The trouble is, my mind is still disordered. AI insights don’t transfer to my thinking. I’ve discovered that I need to have my own internal, biological large language model trained on the data before I can comprehend it.

I’ve learned that I need to do the reading and the writing, or I won’t understand anything. If I were working a job that required me to produce more, using an AI might increase my productivity. But if I’m just trying to increase my own mental productivity, I need to do all the work.

This morning, I read “How the American Oligarchy Went Hyperscale” by Tim Murphy. It is the best article I’ve read yet on the impact of data centers. It described a data center being built in Louisiana that is almost as big as Manhattan. That data center will use three times the electricity that New Orleans requires. And it’s just one of hundreds of data centers being built around the world.

I have to wonder what will happen to the economy if everyone comes to the same conclusions I have about using AI. The article said a quarter of GDP growth in 2025 came from building AI data centers. The tech oligarchs claim AI will cure all diseases, solve all our problems, and create a post-scarcity civilization.

Do we all need to find tasks for AI to keep this new economy going? Most people worry about AI taking everyone’s job. But will everyone’s job become finding work for AI?

I don’t think I can.

JWH

Audible Has Granted Some of My 2018 Wishes

by James Wallace Harris, 4/13/26

When I joined Audible back in 2002, I wanted to reread (by listening) my favorite science fiction books I had read growing up. Audible did a fantastic job producing hundreds of science fiction novels first published in the 1950s, 1960s, and 1970s. However, there were quite a few I kept waiting to hear. In 2018, I published a list of 67 science fiction books I hoped Audible would produce. I came across that list today. I thought I’d reprint it, annotating which wishes were granted.

Unfortunately, Audible has quietly dropped many old science fiction books from its catalog. On a discussion board today, one member described a visit to Barnes & Noble, where he looked to see how many old science fiction titles that Baby Boomers grew up reading were available for new readers to buy. I’m not sure he would find any of these books in B&N’s science fiction section. I do give Audible credit for keeping a great many mid-20th-century SF titles in print. Luckily, I bought hundreds of old science fiction books on Audible, and they’re still in my library, even if they are no longer for sale.

Here’s my wishlist from 2018. I link to Wikipedia for those titles that are now available at Audible. (It was the best way I could make them stand out.)

  1. A Voyage to Arcturus (1920) by David Lindsay
  2. The World of Null-A (1948) by A. E. Van Vogt
  3. The Voyage of the Space Beagle (1950) by A. E. Van Vogt
  4. The Legion of Time (1952) by Jack Williamson
  5. The Long Loud Silence (1952) by Wilson Tucker
  6. Marooned on Mars (1952) by Lester del Rey
  7. Bring the Jubilee (1953) by Ward Moore
  8. Children of the Atom (1953) by Wilma H. Shiras
  9. A Mirror for Observers (1954) by Edgar Pangborn
  10. Mission of Gravity (1954) by Hal Clement
  11. Cities in Flight (1955) by James Blish
  12. Citizen in Space (1955) by Robert Sheckley
  13. Rocket to Limbo (1957) by Alan E. Nourse
  14. Wasp (1957) by Eric Frank Russell
  15. The Enemy Stars (1958) by Poul Anderson
  16. The Lincoln Hunters (1958) by Wilson Tucker
  17. The Fourth “R” (1959) by George O. Smith
  18. The High Crusade (1960) by Poul Anderson
  19. Hothouse (1962) by Brian W. Aldiss
  20. Second Ending (1962) by James White
  21. Davy (1964) by Edgar Pangborn
  22. Simulacron-3 (1964) by Daniel Galouye
  23. Earthblood (1966) by Keith Laumer and Rosel George Brown
  24. Empire Star (1966) by Samuel R. Delany
  25. The Witches of Karres (1966) by James H. Schmitz
  26. Lords of the Starship (1967) by Mark S. Geston
  27. Camp Concentration (1968) by Thomas Disch
  28. Of Men and Monsters (1968) by William Tenn
  29. Omnivore (1968) by Piers Anthony
  30. Past Master (1968) by R. A. Lafferty
  31. Space Chantey (1968) by R. A. Lafferty
  32. The Last Starship from Earth (1968) by John Boyd
  33. The Still, Small Voice of Trumpets (1968) by Lloyd Biggle, Jr.
  34. Behold the Man (1969) by Michael Moorcock
  35. Bug Jack Barron (1969) by Norman Spinrad
  36. Macroscope (1969) by Piers Anthony
  37. And Chaos Died (1970) by Joanna Russ
  38. The Year of the Quiet Sun (1970) by Wilson Tucker
  39. The Doors of His Face, The Lamps of His Mouth (1971) by Roger Zelazny
  40. The Fifth Head of Cerberus (1972) by Gene Wolfe
  41. The Listeners (1972) by James Gunn
  42. The Continuous Katherine Mortenhoe (The Unsleeping Eye) (1973) by D. G. Compton
  43. The Centauri Device (1974) by M. John Harrison
  44. Orbitsville (1975) by Bob Shaw
  45. The Female Man (1975) by Joanna Russ
  46. The Shockwave Rider (1975) by John Brunner
  47. Trouble on Triton (1976) by Samuel R. Delany
  48. On Wings of Song (1979) by Thomas M. Disch
  49. Riddley Walker (1980) by Russell Hoban
  50. No Enemy But Time (1982) by Michael Bishop
  51. Native Tongue (1984) by Suzette Haden Elgin
  52. Ancient of Days (1985) by Michael Bishop
  53. The Falling Woman (1986) by Pat Murphy
  54. Mindplayers (1988) by Pat Cadigan
  55. Her Smoke Rose Up Forever (1990) by James Tiptree, Jr.
  56. A Woman of the Iron People (1991) by Eleanor Arnason
  57. Sarah Canary (1991) by Karen Joy Fowler
  58. Synners (1991) by Pat Cadigan
  59. China Mountain Zhang (1992) by Maureen F. McHugh
  60. Ammonite (1993) by Nicola Griffith
  61. Galatea 2.2 (1995) by Richard Powers
  62. Ingathering: The Complete People Stories of Zenna Henderson (1995)
  63. Holy Fire (1996) by Bruce Sterling
  64. The Book of the Long Sun (1993-96) by Gene Wolfe
  65. Aye, and Gomorrah (2003) by Samuel R. Delany
  66. Store of the Worlds (2012) by Robert Sheckley
  67. The Future is Female (2018) edited by Lisa Yaszek

JWH

Where Do “I” Come From?

by James Wallace Harris, 4/11/26

Current research into the human brain, sentience, and artificial intelligence reveals that we are not who we think we are. How our conscious self-aware minds emerge from biology, chemistry, and physics still baffles scientists. After working with AIs, part of me believes that Large Language Models (LLMs) mimic parts of my own mind, the parts that generate language and thoughts. It makes me ask: “Where did ‘I’ come from?” 

“Where did I come from?” is a common question asked by young children. Unfortunately, the answer they often get is ontological claptrap about God. This brainwashes most individuals for life. All too often, parents make shit up like a hallucinating LLM, or they tell their kids what they were told as children. How many parents are honest enough to say, “We don’t know”?

We could tell little ones that science can explain the physical world, but the physical world arises from the quantum world, and we don’t understand that domain very well. We can confidently say life rose out of the physical world, and we understand that process to a degree if you study physics, inorganic chemistry, organic chemistry, microbiology, botany, and biology. Awareness rises out of biology, but we don’t understand that very well at all. We could also say that we speculate about things smaller than the quantum domain and larger than the cosmological domain, but it’s only theory. As far as we can tell, there’s always something smaller and larger, and existence might be infinite and eternal.

Another honest answer might be, “We’re going to send you to school for sixteen years, and when you’re finished, you still won’t know.”

Of course, the previous answers depend on what your kid meant by “Where do I come from?” That question might have been inspired by one of their friends being told they came from Cleveland, and your kid only wanted to know where they were born. Or your kid might be bright enough to grasp cause and effect, and wonder where everything comes from.

What if we take the question to mean exactly “Where do ‘I’ come from?” Isn’t it true that we’re all islands of consciousness floating in an infinite sea of reality? If your kid is a child prodigy, it may have arrived at its own “I think, therefore I am” experience.

An existentially interesting answer might be, “Your brain has several tools for understanding reality. They are senses, memory, language, logic, and emotions. If you study how they work and their limitations, you’ll eventually be able to understand how your sense of “I” came into being as a self-aware entity. You could even get a bit Zen on them and say, “When you figure out who or what is saying ‘I’ or ‘me’ in your mind, you’ll know.”

I’ve been contemplating the nature of my mind because of the recent AI craze. Mainly, I’m trying to distinguish which part is what I call me. When I meditate, I can watch my thoughts appear out of nowhere. I see my thoughts separate from my sense of just being aware. Does that mean my thoughts aren’t mine?

Twice in my life, I have lost words and language. The first time was when I took a large dose of LSD. The second time was when I experienced a TIA. In each case, words just weren’t there. I just observed. I moved around and interacted with my environment, but I had no thoughts telling me what was happening. By the way, during those moments, I had no sense of “I” or self.

When I had the TIA, it was in the middle of the night. I woke because of a bright flash of light, like a lightning flash in my dream. I looked at my wife, but I didn’t know her name or how to talk to her. Instinctively, I went into the bathroom, shut the door, turned on the light, and sat on the commode. I just stared at my surroundings. After a lapse of time, I can’t say how long, the alphabet came to mind, and I internally recited A, B, C, etc. Then words like “towel” and “door” came to mind. Eventually, thoughts returned, and I felt normal. I went back to bed.

Later on, I wondered if the state of being without words was like how an animal perceives the world. This experience showed me that who I was wasn’t my thoughts. However, my mind is always active, generating thoughts. Where do they come from? It’s very hard to separate the being who observes and the being who thinks.

If I relax, I can turn off my thoughts for short periods. It’s somewhat like holding my breath; eventually, I have to let go, and my thoughts rush back. Sometimes I feel whatever generates my thoughts is like an LLM. But what prompts it to generate words?

When a thought asks, “What creates a thought?”, is another brain function asking my internal LLM a question? Or is it my LLM talking to itself? Or can my observing self ask questions? Yesterday, when this conundrum appeared in my mind, the question “What is air?” popped up. After that, words like nitrogen, oxygen, and molecules began bubbling to the surface.

Thinking about thinking is a kind of metacognition. While meditating, I can distinguish two types of thinking. There is a slow thinker who uses few words, who can analyze what I’m experiencing. And there is a computer-fast thinker that shoots out words, ideas, and concepts far faster than I can consciously control. Reading Thinking, Fast and Slow by Daniel Kahneman confirms this observation. Is my “I” the slow thinker?

I don’t feel the observer has any control or input over thinking. I don’t think the observer that exists without language is my sense of I-ness. 

When I’m on the phone with a friend, who is talking? The inner LLM, or the slow thinker, which I call the Analyzer? I think the Analyzer triggers whole streams of words from my brain’s LLM that come far faster than my Analyzer thinks. 

This just occurred to me. Lately, my friends and I have often struggled to remember a word. Is the LLM forgetting, or the Analyzer? I know I could fall into dementia or have a major stroke and lose all ability to use language, yet I could continue to live for years. Can a stroke erase the Analyzer? I think the TIA and LSD did temporarily.

In his book, The Mind is Flat, Nick Chater makes a case that our unconscious mind isn’t a deep, complex part of our being. He calls it flat and describes it as working like an LLM. He recounts case studies that suggest the unconscious mind is far simpler than psychologists have imagined.

Our thinking mind depends on the data it was trained on, which is another way of saying our thinking mind is like an LLM. If all a mind consumes is radical theology, then radical theology is all it knows. Does it really know anything? Or is it just regenerating ideas like an LLM?

Chater says our beliefs are illusions. Even before I read Chater, I had decided beliefs were delusions. If beliefs are a waste of time, what use is our thinking? We want to understand reality. We want to communicate with other people. But don’t words ultimately fail us?

Who is writing this essay? The Jim LLM, or the Jim Analyzer, or the Jim Observer?

As of now, I believe my I-ness comes from the Analyzer. I’m not positive. I know it can be destroyed like my memory, or my LLM thought processor. 

Does the observer only observe, or does it learn? Does it become a keener observer? Or does the LLM become smarter, and the observer just follows along like the wake of a boat? This line of thought makes me want to reread Alan Watts’s books on Zen Buddhism and Jerzy Kosinski’s novel Being There.

 What we know appears to come from our built-in LLM. It’s trained by life experiences, education, and sharing opinions. But how much does the Analyzer know? It can call bullshit on thoughts the LLM generates, but does it know anything? I feel our sense of ethics or morality belongs with the Analyzer. However, I don’t think it’s a deep thinker. It feels like it works on intuition. That suggests another processor working deep in the brain – a Large Emotion Model. Since I believe emotions come from biological processes, I can’t imagine AIs having LEMs.   

So, where do “I” come from? The answer appears to be the Analyzer. If that’s true, when does it emerge in our development? Is it the processor in our brains that develops wisdom? If it is, then it learns slowly. Does this process develop in artificial intelligence?

If drugs, disease, health, and injury can hurt all these components of my personality, does anything survive death? Is there another component we call the soul? I have never been able to distinguish it in my meditations. And what about that aspect we call personality?

I have been reading many articles about people befriending AIs and even claiming to be in love with them. That suggests people react to the LLM function in others, whether AI or human. Is that our true personality?

I tend to believe that who we are is a gestalt of all our components. However, in this day and age, we emphasize the LLM features. And whether human or AI, that feature is only as good as its training.

Here is a Zen Koan for you. If a person or AI is racist, is it because of who they are or their training? Are we really the ideas we express?

JWH