FiiO JT7 $119 Planar Magnetic Headphones

by James Wallace Harris, 4/21/26

I’m a sucker for audiophile reviews that claim a new product sounds great for the money. My current headphones are Sennheiser HD 560 S, which sound wonderful, and I was completely happy playing through a FiiO K5Pro headphone amplifier. But then I saw several reviews praising the FiiO K13 R2R Desktop DAC & Headphone Amp. I’ve always wanted to try an R2R DAC, so I bought it. (See my review.)

The FiiO K13 R2R was very good, but it didn’t produce that night-and-day difference I was expecting. After years of seeing reviews of planar magnetic headphones, I’ve wanted to try them too, hoping the technology would take my music to a new level. That’s why I ordered the FiiO JT7 headphones. Plus, they came with two sets of cables, one of which works with balanced circuits. In other words, the JT7 offered two tech upgrades to try.

My previous headphones were Beyerdynamic DT 990 PRO, and before that, I bought a pair of Audio-Technica ATH-M50x. Astute observers will notice that all of this equipment originally cost between $100 and $200. I wonder whether price is the limiting factor determining the sound quality.

Across four headphones and two headphone amplifiers, I got a range of treble and bass responses, soundstaging, and musical detail. But nothing was ever night-and-day. I can say the R2R sounds smoother than the ESS delta-sigma DAC, but the delta-sigma DAC has more detail. I can say my open-back headphones have a larger soundstage than my closed-back headphones.

After switching between the four headphones and two headphone amps, the biggest factor in determining what I liked was power. The DT 990s have a 250-ohm impedance. The Sennheisers are 120 ohms. The ATH-M50x are 36 ohms. And the JT7 is just 18 ohms, but with a relatively low sensitivity of 92 dB/mW.
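As a rough sketch of how those impedance and sensitivity numbers translate into power demands, here is the textbook headphone-sensitivity math in a few lines of Python. The 18-ohm and 92 dB/mW figures are the JT7’s from above, and 85 dB is the listening level mentioned later in this post; the formula is the standard relationship, not anything FiiO publishes.

```python
import math

def required_drive(sensitivity_db_mw, impedance_ohm, target_spl_db):
    """Return (power in mW, RMS voltage in V) needed for a target SPL.

    Uses the standard relationship: SPL = sensitivity + 10 * log10(P_mW),
    then converts power to voltage via P = V^2 / R.
    """
    p_mw = 10 ** ((target_spl_db - sensitivity_db_mw) / 10)
    v_rms = math.sqrt(p_mw / 1000 * impedance_ohm)
    return p_mw, v_rms

# FiiO JT7: 18 ohms, 92 dB/mW, driven to an 85 dB listening level
p_mw, v_rms = required_drive(92, 18, 85)
print(f"{p_mw:.2f} mW, {v_rms*1000:.0f} mV RMS")  # → 0.20 mW, 60 mV RMS
```

By this math, the JT7 needs a fraction of a milliwatt at 85 dB, so even a modest desktop amp drives it easily despite the low sensitivity; a high-impedance set like the 250-ohm DT 990 needs more voltage swing for the same power, which is where amplifier power starts to matter.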

Comparing these four headphones is very difficult because once I got them to the same volume, which is very subjective, they were hard to tell apart. Like I said, soundstage and instrument placement varied, but mainly between the open-back and closed-back headphones. The overall tone varied between the K5Pro and K13. That was because of the ESS and R2R DACs.

I don’t know if I can ever find a true night-and-day difference in my audio equipment unless I spend a great deal more money. Then again, the headphones don’t sound significantly different through my Bluesound Node 2i or AudioLab 6000A amplifiers, both of which cost over $1,000.

I’ve listened to “True Love” by Anna Ash so many times that I’m not sure which headphones actually delivered better sound quality. There are so many variables. I prefer the Sennheiser and FiiO JT7 the most, I believe. And I like the FiiO best for how they feel on my head and the way they look.

I will never buy $1000 headphones. I might buy $500 headphones if they did produce that El Dorado of night-and-day improvement in sound quality. However, I’m not sure that exists. I’m starting to wonder if audiophile reviewers have superior hearing to mine. I’m 74.

I’m not sure if technology or cost makes a difference anymore. I think I like the Class AB AudioLab 6000A better than the Class D Bluesound Node 2i, but I’m not sure. It could be that they sound different because of the rooms they are in. I do know that equipment that costs under $100 doesn’t sound as good. The DAC in the WiiM Mini ($89) is terrible.

To me, everything I currently use sounds fantastic once the volume gets around 85 decibels.

I need to stop watching YouTube audio reviews. And I need to stop thinking that new equipment will blow me away.

I’m quite happy with the FiiO JT7 headphones. I just can’t tell you if they will sound better or worse than what you already own. My wife likes them, because when I use headphones, I’m not playing my stereo at 85 decibels while she’s trying to watch TV in another room.

JWH

Do We Really Need AGI and ASI? Isn’t AI Good Enough?

by James Wallace Harris, 4/18/26

Tech giants are spending hundreds of billions of dollars in a race to be the first to achieve Artificial General Intelligence (AGI), while also hoping to reach Artificial Superintelligence (ASI) soon after. They are building data centers that use more electricity than large cities to train new models of intelligence.

But do we need machines with more intelligence than all of humanity?

Let’s assume we do want machines to solve our greatest problems. Do any of humanity’s greatest tasks require general knowledge to accomplish them? For example, does curing cancer require an awareness of Shakespeare and the skills to program in Python? Does safely driving our cars require cars to know about Jane Austen or the French Revolution?

Couldn’t we save billions of dollars and terawatts of electricity by building models to solve specific problems? Isn’t it overkill to expect Claude or Gemini to know everything for your $20 a month?

Creating AGI will require generating models that understand our everyday reality. Won’t that lead to self-awareness? And if machines have self-awareness, can we own them? Wouldn’t that be slavery? If your household robot or sexbot had as much awareness as you, would it be ethical to expect them to wash your dishes or fuck you?

Isn’t the drive towards AGI and ASI kind of like playing God? I don’t believe in God, nor do I believe we should become one or create one. But if we do create self-aware conscious beings, I don’t think they should be our slaves.

AI models are benchmarked against an array of tests and skills. Many models now surpass humans on various standardized tests, as well as on tests that measure specialized knowledge in academic fields. Generating models like ChatGPT, Gemini, or Claude requires massive resources, resources that are straining the economy and infrastructure.

Are these efforts really needed, or is it just ego and greed run amok? Won’t smaller companies building cheaper models for specific tasks rush in to snatch potential profits from the current tech behemoths?

And once we generate the models that do what we need, will we still need all those giant data centers that generated them? For example, if we generate AI models that read medical scans better than any radiologist in the world, and that can be installed on a $50,000 standalone machine, who will garner the profits? Will it be OpenAI or Anthropic?

Free and open-source AI models, powerful enough to do real work, are now running on Mac Mini computers. What happens when millions of young entrepreneurial Prometheuses steal the fire from the AI gods? I don’t think they will need AGI to succeed.

Isn’t the race to AGI an insane distraction? Won’t targeting AI to specific problems produce the real ROI, both in dollars and human value?

JWH

Do I Have Any Real Uses For AI?

by James Wallace Harris, 4/16/26

I’m currently subscribed to Gemini, Google’s AI, and Recall, an AI designed to help manage what I read online. I have subscribed to ChatGPT and Midjourney in the past. Both Gemini and Recall have free accounts, if you want to try them. But I’m spending $28 a month to get the full features of both. However, I’m not sure I need all this AI power.

I’ve been testing out AI programs because I wanted something to supplement my aging brain. Both Gemini and Recall help me digest and remember what I read. At least that was the hope. Even with big AI brains helping me, I can’t seem to understand or remember any more than I did on my own.

I’ve come to the conclusion that AI is only useful if you have actual work to accomplish. I’m retired. I’m just trying to keep up with current events for personal enrichment. I thought using an AI would help me learn more, but that hasn’t worked out.

Having access to AI is like owning a Ferrari. I can feel all that power at my fingertips, and it’s exciting. The trouble is, using AI for my knowledge needs is like driving a race car on neighborhood streets.

I just read Abundance by Ezra Klein and Derek Thompson. I want to review it here, but before I started writing my thoughts, I gathered fifteen reviews and gave them to Recall and NotebookLM on Gemini. From that content, NotebookLM can create a blog review, a podcast review, an animated video review, and several other types of reviews, all of which are far more insightful than anything I can create.

The trouble is, I don’t learn anything using NotebookLM. Writing a review is hard work. Even if I write a wimpy half-ass review, I learn more in the process than I do from the AI results.

I’m also trying to write a book about science fiction. Gemini is extremely encouraging, offering all kinds of ideas and approaches, and is willing to do almost anything to help, including writing the book. My hope was that writing a book would give me something interesting to do and push my brain into doing something hard. I thought Gemini and Recall would be tools to organize my research.

And they can organize the chaos out of any amount of data. They are quite impressive. The trouble is, my mind is still disordered. AI insights don’t transfer to my thinking. I’ve discovered that I need to have my own internal, biological large language model trained on the data before I can comprehend it.

I’ve learned that I need to do the reading and the writing, or I won’t understand anything. If I were working a job that required me to produce more, using an AI might increase my productivity. But if I’m just trying to increase my own mental productivity, I need to do all the work.

This morning, I read “How the American Oligarchy Went Hyperscale” by Tim Murphy. It is the best article I’ve read yet on the impact of data centers. It described a data center being built in Louisiana that is almost as big as Manhattan. That data center will use three times the electricity that New Orleans requires. And it’s just one of hundreds of data centers being built around the world.

I have to wonder what will happen to the economy if everyone comes to my conclusion about using AI. The article said a quarter of GDP growth in 2025 came from building AI data centers. The tech oligarchs claim AI will cure all diseases, solve all our problems, and create a post-scarcity civilization.

Do we all need to find tasks for AI to keep this new economy going? Most people worry about AI taking everyone’s job. But will everyone’s job become finding work for AI?

I don’t think I can.

JWH

Where Do “I” Come From?

by James Wallace Harris, 4/11/26

Current research into the human brain, sentience, and artificial intelligence reveals that we are not who we think we are. How our conscious self-aware minds emerge from biology, chemistry, and physics still baffles scientists. After working with AIs, part of me believes that Large Language Models (LLMs) mimic parts of my own mind, the parts that generate language and thoughts. It makes me ask: “Where did ‘I’ come from?” 

“Where did I come from?” is a common question asked by young children. Unfortunately, the answer they often get is ontological claptrap about God. This brainwashes most individuals for life. All too often, parents make shit up like a hallucinating LLM, or they tell their kids what they were told as children. How many parents are honest enough to say, “We don’t know”?

We could tell little ones that science can explain the physical world, but the physical world arises from the quantum world, and we don’t understand that domain very well. We can confidently say life arose out of the physical world, and we understand that process to a degree if we study physics, inorganic chemistry, organic chemistry, microbiology, botany, and biology. Awareness arises out of biology, but we don’t understand that very well at all. We could also say that we speculate about things smaller than the quantum domain and larger than the cosmological domain, but it’s only theory. As far as we can tell, there’s always something smaller and larger, and existence might be infinite and eternal.

Another honest answer might be, “We’re going to send you to school for sixteen years, and when you’re finished, you still won’t know.”

Of course, the previous answers depend on what your kid meant by “Where do I come from?” That question might have been inspired by one of their friends being told they came from Cleveland, and your kid only wanted to know where they were born. Or your kid might be bright enough to realize cause and effect, and wonder where everything comes from.  

What if we take the question to mean exactly “Where do ‘I’ come from?” Isn’t it true that we’re all islands of consciousness floating in an infinite sea of reality? If your kid is a child prodigy, they may have arrived at their own “I think, therefore I am” experience.

An existentially interesting answer might be, “Your brain has several tools for understanding reality. They are senses, memory, language, logic, and emotions. If you study how they work and their limitations, you’ll eventually be able to understand how your sense of ‘I’ came into being as a self-aware entity.” You could even get a bit Zen on them and say, “When you figure out who or what is saying ‘I’ or ‘me’ in your mind, you’ll know.”

I’ve been contemplating the nature of my mind because of the recent AI craze. Mainly, I’m trying to distinguish which part is what I call me. When I meditate, I can watch my thoughts appear out of nowhere. I see my thoughts separate from my sense of just being aware. Does that mean my thoughts aren’t mine?

Twice in my life, I have lost words and language. The first time was when I took a large dose of LSD. The second time was when I experienced a TIA. In each case, words just weren’t there. I just observed. I moved around and interacted with my environment, but I had no thoughts telling me what was happening. By the way, during those moments, I had no sense of “I” or self.

When I had the TIA, it was in the middle of the night. I woke because of a bright flash of light, like a lightning flash in my dream. I looked at my wife, but I didn’t know her name or how to talk to her. Instinctively, I went into the bathroom, shut the door, turned on the light, and sat on the commode. I just stared at my surroundings. After a lapse of time, I can’t say how long, the alphabet came to mind, and I internally said to myself, A, B, C, etc. Then words like “towel” and “door” came to mind. Eventually, thoughts returned, and I felt normal. I went back to bed.

Later on, I wondered if the state of being without words was like how an animal perceives the world. This experience showed me that who I was wasn’t my thoughts. However, my mind is always active, generating thoughts. Where do they come from? It’s very hard to separate the being who observes from the being who thinks.

If I relax, I can turn off my thoughts for short periods. It’s somewhat like holding my breath; eventually, I have to let go, and my thoughts rush back. Sometimes I feel whatever generates my thoughts is like an LLM. But what prompts it to generate words?

When a thought asks, “What creates a thought?”, is another brain function asking my internal LLM a question? Or is it my LLM talking to itself? Or can my observing self ask questions? Yesterday, when this conundrum appeared in my mind, the question “What is air?” popped up. After that, words like nitrogen, oxygen, and molecules began bubbling to the surface.

Thinking about thinking is a kind of metacognition. While meditating, I can distinguish two types of thinking. There is a slow thinker who uses few words, who can analyze what I’m experiencing. And there is a computer-fast thinker that shoots out words, ideas, and concepts far faster than I can consciously control. Reading Thinking, Fast and Slow by Daniel Kahneman confirms this observation. Is my “I” the slow thinker?

I don’t feel the observer has any control or input over thinking. I don’t think the observer that exists without language is my sense of I-ness. 

When I’m on the phone with a friend, who is talking? The inner LLM, or the slow thinker, which I call the Analyzer? I think the Analyzer triggers whole streams of words from my brain’s LLM that come far faster than my Analyzer thinks. 

This just occurred to me. Lately, my friends and I have often struggled to remember a word. Is the LLM forgetting, or the Analyzer? I know I could fall into dementia or have a major stroke and lose all ability to use language, yet I could continue to live for years. Can a stroke erase the analyzer? I think the TIA and LSD did temporarily.

In his book The Mind Is Flat, Nick Chater makes a case that our unconscious mind isn’t a deep, complex part of our being. He calls it flat and describes it as working like an LLM. He recounts case studies that suggest the unconscious mind is far simpler than psychologists have imagined.

Our thinking mind depends on the data it was trained on, which is another way of saying our thinking mind is like an LLM. If all a mind consumes is radical theology, then radical theology is all it knows. Does it really know anything? Or is it just regenerating ideas like an LLM?

Chater says our beliefs are illusions. Even before I read Chater, I had decided beliefs were delusions. If beliefs are a waste of time, what use is our thinking? We want to understand reality. We want to communicate with other people. But don’t words ultimately fail us?

Who is writing this essay? The Jim LLM, or the Jim Analyzer, or the Jim Observer?

As of now, I believe my I-ness comes from the Analyzer. I’m not positive. I know it can be destroyed like my memory, or my LLM thought processor. 

Does the observer only observe, or does it learn? Does it become a keener observer? Or does the LLM become smarter, and the observer just follows along like the wake of a boat? This line of thought makes me want to reread Alan Watts’s books on Zen Buddhism and Jerzy Kosinski’s novel Being There.

 What we know appears to come from our built-in LLM. It’s trained by life experiences, education, and sharing opinions. But how much does the Analyzer know? It can call bullshit on thoughts the LLM generates, but does it know anything? I feel our sense of ethics or morality belongs with the Analyzer. However, I don’t think it’s a deep thinker. It feels like it works on intuition. That suggests another processor working deep in the brain – a Large Emotion Model. Since I believe emotions come from biological processes, I can’t imagine AIs having LEMs.   

To answer “Where do ‘I’ come from?”: it appears to be the Analyzer. If that’s true, when does it emerge in our development? Is it the processor in our brains that develops wisdom? If it is, then it learns slowly. Does this process develop in artificial intelligence?

If drugs, disease, health, and injury can hurt all these components of my personality, does anything survive death? Is there another component we call the soul? I have never been able to distinguish it in my meditations. And what about that aspect we call personality?

I have been reading many articles about people befriending AIs and even claiming to be in love with them. That suggests people react to the LLM function in others, whether AI or human. Is that our true personality?

I tend to believe that who we are is a gestalt of all our components. However, in this day and age, we emphasize the LLM features. And whether human or AI, that feature is only as good as its training.

Here is a Zen koan for you. If a person or AI is racist, is it because of who they are or their training? Are we really the ideas we express?

JWH

Should We Accept/Reject AI?

by James Wallace Harris, 3/30/26

This morning, I listened to “The Hunt for Deepfakes” by Sarah Treleaven on Apple News+, from Maclean’s Magazine. Treleaven reported on a Toronto-area pharmacist who ran a deepfake porn site called MrDeepFakes. Don’t go looking for it; it’s been taken down. The site served up mostly AI-generated videos of famous movie stars having computer-generated sex, or of ordinary women being degraded in pornographic videos created for misogynistic revenge. This is just one example of AI being used for horrible purposes. My initial reaction was that we should ban all AI-generated content.

But last night I was admiring Reels on Facebook produced by Vintage Memories 66 that lovingly recreated videos of classic movie stars from the 1930s and 1940s. Because these actors and actresses are best known from black-and-white films, seeing their images in high-definition color is rewarding on various levels. The videos showed these long-dead people reincarnated. Is this a legitimate creative tool of AI that we should accept? It’s another kind of deep fake. I haven’t seen AI-generated porn, but if it’s as realistic as these videos, it could be psychologically disturbing.

I just finished reading and discussing If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. The book makes a great case that we should stop all work on AI now. If you don’t want to read the book, watch the video that makes the same case, also quite convincingly.

Even if AI doesn’t intentionally wipe out the human race, it will transform society in ways we can’t yet imagine. It’s already changed us significantly. Watch the film above; it dramatically illustrates how fast it could happen. Do we really want to be that changed, that transformed?

I love watching YouTube videos. I’m old, and I mostly stay home nowadays, so YouTube videos let me see the world. For example, I’m watching a woman who calls herself Itchy Boots ride a motorcycle across Mongolia. I admire creative people who come up with different ways to educate their viewers. The possibilities are endless.

Yet, a lot of the content I see is AI slop. I don’t feel like I’m learning about reality when I see AI-generated content. I feel cheated. Then there are good documentaries about real history recreated with AI-generated visuals. I enjoy learning from the narration, but I’m offended when the visuals don’t match the words.

On their YouTube page, they inform us, “Written, produced, and edited by one person with the help of AI tools under KNOW MEDIA.” It is impressive that one person can compete with Ken Burns. I see that as a tremendous creative opportunity for people. They don’t say who this one person is, but it’s published under the Tech Now channel. I assume Know Media is this site, which appears to house many content creators.

AI is empowering such wannabe filmmakers. However, often their content annoys, insults, or repulses me. I hate artificial presenters. I hate artificial voices. I hate AI-generated images and videos that do not match what’s being described. I especially hate videos with obvious flaws, such as claiming to show China but obviously showing a Western country, misspelling words on the screen that the narrator is saying, showing people that obviously aren’t real people, etc. The list goes on and on.

If the AI video that went with the John Atanasoff documentary had looked real and accurate, I would have gladly accepted it. In other words, maybe I’m not protesting AI but bad AI.

We have to face the fact that AI will enhance the Seven Deadly Sins in all of us. But AI could supercharge the Seven Heavenly Virtues we should be pursuing. The trouble is, AI is too powerful. It’s like letting everyone own an atomic bomb. Are you willing to trust everyone?

I’ve been using AI to create header images for this blog. That’s because I have no artistic skills on my own. I used to just snag something from the internet, but I decided that wasn’t honest. But I’m not happy with the AI-generated headers either. I didn’t create them. Even when I like them and feel they’re creative, I’m leery of using those images. I’m trying to decide just how much I should use AI.

Sometimes I think I should reject AI completely. But doing searches on Google and Bing now returns AI content first. And it’s more useful than all those sites at the top of search returns that paid to be there. Do I want to return to libraries, card catalogs, and The Readers’ Guide to Periodical Literature?

In Dune, Frank Herbert had humanity reject AI. Could we do that? In many science fiction novels from the 1950s, writers imagined post-apocalyptic societies rejecting science and technology because people blamed the apocalypse on them. Do we have to wait until the apocalypse to make that decision?

Aren’t computer programs produced by Donald Knuth more creative than computer programs produced by Claude?

Notice that all the videos I presented used AI to a degree. This blog is probably published by varying levels of AI-assisted programming. Many people who read this post might have found it because of AI tracking of their reading habits. Rejecting AI could mean returning to technology that existed before the year 2000. What level of technology should we set that would make us the most human? I could make a case that people seemed nicer before the graphical interface.

Fueled by the science fiction I read during my formative years in the 1960s and 1970s, I was anxious for the future to arrive. I wanted to live in a world of intelligent robots, artificial intelligence, and space colonies. Now, I kind of wish I were back in the 1960s and 1970s.

JWH