FiiO JT7 $119 Planar Magnetic Headphones

by James Wallace Harris, 4/21/26

I’m a sucker for audiophile reviews that claim a new product sounds great for the money. My current headphones are Sennheiser HD 560 S, which sound wonderful, and I was completely happy playing through a FiiO K5Pro headphone amplifier. But then I saw several reviews praising the FiiO K13 R2R Desktop DAC & Headphone Amp. I’ve always wanted to try an R2R DAC, so I bought it. (See my review.)

The FiiO K13 R2R was very good, but it didn’t produce the night-and-day difference I was expecting. After years of seeing reviews of planar magnetic headphones, I’ve wanted to try them too, hoping the technology would take my music to a new level. That’s why I ordered the FiiO JT7 headphones. Plus, they come with two sets of cables, one of which works with balanced outputs. In other words, the JT7 offered two tech upgrades to try.

My previous headphones were the Beyerdynamic DT 990 PRO, and before that, a pair of Audio-Technica ATH-M50x. Astute observers will notice that all of this equipment originally cost between $100 and $200. I suspect that price range may be the limiting factor determining the sound quality.

Between four headphones and two headphone amplifiers, I got a range of treble and bass responses, soundstaging, and musical detail. But nothing was ever night and day. I can say the R2R DAC sounds smoother than the ESS delta-sigma DAC, but the delta-sigma DAC has more detail. I can say my open-back headphones have a larger soundstage than my closed-back headphones.

After switching between the four headphones and two headphone amps, the biggest factor in determining what I liked was power. The DT 990s have a 250-ohm impedance. The Sennheisers are 120 ohms. The ATH-M50x are 36 ohms. And the JT7 is just 18 ohms, but with a relatively low sensitivity of 92 dB/mW.
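Power requirement is mostly arithmetic: sensitivity tells you how loud one milliwatt gets you, and impedance tells you how much voltage the amp must swing to deliver that power. Here is a minimal Python sketch of that calculation. Only the JT7’s 92 dB/mW figure comes from the specs I quoted above; the other sensitivities are placeholder guesses for illustration, not verified spec-sheet numbers.

```python
import math

def power_mw_for_spl(sensitivity_db_mw, target_spl_db):
    """Milliwatts needed to reach target_spl_db: every +3 dB roughly doubles power."""
    return 10 ** ((target_spl_db - sensitivity_db_mw) / 10)

def voltage_for_power(power_mw, impedance_ohms):
    """RMS volts the amp must deliver, from P = V^2 / R."""
    return math.sqrt(power_mw / 1000 * impedance_ohms)

# Impedances come from the paragraph above; only the JT7's sensitivity
# (92 dB/mW) is from its spec sheet -- the rest are illustrative guesses.
headphones = {
    "DT 990 PRO (250 ohm)": (250, 96),
    "HD 560 S (120 ohm)": (120, 97),
    "ATH-M50x (36 ohm)": (36, 99),
    "JT7 (18 ohm)": (18, 92),
}

for name, (impedance, sensitivity) in headphones.items():
    mw = power_mw_for_spl(sensitivity, 105)  # 105 dB peaks over an 85 dB average
    volts = voltage_for_power(mw, impedance)
    print(f"{name:22s} {mw:6.1f} mW, {volts:5.2f} V rms")
```

By this rough math, the JT7’s low sensitivity means it needs about 20 mW for 105 dB peaks, several times what a more sensitive headphone would need, which fits my impression that power was the biggest factor.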

Comparing these four headphones is very difficult because once I matched their volumes, which is very subjective, they were hard to tell apart. Like I said, soundstage and instrument placement varied, but mainly between the open-back and closed-back models. The overall tone varied between the K5Pro and the K13, because of their ESS and R2R DACs.

I don’t know if I will ever find a true night-and-day difference in my audio equipment unless I spend a great deal more money. Even so, the headphones don’t sound significantly different through my Bluesound Node 2i or AudioLab 6000A, both of which cost over $1,000.

I’ve listened to “True Love” by Anna Ash so many times that I’m not sure which headphones actually delivered better sound quality. There are so many variables. I believe I prefer the Sennheisers and the FiiO JT7 the most. And I like the FiiO best for how they feel on my head and how they look.

I will never buy $1000 headphones. I might buy $500 headphones if they did produce that El Dorado of night-and-day improvement in sound quality. However, I’m not sure that exists. I’m starting to wonder if audiophile reviewers have superior hearing to mine. I’m 74.

I’m not sure if technology or cost makes a difference anymore. I think I like the Class A/B AudioLab 6000A better than the Class D Bluesound Node 2i, but I’m not sure. It could be that they sound different because of the rooms they are in. I do know that equipment costing under $100 doesn’t sound as good. The DAC in the WiiM Mini ($89) is terrible.

To me, everything I currently use sounds fantastic once the volume gets around 85 decibels.

I need to stop watching YouTube audio reviews. And I need to stop thinking that new equipment will blow me away.

I’m quite happy with the FiiO JT7 headphones. I just can’t tell you if they will sound better or worse than what you already own. My wife likes them, because when I use headphones, I’m not playing my stereo at 85 decibels while she’s trying to watch TV in another room.

JWH

Why Are We Sending Humans To the Moon When We Could Send Humanoid Robots?

by James Wallace Harris, 4/20/26

Millions were thrilled by the Artemis 2 circumlunar mission. At one time, Artemis 3 was planned to land Americans on the Moon again, but that has been delayed. Now they are talking about Artemis 5. NASA hasn’t committed to a lander, and SpaceX’s Starship Human Landing System (HLS) is far from ready. One problem with HLS is that it will require up to 20 Starship launches to fuel it before it leaves for the Moon.

Why are we going back to the Moon? What will we get for our money? Is it scientific research? Is it to build a permanent base on the Moon? Mine lunar resources? Or is it just another space race, but this time with China? Wasn’t the real goal Mars?

Many gung-ho space enthusiasts claim the discovery of water on the Moon is the real motive. Water can be split into oxygen and hydrogen. Astronauts can drink the water and breathe the oxygen, and rockets can burn the hydrogen and oxygen as fuel.

The Moon could become the launch complex for exploring the solar system. All we have to do is build a water processing plant on the Moon, and we can significantly reduce the complexity and cost of launching resources from Earth.

Most of the weight of a Moon mission is rocket fuel. Next is the air and water needed to keep the astronauts alive. But how much rocket fuel, air, and water would need to be shipped to the Moon to build a water-processing plant there?

It would be a lot less if we sent humanoid robots. Robots don’t need to return, so we wouldn’t need rocket fuel for a return flight either.
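The savings from skipping the return leg can be sketched with the Tsiolkovsky rocket equation. This is a back-of-the-envelope Python sketch, not a mission analysis: the delta-v and specific-impulse numbers are rough textbook values I’m assuming, and it ignores staging and propellant boil-off.

```python
import math

def propellant_per_kg_dry(delta_v_ms, isp_s):
    """Propellant mass per kg of dry mass, from the rocket equation:
    m_propellant / m_dry = exp(delta_v / (Isp * g0)) - 1."""
    g0 = 9.81  # standard gravity, m/s^2
    return math.exp(delta_v_ms / (isp_s * g0)) - 1

ISP = 450       # seconds, typical hydrogen/oxygen engine (assumed)
DESCENT = 1900  # m/s, low lunar orbit down to the surface (rough)
ASCENT = 1900   # m/s, surface back up to lunar orbit (rough)

one_way = propellant_per_kg_dry(DESCENT, ISP)
round_trip = propellant_per_kg_dry(DESCENT + ASCENT, ISP)
print(f"one-way landing: {one_way:.2f} kg propellant per kg landed")
print(f"round trip:      {round_trip:.2f} kg propellant per kg")
```

Because the rocket equation is exponential, the round trip needs more than twice the propellant per kilogram of the one-way trip, and that’s before counting the extra air, water, and food a returning crew consumes.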

Humanoid robots are evolving at a tremendous pace, along with AI. Before NASA has a lunar lander ready, robots will probably be able to do all the work a human astronaut could do on the Moon.

Think about it. What if SpaceX’s HLS were refueled in Earth orbit from fuel brought from the Moon? That could save up to 20 gigantic Starship launches. And what if robots could eventually manufacture rockets on the Moon? Then astronauts could go into orbit on a SpaceX Dragon capsule launched with a Falcon 9, and transfer to a lunar-built rocket to travel to the Moon.

Landing humanoid robots on the Moon might be done without the complication of SpaceX Starship launches.

This would delay humans returning to the Moon, but in the long run, it would jump-start exploration of the solar system. Robots could build habitats on the Moon for people, fill them with air and water, set up the environment, grow food, and get everything ready for human visitors. Robots could build the infrastructure for sending humans from the Moon to Mars.

We should be able to build robots that can withstand the heat and cold of space, endure the high radiation, and work with the dangerous regolith on the Moon and Mars.

Humans could watch through robotic eyes if we set up communication relay satellites orbiting the Moon. Imagine putting on an AR headset and seeing the Moon from human eye level through 4K eyes. The robots would be autonomous but also capable of working with humans.

I’m not sure the public will pay for building long-term human settlements on the Moon. The novelty of people on the Moon wears off quickly. Using robots is so much cheaper. Biological beings aren’t designed to explore space, but robots are perfect.

We really need to think about what we want from exploring the solar system. Is it giving a few humans the thrill of going where no human has gone before? Haven’t the Hubble and James Webb telescopes given us so much more than manned missions? Personally, I’d rather see through the eyes of a robot working on the Moon and Mars than watch films of another human having all the fun.

But what I’d really love is giant space telescopes spaced across the solar system working as an astronomical interferometer. That would allow us to directly observe planets orbiting distant suns and spectroscopically measure their atmospheres. That would be our best chance to discover intelligent alien life. Robots would be perfect for building such structures.

JWH

Do We Really Need AGI and ASI? Isn’t AI Good Enough?

by James Wallace Harris, 4/18/26

Tech giants are spending hundreds of billions of dollars in a race to be the first to achieve Artificial General Intelligence (AGI), while also hoping to reach Artificial Superintelligence (ASI) soon after. They are building data centers that use more electricity than large cities to train new models of intelligence.

But do we need machines with more intelligence than all of humanity?

Let’s assume we do want machines to solve our greatest problems. Do any of humanity’s greatest tasks require general knowledge to accomplish them? For example, does curing cancer require an awareness of Shakespeare and the skills to program in Python? Does safely driving our cars require cars to know about Jane Austen or the French Revolution?

Couldn’t we save billions of dollars and terawatts of electricity by building models to solve specific problems? Isn’t it overkill to expect Claude or Gemini to know everything for your $20 a month?

Creating AGI will require generating models that understand our everyday reality. Won’t that lead to self-awareness? And if machines have self-awareness, can we own them? Wouldn’t that be slavery? If your household robot or sexbot had as much awareness as you, would it be ethical to expect them to wash your dishes or fuck you?

Isn’t the drive towards AGI and ASI kind of like playing God? I don’t believe in God, nor do I believe we should become one or create one. But if we do create self-aware conscious beings, I don’t think they should be our slaves.

AI models are benchmarked against an array of tests and skills. Many models now surpass humans on various standardized tests, as well as on tests measuring specialized knowledge in academic fields. Generating models like ChatGPT, Gemini, or Claude requires massive resources, resources that are straining the economy and infrastructure.

Are these efforts really needed, or is it just ego and greed run amok? Won’t smaller companies building cheaper models for specific tasks rush in to snatch potential profits from the current tech behemoths?

And once we generate the models that do what we need, will we still need all those giant data centers that generated them? For example, if we generate AI models that read medical scans better than all the radiologists in the world, models that can be installed on a $50,000 standalone machine, who will garner the profits? Will it be OpenAI or Anthropic?

Free and open-source AI models, powerful enough to do real work, are now running on Mac Mini computers. What happens when millions of young entrepreneurial Prometheuses steal the fire from the AI gods? I don’t think they will need AGI to succeed.

Isn’t the race to AGI an insane distraction? Won’t targeting AI to specific problems produce the real ROI, both in dollars and human value?

JWH

ABUNDANCE by Ezra Klein and Derek Thompson

by James Wallace Harris, 4/17/26

When I bought Abundance by Ezra Klein and Derek Thompson, I assumed it would be about creating a post-scarcity society. Instead, it’s about supply-side progressivism. A post-scarcity society is a concept created by futurists and embraced by science fiction writers, based on the idea that technology could produce such a surplus of everything that it would invalidate capitalism. It turns out supply-side progressivism (or the abundance movement) is related, but a smaller subset of post-scarcity thinking.

The book Abundance originated with an essay by Klein in The New York Times and an essay by Thompson in The Atlantic. Before buying the book, I suggest reading those two essays and the Wikipedia entry. If you still feel the need to dive deep into this subject, the book is where to go. Forty percent of my Kindle edition is references and index. Klein and Thompson have done a massive amount of research.

Basically, Klein and Thompson are liberals attacking the government for too much regulation, telling liberals that some of the laws designed to help people for liberal reasons are now hurting the very people liberals want to help.

The two cases Klein and Thompson focus on are finding homes for the homeless and for people who can’t afford one, and making healthcare more affordable. They go into great detail about how zoning laws keep us from solving the housing problem. Their second focus is on how the federal government now stifles innovation.

I agree that zoning laws keep us from solving housing problems, but I don’t think undoing those laws is possible or the full solution. I thought San Francisco was the wrong city to analyze, and considered Houston an unfair counter-example. San Francisco’s growth is limited by geography, and Houston has endless sprawl, so zoning may not be the defining factor.

I believe wealth and greed control zoning laws, and that’s not going to change. The American tech oligarchs have no trouble quickly building giant data centers, even when they face significant protests. I don’t think asking average Americans who are NIMBYs to become YIMBYs is a fair request. Or one that will bring about change.

I found their story of Katalin Karikó far more fascinating. I especially recommend chapters 4 and 5 on Invent and Deploy.

Karikó spent years submitting research proposals to study mRNA, which were routinely rejected because the people deciding who received research grants didn’t think mRNA was worth studying. Yet, years later, her research enabled governments and pharmaceutical companies to develop Covid vaccines within one year, even though developing a new vaccine normally takes many years.

Klein and Thompson praise the quick development of the mRNA vaccine under the Trump administration and wonder why Trump never took credit for it. They guess that Trump didn’t want to promote a huge success for big government, or a success for vaccines, to his anti-government, anti-vax followers. They recommend the book Warp Speed: Inside the Operation That Beat COVID, the Critics, and the Odds by Paul Mango, which shows how successfully governments can create abundance when the need arises.

Klein and Thompson show how the federal government wastes huge amounts of money on scientific research through its current procedures and often backs the wrong research. They give a history of how the federal government was successful in the past but is now confined by policies and regulations.

Modern liberal politics is made possible by invention. Almost every product or service that liberals seek to make universal today depends on technology that did not exist three lifetimes ago—or, in some cases, half a lifetime ago. Medicare and Medicaid guarantee the elderly and poor access to modern hospitals, where many essential technologies—such as plastic IV bags, MRI and CT scan machines, and pulse oximeters—are inventions of the last sixty years. It is tempting to say that, with these essentials already in existence, it is time for society to focus at last only on the fair distribution of existing resources rather than the creation of new ideas. But this would be worse than a failure of imagination; it would be a kind of generational theft. When we claim the world cannot improve, we are stealing from the future something invaluable, which is the possibility of progress. Without that possibility, progressive politics is dead. Politics itself becomes a mere smash-and-grab war over scarce goods, where one man’s win implies another man’s loss.

The world is filled with problems we cannot solve without more invention. In the fight against climate change, the clean energy revolution will require building out the renewable energy that we have already developed. But decarbonization will also require technology that doesn’t exist yet at scale: clean jet fuel, less carbon-intensive ways to manufacture cement, and machines to remove millions of tons of carbon from the atmosphere.

In health care, the last few centuries of invention have turned a death planet—where disease ran rampant and, before 1850, one in two babies perished before their sixteenth birthday—into a world where people can look forward to generation-over-generation increases in life expectancy. But there are still so many mysteries that require fresh breakthroughs. We’ve made disappointingly little progress with many cancers. Complex diseases like Alzheimer’s and schizophrenia elude treatment or even basic comprehension. The cellular process of aging is a deep mystery. We still don’t have effective vaccines for adult tuberculosis or hepatitis C, or vaccine platforms that we can immediately scale up in the event of a new pandemic. Decades from now, our children may gawk in horror that people with chronic pain or lingering illness in the early twenty-first century couldn’t take a simple all-purpose saliva or blood test to answer the basic question Why do I feel sick? If disease is a universe of mysteries, we have scarcely explored one minor solar system of its cosmos.

Inventions that may seem outlandish today may soon feel essential to our lives. Streets filled with electric self-driving cars that give us mobility without emissions and free us from the vast number of deaths caused by faulty human reflexes or judgment. Gigantic desalination facilities that transform our oceans into drinkable tap water. An economy with robots that build our houses and machines that take on our most dangerous and soul-draining work. Wearable devices to scan our bodies for diseases. Vaccines that we can rub on our skin rather than inject at the end of a needle. As unrealistic, or even ludicrous, as some of these ideas might seem, they are not much more ludicrous than a rejected, ignored, and unfunded mRNA theory that came out of nowhere to save millions of lives in a pandemic. To make these things possible and useful in our lifetime requires a political movement that takes invention more seriously.

So, where is that movement? Invention rarely plays a central role in American politics. In health care, for example, Democrats have spent decades fighting for universal insurance, while Republicans have consistently fought its expansion. But while the dominant fight in Washington is typically about how we buy health care, we rarely talk about the health care that exists to be bought. After all, in the future, progressives don’t just want everyone to have an insurance card; they want that card to provide access to a world of treatments that liberates patients from unnecessary disease and debilitating pain. Technology expands the value of universalist policies.

If progressives underrate the centrality of invention in their politics, conservatives often underrate the necessity of government policy in invention. “The government has outlawed technology,” the investor and entrepreneur Peter Thiel said in a debate with Google CEO Eric Schmidt in 2014, echoing a popular view among techno-optimists and libertarians that government laws mostly block innovation. But many of Silicon Valley’s most important achievements have relied on government largesse. Elon Musk is now a vociferous critic of progressive policy. But he has also been a beneficiary of it. In 2010, when Tesla needed cash to launch its first family-friendly sedan, the Model S, the company received a $465 million loan from the Obama administration Department of Energy. His rocket-launching company, SpaceX, has received billions of dollars from NASA under Democratic and Republican administrations. Musk has become a lightning rod in debates over whether technological progress comes from public policy or private ingenuity. But he is a walking advertisement for what public will and private genius can unlock when they work together.

Beyond merely regulating technology, the state is often a key actor in its creation. An American who microwaves food for breakfast before using a smartphone to order a car to take them to the airport is engaging with a sequence of technologies and systems—the microwave, the smartphone, the highway, the modern jetliner—in which government policies played a starring role in their invention or development. Federal science spending is so fundamental to the overall economy that a 2023 study found that government-funded research and development have been responsible for 25 percent of productivity growth in the US since the end of World War II. “There is widespread agreement that scientific research and invention are the key driver of economic growth and improvements in human well-being,” the Dartmouth economist Heidi Williams said. “But I think researchers do a poor job of communicating its importance to lawmakers, and lawmakers do a poor job of making science policy a major focus.”

The pandemic proved the necessity of invention yet again. The mRNA COVID vaccines saved millions of lives and spared the US more than $1 trillion in medical costs. But they might have never existed if it weren’t for Karikó’s force of will—and the cosmic luck of an extremely well-placed Xerox machine.

Klein, Ezra; Thompson, Derek. Abundance (pp. 134-137). Avid Reader Press / Simon & Schuster. Kindle Edition.

Ultimately, Abundance brings little hope. I think the book showed too many examples of how we can’t create abundance and why. It thoroughly convinced me that our current political evolution is in the wrong direction.

Yes, Katalin Karikó and mRNA are shining examples of what’s possible, but one great example does not prove that change will happen. All the other examples Klein and Thompson used were from history, suggesting that Americans will step up to the plate when they face a great challenge, but not in ordinary times.

AI and data centers are a major challenge, and we aren’t stepping up. Please read “How the American Oligarchy Went Hyperscale” by Tim Murphy. Greed drives us; Klein and Thompson even cite examples of harnessing it with monetary prizes to solve problems.

The Tech Bro Oligarchy promises a post-scarcity society built on AI, which is the kind I was expecting the book Abundance to be about. But I don’t believe in that kind either. At 74, I doubt the pie-in-the-sky dreams science fiction promises. Just because we live in science-fictional times doesn’t mean they’ll lead to science-fictional futures.

AI-generated abundance will ruin us. Old-fashioned human-generated abundance is possible, but greed will always keep the wealthy from sharing it.

p.s.

This essay was not written with any help from AI. All the ideas are my own. But are they? My ideas come from reading books and magazines. I train my mind on information just like AIs are trained. I’ve cancelled my AI subscriptions and am putting that money into buying more books and magazines. Reading Abundance did my mind more good than reading what AI has to say about it. Gemini produced excellent summaries, but they didn’t stick in my mind.

Grinding through the book word by word will not help me remember everything, but I do think it helps me remember more than reading AI summaries. But in the long run, what’s important to remember is that we could live in a saner, more compassionate society.

JWH

Do I Have Any Real Uses For AI?

by James Wallace Harris, 4/16/26

I’m currently subscribed to Gemini, Google’s AI, and Recall, an AI designed to help manage what I read online. I have subscribed to ChatGPT and Midjourney in the past. Both Gemini and Recall have free accounts, if you want to try them. But I’m currently spending $28 a month to get the full features of both. However, I’m not sure I need all this AI power.

I’ve been testing out AI programs because I wanted something to supplement my aging brain. Both Gemini and Recall help me digest and remember what I read. At least that was the hope. Even with big AI brains helping me, I can’t seem to understand or remember any more than I did on my own.

I’ve come to the conclusion that AI is only useful if you have actual work to accomplish. I’m retired. I’m just trying to keep up with current events for personal enrichment. I thought using an AI would help me learn more, but that hasn’t worked out.

Having access to AI is like owning a Ferrari. I can feel all that power at my fingertips, and it’s exciting. The trouble is, using all that AI power for my modest knowledge needs is like driving a race car on neighborhood streets.

I just read Abundance by Ezra Klein and Derek Thompson. I want to review it here, but before I started writing my thoughts, I gathered fifteen reviews and gave them to Recall and to NotebookLM, which runs on Gemini. From that content, NotebookLM can create a blog review, a podcast review, an animated video review, and several other types of reviews, all of which are far more insightful than anything I can create.

The trouble is, I don’t learn anything using NotebookLM. Writing a review is hard work. Even if I write a wimpy half-assed review, I learn more in the process than I do from the AI’s results.

I’m also trying to write a book about science fiction. Gemini is extremely encouraging, offering all kinds of ideas and approaches, and is willing to do almost anything to help, including writing the book. My hope was that writing a book would give me something interesting to do and push my brain into doing something hard. I thought Gemini and Recall would be tools to organize my research.

And they can organize the chaos out of any amount of data. They are quite impressive. The trouble is, my mind is still disordered. AI insights don’t transfer to my thinking. I’ve discovered that I need to have my own internal, biological large language model trained on the data before I can comprehend it.

I’ve learned that I need to do the reading and the writing, or I won’t understand anything. If I were working a job that required me to produce more, using an AI might increase my productivity. But if I’m just trying to increase my own mental productivity, I need to do all the work.

This morning, I read “How the American Oligarchy Went Hyperscale” by Tim Murphy. It is the best article I’ve read yet on the impact of data centers. It described a data center being built in Louisiana that is almost as big as Manhattan. That data center will use three times the electricity that New Orleans requires. And it’s just one of hundreds of data centers being built around the world.

I have to wonder what will happen to the economy if everyone comes to the same conclusions I have about using AI. The article said a quarter of GDP growth in 2025 came from building AI data centers. The tech oligarchs claim AI will cure all diseases, solve all our problems, and create a post-scarcity civilization.

Do we all need to find tasks for AI to keep this new economy going? Most people worry about AI taking everyone’s job. But will everyone’s job become finding work for AI?

I don’t think I can.

JWH