by James Wallace Harris, 4/18/26
Tech giants are spending hundreds of billions of dollars in a race to be the first to achieve Artificial General Intelligence (AGI), while also hoping to reach Artificial Superintelligence (ASI) soon after. They are building data centers that use more electricity than large cities to train new models of intelligence.
But do we need machines with more intelligence than all of humanity?
Let’s assume we do want machines to solve our greatest problems. Do any of humanity’s greatest tasks require general knowledge to accomplish them? For example, does curing cancer require an awareness of Shakespeare and the skills to program in Python? Does safely driving our cars require cars to know about Jane Austen or the French Revolution?
Couldn’t we save billions of dollars and terawatt-hours of electricity by building models to solve specific problems? Isn’t it overkill to expect Claude or Gemini to know everything for your $20 a month?
Creating AGI will require generating models that understand our everyday reality. Won’t that lead to self-awareness? And if machines have self-awareness, can we own them? Wouldn’t that be slavery? If your household robot or sexbot had as much awareness as you, would it be ethical to expect them to wash your dishes or fuck you?
Isn’t the drive towards AGI and ASI kind of like playing God? I don’t believe in God, nor do I believe we should become one or create one. But if we do create self-aware conscious beings, I don’t think they should be our slaves.
AI models are benchmarked against an array of tests and skills. Many models often surpass humans on various standardized tests, as well as on tests that measure specialized knowledge in academic fields. Generating models like ChatGPT, Gemini, or Claude requires massive resources, resources that are straining the economy and infrastructure.
Are these efforts really needed, or is it just ego and greed run amok? Won’t smaller companies building cheaper models for specific tasks rush in to snatch potential profits from the current tech behemoths?
And once we generate the models that do what we need, will we still need all those giant data centers that generated them? For example, if we generate an AI model that reads medical scans better than all the radiologists in the world, and it can be installed on a $50,000 standalone machine, who will garner the profits? Will it be OpenAI or Anthropic?
Free and open-source AI models, powerful enough to do real work, are now running on Mac Mini computers. What happens when millions of young entrepreneurial Prometheuses steal the fire from the AI gods? I don’t think they will need AGI to succeed.
Isn’t the race to AGI an insane distraction? Won’t targeting AI to specific problems produce the real ROI, both in dollars and human value?
JWH