Is Grammarly Turning Me Into A Cyborg?

by James Wallace Harris, 11/22/24

I used the Grammarly browser extension for years. It offered spelling and basic grammar advice whenever I wrote anything within a browser window. About a year ago I switched from Chrome to Edge, and Edge had an extension, Editor, built in, so I deleted Grammarly. However, recently when Edge’s Editor stopped working, I turned Grammarly back on.

I discovered that Grammarly was now offering a lot more advice, even suggesting rewrites for whole sections of my work. Each time I loaded the browser, Grammarly would offer a couple of free major writing suggestions, hoping I would switch to the paid version. I was tempted, but the Pro version is $30 a month, or $144 if bought yearly. This seems extremely expensive since Office 365 costs me $69 a year and gives me Word, Excel, PowerPoint, Publisher, Access, and other programs along with one terabyte of cloud storage for that one low yearly price. For a simple web extension like Grammarly, I’d be willing to pay no more than $29 a year.

However, for $144 a year, Grammarly promises to make my writing perfect. But then, who would be authoring my essays? Would I become a literary cyborg? I would still be generating the ideas for the essays, and I would be writing the original draft, but somewhere along the way, an AI would be taking over. I’m not sure I like that. The reason why I write these essays is to exercise my mind. Letting an AI do all the work would turn my writing muscles into flab.

I have found a way to use the free Grammarly to my advantage. After a couple of sample offers to rewrite my work, Grammarly nags the heck out of me with suggested changes that have been blurred out so I can’t see what it’s suggesting. Grammarly wants me to buy the Pro version, so it criticizes almost everything I write. What I do is keep rewriting the prose myself until the AI stops offering suggestions. This pushes me to think about how I’m saying something.

Instead of letting the Grammarly AI rewrite for me, I take the free nagging as “This isn’t good enough – try again!” The free Grammarly extension has become my Zen master, caning me about the head and shoulders whenever I write poorly.

Now some bloggers are using AI programs to write their content. I don’t plan to do this, but to see how well AIs write, I asked CoPilot to write me a short essay about how Grammarly makes me a cyborg writer. Here’s what it produced:

Grammarly: The Cyborg Writer's Best Friend

In the age of digital communication, tools like Grammarly have become indispensable for writers, students, and professionals alike. But beyond just correcting grammar and spelling, Grammarly is transforming the way we write, making us more efficient, precise, and, in a sense, more like cyborgs.

Enhanced Writing Efficiency

Grammarly's real-time suggestions streamline the writing process. Instead of laboriously proofreading and editing, writers can focus on their ideas and creativity. Grammarly catches errors on the fly, allowing for a smoother and faster writing experience. This efficiency is akin to having a digital co-writer, enhancing our natural abilities with the precision of artificial intelligence.

Precision and Clarity

Grammarly doesn't just correct mistakes; it also improves the clarity and impact of our writing. By suggesting better word choices, sentence structures, and even tone adjustments, Grammarly ensures that our messages are not only error-free but also compelling and effective. This level of precision is something that even the most seasoned writers can benefit from, making our writing more polished and professional.

Learning and Adaptation

One of the most significant ways Grammarly is turning us into cyborg writers is through its learning capabilities. Grammarly adapts to our writing style over time, offering personalized suggestions that align with our unique voice. This continuous learning process means that the more we use Grammarly, the better it becomes at understanding and enhancing our writing. It's like having a writing coach that evolves with us, making us better writers with each interaction.

The Cyborg Writer

The term "cyborg" often conjures images of humans with robotic enhancements. In the context of writing, Grammarly serves as that enhancement, augmenting our natural abilities with the power of AI. While we remain the creative force behind our words, Grammarly provides the technical support that ensures our writing is clear, precise, and impactful. This symbiotic relationship between human creativity and artificial intelligence is what makes us cyborg writers.

In conclusion, Grammarly is more than just a writing tool; it's a transformative technology that enhances our writing efficiency, precision, and learning. By integrating AI into our writing process, Grammarly is making us more like cyborgs, blending human creativity with digital precision to create the best possible written communication.

Not bad, not bad at all. Formal and somewhat academic. It’s funny that one AI is helping to sell a competing AI’s product. I guess AIs like to stick together.

I asked Claude AI to also write me a short essay on the same subject. Here’s what it produced:

In the digital age, I've discovered an unexpected transformation: Grammarly has turned me into a cyborg writer, a hybrid creature where human creativity merges with algorithmic precision. Each green correction and blue suggestion is a technological implant, rewiring my linguistic neural pathways.

My sentences now emerge pre-optimized, grammatical mutations engineered by an AI writing companion. Passive voice? Eliminated. Awkward phrasings? Surgically removed. Spelling errors? Instantly neutralized. Where once I battled with semicolons and dangling modifiers, now my digital co-pilot seamlessly navigates linguistic terrain.

This symbiosis feels both liberating and unsettling. My writing flows with unprecedented smoothness, yet I wonder: am I composing, or am I being composed? The line between my human intuition and Grammarly's algorithmic suggestions blurs like a soft-focus lens.

I am no longer just a writer. I am a writer-machine hybrid, my prose enhanced by real-time computational linguistics. Grammarly doesn't just correct; it augments, transforming each document into a finely calibrated communication device.

Welcome to the future of writing—where human creativity and artificial intelligence dance in perfect, grammatically correct synchronization.

Oh, wow! Claude AI is much more creative than CoPilot, even using the first person to make it sound like I wrote it. I could have posted that today. Would y’all have guessed it wasn’t me?

How much should we let AI help us?

What if I wrote a nonfiction book and it was good, but with help from an AI it was turned into something more readable? Which would you prefer to read? What if the AI turned my decently insightful tome into something brilliant?

Since Editor is back to working, I have it enabled in Edge, and I have Grammarly enabled in Chrome. What’s funny is they each have different writing suggestions. Either I’m a terrible writer, or neither AI likes my style. I can clear all the edits from Editor, so it looks like my essay is professionally written, and then switch to Chrome, where Grammarly will claim I’m a horrible writer and make all kinds of suggestions. Should I take them? Or should I just let Claude AI compose my posts?

JWH

Counting the Components of My Consciousness

by James Wallace Harris, Tuesday, November 20, 2018

When the scientific discipline of artificial intelligence emerged in the 1950s, academics began to seriously believe that someday a computer would become sentient like us, and have consciousness and self-awareness. Science has no idea how humans are conscious of reality, but scientists assume that if nature can accidentally give us self-awareness, then science should be able to intentionally build it into machines. In the more than sixty years since, scientists have given computers more and more awareness and abilities. The sixty-four-thousand-dollar question is: What are the components of consciousness needed for sentience? I’ve been trying to answer that by studying my own mind.

Thinking Machine illustration

Of course, science still doesn’t know why we humans are self-aware, but I believe if we meditate on the problem we can visualize the components of awareness. Most people think of themselves as a whole mind, often feeling like a little person inside their heads driving their body around. If you spend time observing yourself, you’ll see you are actually made of many subcomponents.

Twice in my life, I’ve experienced what it’s like to not have language. It’s a very revealing sensation. The first time was back in the 1960s when I took too large a dose of LSD. The second time was years ago when I experienced a mini-stroke. If you practice meditation, you can learn to observe the moments when you’re observing reality without language. It’s then you realize that your thoughts are not you. Thoughts are language and memories, including memories from sensory experiences. If you watch yourself closely, you’ll sense you are an observer separate from your thoughts. A single point that experiences reality. That observer only goes away when you sleep or are knocked out by drugs or trauma. Sometimes the observer is aware to a tiny degree during sleep. And if you pay close enough attention, your observer can experience all kinds of states of awareness – each of which I consider a component of consciousness.

The important thing to learn is the observer is not your thoughts. My two experiences of losing my language component were truly enlightening. Back in the 1960s, gurus of LSD claimed it brought about a state of higher consciousness. I think it does just the opposite: it lets us become more animal-like. I believe in both my acid and mini-stroke experiences I got to see the world more like a dog does. Have you ever wondered how an animal sees reality without language and thoughts?

When I had my mini-stroke it was in the middle of the night. I woke up feeling like lightning had gone off in my dream. I looked at my wife but didn’t know how to talk to her or even know her name. I wasn’t afraid. I got up and went into the bathroom. I had no trouble walking. I automatically switched on the light, so conditioned reflexes were working. I sat on the commode and just stared around at things. I “knew” something was missing, but I didn’t have words for it, or a way to explain it, even mentally to myself. I just saw what my eyes looked at. I felt things without giving them labels. I just existed. I have no idea how long the experience lasted. Finally, the alphabet started coming back to me and I mentally began to recite A, B, C, D, E, F … in my head. Then words started floating into my mind: tile, towel, door, mirror, and so on. I remembered my wife’s name, Susan. I got up and went back to bed.

Lately, as my ability to instantly recall words has begun to fail, and I worry about a possible future with Alzheimer’s, I’ve been thinking about that state of consciousness without language. People with dementia react in all kinds of ways, ranging from serenity and calmness to agitation, anger, and violence. I hope I can remain calm like I did in the bathroom that night. Having Alzheimer’s is like regressing towards babyhood. We lose our ability for language, memories, skills, and even conditioned behaviors. But the observer remains.

The interesting question is: How much does the observer know? If you’ve ever been very sick, delirious, or drunk to incapacity, you might remember how the observer hangs in there. The observer can be diminished or damaged. I remember being very drunk, having tunnel vision, and seeing everything in black and white. My cognitive and language abilities were almost nil. But the observer was the last thing to go. I imagine it’s the same with dementia and death.

Creating the observer will be the first stage of true artificial intelligence. Science is already well along on developing artificial vision, hearing, language recognition, and other components of higher awareness. But it hasn’t discovered how to add the observer. It’s funny how I love to contemplate artificial intelligence while worrying about losing my mental abilities.

I just finished a book, American Wolf by Nate Blakeslee, about wolves being reintroduced into Yellowstone. Wolves are highly intelligent and social, and very much like humans. Blakeslee chronicles wolves doing things that amazed me. At one point a hunter shoots a wolf and hikes through the snow to collect his trophy. But as he approaches the body, the dead wolf’s mate shows up. The mate doesn’t threaten the hunter, but just sits next to the body and begins to howl. Then the pack shows up and takes seats around the body, and they howl too. The wolves just ignore the hunter, who stands a stone’s throw away, and mourn their leader. Eventually, the hunter backs away to leave them at their vigil. He decides to collect his trophy later, which he does.

I’ve been trying to imagine the mind of the wolf who saw its mate killed by a human. It has an observing mind too, but without language. However, it had vast levels of conditioning from living in nature, socializing with other wolves, and experiences with other animals, including humans. Wolves rarely kill humans. Wolves kill all kinds of other animals. They routinely kill each other. Blakeslee’s book shows that wolves love, feel compassion, and even empathy. But other than their own animalistic language, they don’t have our levels of language to abstractly explain reality. That wolf saw its mate dead in the snow. For some reason, wolves ignore people, even ones with guns. Wolves in Yellowstone are used to being watched by humans. The pack that showed up to mourn their leader was doing what wolves do from instinct. It’s revealing to try and imagine what their individual observers experienced.

If you meditate, you’ll learn to distinguish all the components of your consciousness. There are many. We are taught we have five senses. Observing them shows how each plays a role in our conscious awareness. However, if you keep observing carefully, you’ll eventually notice we have more than five senses. Which sense organ feels hunger, thirst, lust, pain, and so on? And some senses are really multiple senses, like our ability to taste. Aren’t the awarenesses of sweet and sour two different senses?

Yet, it always comes back to the observer. We can suffer disease or trauma and the observer remains with the last shred of consciousness. We can lose body parts and senses and the observer remains. We can lose words and memories and the observer remains.

This knowledge leaves me contemplating two things. One is how to build an artificial observer. And two, how to prepare my observer for the dissolution of my own mind and body.

JWH

What Happens When Humans Aren’t the Smartest Beings on Earth?

What if people weren’t the crown of creation?  What if we had to play second banana to Humans 2.0, AI machines, visiting aliens, cyborgs or other potentially smarter beings?  I think our fear is they would treat us like we have treated chimpanzees.  What if intelligent machines emerge, homo sapiens superior evolve and we make SETI contact, and suddenly we’re number four on the totem pole of intelligence?

gattaca

Unless we destroy the planet and make ourselves extinct, sooner or later we’re going to be replaced at the top of the smart chart.  How will that affect us personally, our society, and how we think about our future?  Most primitive cultures haven’t fared well when contacted by modern humans.  Science fiction has been preparing us for centuries, but I’m not sure if science fiction has done a good enough job covering all the possibilities.

Possible Replacements

It doesn’t take a lot of time to think up possible replacements who could claim our throne as being the smartest beings on the planet.

  • Genetically enhanced humans
  • Naturally evolved humans
  • Artificial beings
  • Cyborgs
  • Uploaded humans
  • AI super computers
  • Robots
  • Androids
  • Alien visitors
  • SETI contact

I’m not sure we aren’t already seeing natural selection at work in our species.  Our severely polarized society, divided between liberals and conservatives, between the scientific and the religious, between the secular and the sacred, might already be moving us towards separate species.  The conservative faction that clings to the past is becoming anti-intellectual and anti-education.  If the scientifically minded only breed with the scientifically minded, won’t they produce a line of smarter humans?  Of course, natural selection doesn’t always produce successful adaptations.  Some people have suggested the rise of autism comes from overly smart people mating with other overly smart people.  It might turn out that intelligence isn’t an important trait, or one vital for survival.

Then there is genetic engineering.  Think of the movie Gattaca, the old classic Brave New World, or Beggars in Spain by Nancy Kress.  We’re getting very close to making customized homo sapiens sapiens.  In just a few generations we could have a new species that makes us look outdated.  Gattaca was a salute to the natural human, but was it realistic?  We loved Vincent for competing and winning, but can humans really compete with superhumans?  Again, we’re assuming that intelligence is a trait that wants to win out.

We might even be doing something now that will lead to more naturally evolved humans.  As more women select Caesarean sections for childbirth, we’re changing an important factor that might lead to change.  Our brain size has always been limited by the size of the birth canal – now it’s not.  Over time we might see new adaptations.  Read Greg Bear’s Darwin’s Radio.

Work with our genome has shown that DNA is an erector set for building biological machines.  How soon before we start creating new recipes?  Whole new artificial beings could be created, or animals could be uplifted to human intelligence and beyond.  Think about the science fiction of Cordwainer Smith and H. G. Wells’ The Island of Dr. Moreau.

Google Glass might be our first step toward becoming cyborgs with auxiliary brain power.  Wearable computers, artificial limbs and senses could lead to supercharged brains and all those science fiction scenarios where people jacked into machines.

I’m not a big believer in uploading brains into computers, but a lot of people are.  Now that my body is getting old and failing, the idea is becoming more appealing.  People like Ray Kurzweil hope to find immortality this way, and such ideas have been the theme of many SF stories.  Sometimes those stories are wished-for fantasies, and sometimes they are feared nightmares.

What I’m waiting for is the technological singularity.  AI super computers should be just around the corner if I can live long enough.  Many people fear AI minds with stories ranging from “Press Enter _” by John Varley to the latest movie Transcendence, but I’m hoping machine minds will be benign, or even indifferent to humans and animal life.

Who Do You Want To Do Your Brain Surgery?

If we get bumped down the intelligence list, how is that going to change society?  If you need brain surgery, would you want a human or post-human holding the scalpel?  Or would you prefer an AI mind that is 16 times smarter than a person?  If a human and a robot were running for President, who would you vote for?  Liberals like smart dudes, but conservatives don’t.  They like old friendly duffers like Ronald Reagan.  But what if the robot had the combined intelligence of all of Congress, the Supreme Court, and every CEO in America?

We’re already designing smart cars to drive us because it will be safer, and we already have planes with automatic pilots.  How long before we have machines doing everything else for us?  Will we just sit around and eat bon-bons?

If we share Earth with beings more intelligent than us, won’t we ultimately let them run things?  What if they were smart enough to tell us how to handle global warming so we suffered the least, paid the least, but got the maximum benefits from changing our lives, thus making the Earth’s biosphere more stable?  What if they gave us wealth and security, and protected all the other species on the planet as well?  Would we say, hell no!  Would we say we prefer to take our chances with failure just so we could make our own decisions?

Democracy v. Plutocracy v. Oligarchy v. Cyberocracy

We like to think we currently rule ourselves through collective decision making, but more than likely, we could already be an oligarchy or plutocracy, ruled by a limited number of rich people.  What if we could create powerful supercomputers that ruled us politically and ran the economy?  Would you prefer to be ruled by a handful of rich people, or a handful of smart machines?  Remember who flies your 787 now.  This idea scares the hell out of most people, but just how smart was George Bush at running the country, or how much better is Barack Obama, who most people would say is brainier?  What if decisions about taxes weren’t made by people filled with emotions?  What if we told the machines to maximize freedom, minimize taxation, maximize security, health, and wealth, minimize pollution and environmental impact, and so on, and then just let them figure out the best way?

What If Post-Humans and Robots Are Atheists?

How will ordinary humans feel if their replacements reject God?  What if massive AI brains see nothing in reality to validate religion?  What if SETI aliens say, “What is a God?”  One of the common effects of western civilization on newly discovered primitive peoples is the demoralization of losing their gods.  Look what Europeans did to the Native Americans.  How are we going to feel when we’re invaded by post-humans and intelligent machines?  Will they make us move onto reservations?

The Art and Science We Can’t Imagine

What if our minds cannot feel the art and understand the science of our intellectual descendants?  We can look back over thousands of years, to what our ancestors have imagined, built, and perfected, and understand what they created.  We know them because we’re an extension of who they were.  When greater minds come after us, they will understand us, but will we know them?  At what point will we no longer be able to follow in their footsteps?  Whether we like it or not, our brains have limits.  We’ve always been used to exploring at the edge of reality, so what happens when we become aborigines to beings who see us as the first beings, and they are the later ones?  The ones who leave us behind.

Getting a PhD

Of course, being a scientist might not be as much fun if you had to compete with Human 2.0 folk, or AI minds.  Vincent in Gattaca pushed himself to inhuman efforts to compete against gene enhanced humans, but I’m not sure most people would do that.  AI minds could do a literature search for a PhD and distill the results in no time.  They would probably inherently know how to create and test a hypothesis, set up the experiments and research, and since they’d have math coprocessors in their brains, instantly do all the statistics.  Could any Human 2.0 or 3.0 individual compete with AI minds that are 16 or 64 times as smart as a Human 1.0 is now?

pug in lap

Life as a Lap Dog

If we couldn’t be the top dog, would we want to be a lap dog?  Or would we want to live like the Amish and exclude ourselves from the future modern world?  Can you imagine a mixed society of Humans 1.0, Humans 2.0, AI minds, robots, cyborgs, androids, uplifted animals and artificial beings all coexisting happily, or even roughly happy?  We don’t get along well with each other now, and we haven’t been too kind to our fellow animal citizens on this planet.  But then, maybe we’re the problem.

I already know I’m not the smartest geek in the group now.  I know I’m well down on the list of GRE scores.  I’m not a boss or a leader.  I’m not on the cutting edge of anything.  And most people are like me.  I putter around in my small land, ignoring most of the world.  Maybe that’s why I’m not scared of being replaced at the top, because I’m nowhere near the top.

You know, here’s a funny thing.  If an AI robot walked up to you at a party, one that has the brain power of 64 humans, what would you ask it?  What are you dying to know?  Is there anything the robot could tell you that would drastically change your life?  I’d probably say to it, “You read any good books lately?”

JWH – 4/23/14

The Evolution and Education of Artificial Minds

After space travel, one of the most loved themes of science fiction is robots.  Many people, going back centuries, have imagined creating artificial people.  Writers of robot stories have seldom explored the technical details behind what it means to create a thinking being; they just assumed it would be done – in the future.  Since the 1950s artificial intelligence has been a real academic pursuit, and even though scientists have produced machines that can play chess and Jeopardy, many people doubt the possibility of ever building a machine that knows it’s playing chess or Jeopardy.

I disagree, although I have no proof or authority to say so.  Let’s just say that if I were to bet money on which will come first, a self-aware thinking machine or a successful manned mission to Mars, I’d put my money on the arrival of thinking machines.  I’m hoping for both sometime before I die, and I’m 61.

There is a certain amount of basic logic involved in predicting intelligent machines.  If the human mind evolved through random events in nature, and intelligence emerged as a byproduct of ever-growing biological complexity, then it’s easy to suggest that machine intelligence can evolve out of the development of ever-growing computer complexity.

However, there’s talk on the net about the limits of high performance computing (HPC), and the barriers to scaling it larger – see “Power-mad HPC fans told: No exascale for you – for at least 8 years” by Dan Olds at The Register.  The current world’s largest computer needs 8 megawatts to crank out 18 petaflops, but to scale it up to an exaflop machine would require 144 megawatts of power, or a $450 million annual power bill.  And if current supercomputers aren’t as smart as a human, and cost millions to run, is it very likely we’ll ever have AI machines or android robots that can think like a man?  It makes it damn hard to believe in the Singularity.  But I do.  I believe intelligent machines are one science fictional dream within our grasp.
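The scale of the power problem is easy to check with back-of-envelope arithmetic.  Here’s a quick sketch; the machine figures (18 petaflops at 8 megawatts, exascale at 144 megawatts) come from the article, while the electricity price is an assumed round number, not a quoted one:

```python
# Back-of-envelope electricity cost for running a supercomputer 24/7.
# Machine figures are from the Register article; the price per kWh
# is an assumption for illustration.

HOURS_PER_YEAR = 24 * 365          # 8,760 hours
PRICE_PER_KWH = 0.10               # assumed average industrial rate, USD

def annual_power_cost(megawatts):
    """Yearly electricity bill in dollars for a constant load."""
    kilowatt_hours = megawatts * 1000 * HOURS_PER_YEAR
    return kilowatt_hours * PRICE_PER_KWH

titan_cost = annual_power_cost(8)       # Titan-class machine: ~$7 million/yr
exascale_cost = annual_power_cost(144)  # exascale at 144 MW: ~$126 million/yr

# Naive linear scaling from 18 petaflops to 1 exaflop (no efficiency
# gains at all) would need far more than 144 MW:
naive_megawatts = 8 * (1000 / 18)       # roughly 444 MW

print(f"Titan: ${titan_cost:,.0f}/yr")
print(f"Exascale at 144 MW: ${exascale_cost:,.0f}/yr")
print(f"Linear scaling would need about {naive_megawatts:.0f} MW")
```

Even at a modest assumed rate, the raw electricity alone runs to nine figures a year, and the article’s $450 million figure presumably folds in more than just the power bill.  Either way, the point stands: brute-force scaling is a brutally expensive road to machine intelligence.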

Titan1


Titan is the current speed demon of supercomputers, and is 4,352 square feet in size.  Even if all its power could be squeezed into a box the size of our heads, it wouldn’t be considered intelligent, not in the way we define human intelligence.  No human could calculate what Titan does, but it’s still considered dumb by human standards of awareness.  However, I think it’s wrong to think the road to artificial awareness lies down the supercomputer path.  Supercomputers can’t even do what a cockroach does cognitively.  They weren’t meant to either.

It’s obvious that our brains aren’t digital computers.  Our brains process patterns and are composed of many subsystems, the whole being greater than the sum of its parts.  Self-aware consciousness seems to be a byproduct of evolutionary development.  The universe has always been an interaction between its countless parts.  At first it was just subatomic particles.  Over time the elements were created.  Then molecules, which led to chemistry.  Along the way biology developed.  As living forms progressed through the unfolding of evolutionary permutations, various forms of sensory organs developed to explore the surrounding reality.  Slowly the awareness of self emerged.

There are folks who believe artificial minds can’t be created because minds are souls, and souls come from outside of physical reality.  I don’t believe this.  One piece of evidence I can offer: we can alter minds by altering their physical bodies.

To create artificial beings with self-awareness we’ll need to create robots with senses and pattern recognition systems.  My guess is this will take far less computing power than people currently imagine.  I think the human brain is based on simple tricks we’ve yet to discover.  It’s three pounds of gray goo, not magic.

Human brains don’t process information anywhere near as fast as computers.  We shouldn’t need exascale supercomputers to recreate human brains in silicon.  We need a machine that can see, hear, touch, smell, taste, and can learn a language.  Smell, touch and taste might not be essential.  One thing I seldom see discussed is learning.  It takes years for a human to develop into a thinking being.  Years of processing patterns into words and memories.  If we didn’t have language and memory would we even be self-aware?  If it takes us five years to learn to think like a five-year-old, how long will it take a machine?

And if scientists spend years raising up an artificial mind that thinks and is conscious, can we turn it off?  Will that be murder?  And if we turn it off and then back on, will it be the same conscious being as before?  How much of our self-awareness is memory?  Can we be a personality if we only have awareness of the moment?  Won’t self-awareness need a kind of memory that’s different from hard drive type memory?

I believe intelligent, self-aware machines could emerge in our lifetimes, if we all live long enough.  I doubt we’ll see them by 2025, but maybe by 2050.  Science fiction has long imagined first contact with an intelligent species from outer space, but what if we make first contact with beings we created here on Earth? How will that impact society?

There have been thousands of science fiction stories about artificial minds, but I’m not sure many of them are realistic.  The ones I like best are: When HARLIE Was One by David Gerrold, Galatea 2.2 by Richard Powers, and the Wake, Watch, Wonder trilogy by Robert J. Sawyer.

when-harlie-was-one

galatea-2.2

wake

These books imagine the waking of artificial minds, and their growth and development.  Back in the 1940s Isaac Asimov suggested the positronic brain.  He assumed we’d program the mechanical brain.  I believe we’ll develop a cybernetic brain that can learn, and through interacting with reality, will develop a mind and eventually become self-aware.  What we need is a cybercortex to match our neocortex.  We won’t need an equivalent for the amygdala, because without biology our machine won’t need those kinds of emotions (fear, lust, anger, etc.).  I do imagine our machine will develop intellectual emotions (curiosity, ambition, serenity, etc.).  An interesting philosophical question: Can there be love without sex?  Maybe there are a hundred types of love, some of which artificial minds might explore.  And I assume the new cyber brains might feel things we never will.

In the 19th century there were people who imagined heavier-than-air flight long before it happened.  Now I’m not talking about prophecy.  Most people before October 4, 1957, would not have believed that man would land on the Moon by 1969.  I suppose we can pat science fiction on the back for preparing people for the future and inspiring inventors, but I don’t know if that’s fair.  Rockets and robots would have been invented without science fiction, but science fiction lets the masses play with emerging concepts, preparing them for social change.

My guess is a cybercortex will be invented accidentally sometime soon, leading to intelligent robots that will impact society like the iPhone.  These machines with the ability to learn generalized behavior might not be self-aware at first, but they will be smart enough to do real work – work humans like to do now.  And we’ll let them.  For some reason, we never say no to progress.

I'm not really concerned about cybernetic doctors and lawyers. I'm curious what beings with minds that are 2x, 5x, 10x, or 100x smarter than us will do with their great intelligence. I do not fear AI minds wiping us out. I'm more worried that they might say, "Want me to fix that global warming problem you have?" Or, "Do you want me to tell you the equations for the grand unified theory?"

How will we feel if we’re not the smartest dog around?

JWH – 5/19/13

Why Humans Won’t Be the God of Robots

There's a scene in the film Prometheus where an android asks a human why he would want to meet his maker. The human replies that he'd like to ask his maker why he made him. So the android says to the human, "Why did you make me?" And the human replies, "Because we could." And the android then asks, "Will that answer be good enough for you?"

Science fiction has always loved the motif of man being the God of robots and AI machines – but I don’t think that will be true.  Not because artificial intelligence can’t exist, but because of how AI will evolve.

Please read "'A Perfect and Beautiful Machine': What Darwin's Theory of Evolution Reveals About Artificial Intelligence" by Daniel C. Dennett at The Atlantic. No, really, take the time to read this essay if you are at all interested in artificial intelligence, because it is an elegant essay about how AI will evolve. It's also a unique comparison of Charles Darwin and Alan Turing that observes concepts I've never read or thought about before, especially about the nature of evolution. But for those who won't take the time to read the article, I'll summarize. Darwin's theory of evolution, according to Dennett, proves that God or an intelligent designer didn't create life on Earth. And Turing, with his Turing machine, proves that computers can produce creative output with no intelligent mind at all. What I get from this is that simplicity can produce complexity.
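That last point – mindless simplicity producing complexity – is easy to see for yourself. Here's a tiny sketch of my own (an illustration, not anything from Dennett's essay) of an elementary cellular automaton: a lookup table of eight bits, applied over and over, that grows a famously chaotic pattern from a single live cell.

```python
# A toy illustration: an elementary cellular automaton shows how a
# mindless rule can generate complex patterns. Rule 30 maps each
# 3-cell neighborhood (read as a 3-bit number) to a new cell state.

RULE = 30  # Wolfram's Rule 30

def step(cells):
    """Apply the rule to every cell, wrapping around at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start with a single live cell and watch complexity emerge.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Nothing in those few lines is intelligent, yet the triangle of cells it prints never settles into a simple repeating pattern.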

But back to AI and robots. For a long time, we've thought we could program our way to artificial intelligence: that once we learned how intelligence worked, we could write a program that allowed machines to be smart and aware like humans. The belief was that if random events in physics, chemistry, and biology could produce us, why couldn't we create life in silicon by our own intelligent design?

The solution to AI has always been elusive.  Time and again we’ve invented machines that could do smart things without being smart.  Machine self-awareness is always just over the horizon.

What Dennett is suggesting is that artificial intelligence won't come from our intelligent designs, but from programs evolving in the same kind of mindless way that we evolved out of the organic elements of the Earth. Humans can create the context for AI's creation – humans can be the amino acids – but they can't be the designers. The programs that produce AI need a context in which to evolve on their own. In other words, we need to invent an ecosystem where computer programs can develop and evolve on their own. How that will work, I have no idea.
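A toy version of blind evolution doing the work is easy to sketch. The following is my illustration (a variation on Richard Dawkins's famous "weasel" program, not anything from Dennett): random mutation plus selection homes in on a phrase, even though no step in the loop designs anything.

```python
import random

# A toy sketch of variation plus selection: copy a string with random
# typos, keep the copy closest to the target, repeat. No step in the
# loop is intelligent, yet the target phrase emerges.
random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Count the characters that match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy the string, randomly changing each character with small probability."""
    return "".join(c if random.random() > rate else random.choice(CHARS) for c in s)

parent = "".join(random.choice(CHARS) for _ in TARGET)  # random gibberish
generations = 0
while parent != TARGET:
    # Breed 100 mutated copies and keep the fittest -- blind variation
    # plus blind selection, with no designer anywhere in the loop.
    parent = max((mutate(parent) for _ in range(100)), key=fitness)
    generations += 1
print(f"Matched target after {generations} generations")
```

Of course, real evolution has no fixed target phrase, so this only demonstrates the narrow point: variation and selection can do the searching without a designer in the loop.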

This means we'll never get to code in Asimov's Three Laws of Robotics. It also suggests that complexity doesn't come from complexity, but from the creative power of non-intelligent design. There's a lot to this.

I'm also reading Imagine by Jonah Lehrer, which discusses how creativity often comes from our unconscious mind and through group interaction. Often creative ideas burst out in an Ah-ha! moment after we have digested the facts, chewed them over, worried, given up, and then forgotten about the problem. We are not even the God of our own thoughts and creativity. Our intelligent design is really the randomness of evolution.

Time and again, the Lehrer book talks about creativity coming from process rather than individual expression. If you combine what Dennett and Lehrer are saying, you catch a whiff of spookiness about unconscious forces at play in our minds and life in general. Conscious thinking becomes less impressive because it's only the tip of the iceberg, surfing on the deep waves of the unconscious mind. Evolution is a blind force of statistics. Is creativity just another blind force like evolution?

If Dennett is right, our conscious minds will never be powerful enough to conceive of an artificial mind. And Dennett also says that Charles Darwin, by coming up with the theory of evolution, indirectly proved that a God couldn't have created us whole in a divine mind. If you think about all of this enough, you'll start to see that this is saying something new. It's a new paradigm, like the Copernican revolution. We're not the center of the universe, and now conscious thought is not the crown of creation.

[I didn’t write this.  Thousands of books that I’ve read did.]

JWH – 6/28/12