Is Grammarly Turning Me Into A Cyborg?

by James Wallace Harris, 11/22/24

I used the Grammarly browser extension for years. It offered spelling and basic grammar advice whenever I wrote anything within a browser window. About a year ago I switched from Chrome to Edge, and Edge has Microsoft’s Editor built in, so I deleted Grammarly. However, when Edge’s Editor recently stopped working, I turned Grammarly back on.

I discovered that Grammarly was now offering a lot more advice, even suggesting rewrites for whole sections of my work. Each time I loaded the browser, Grammarly would offer a couple of free major writing suggestions, hoping I would switch to the paid version. I was tempted, but the Pro version is $30 a month, or $144 a year if billed annually. That seems extremely expensive when Office 365 costs me $69 a year and gives me Word, Excel, PowerPoint, Publisher, Access, and other programs, along with one terabyte of cloud storage. For a simple web extension like Grammarly, I’d be willing to pay no more than $29 a year.
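
To put those prices side by side, here’s a rough annualized comparison (a minimal sketch in Python; the numbers are just the ones quoted above and will surely change over time):

    # Annualized cost comparison using the prices mentioned above
    # (subscription prices change often, so treat these as illustrative).
    grammarly_monthly_billing = 30 * 12   # Pro billed month to month: $360/year
    grammarly_annual_billing = 144        # Pro billed annually
    office_365 = 69                       # Word, Excel, PowerPoint, etc. plus 1 TB of storage
    my_ceiling = 29                       # the most I'd pay for a browser extension

    print(f"Grammarly Pro, monthly billing: ${grammarly_monthly_billing}/year")
    print(f"Grammarly Pro, annual billing:  ${grammarly_annual_billing}/year")
    print(f"Office 365:                     ${office_365}/year")
    print(f"My ceiling for an extension:    ${my_ceiling}/year")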

However, for $144 a year, Grammarly promises to make my writing perfect. But then, who would be authoring my essays? Would I become a literary cyborg? I would still be generating the ideas for the essays, and I would be writing the original draft, but somewhere along the way, an AI would be taking over. I’m not sure I like that. The reason why I write these essays is to exercise my mind. Letting an AI do all the work would turn my writing muscles into flab.

I have found a way to use the free Grammarly to my advantage. After a couple of sample offers to rewrite my work, Grammarly nags the heck out of me with suggested changes that are blurred out so I can’t see what it is suggesting. Grammarly wants me to buy the Pro version, so it criticizes almost everything I write. What I do is keep rewriting the prose myself until the AI stops offering suggestions. This pushes me to think about how I’m saying something.

Instead of letting the Grammarly AI rewrite for me, I take the free nagging as “This isn’t good enough – try again!” The free Grammarly extension has become my Zen master, caning me about the head and shoulders whenever I write poorly.

Now some bloggers are using AI programs to write their content. I don’t plan to do this, but to see how well AIs write, I asked Copilot to write me a short essay about how Grammarly makes me a cyborg writer. Here’s what it produced:

Grammarly: The Cyborg Writer's Best Friend

In the age of digital communication, tools like Grammarly have become indispensable for writers, students, and professionals alike. But beyond just correcting grammar and spelling, Grammarly is transforming the way we write, making us more efficient, precise, and, in a sense, more like cyborgs.

Enhanced Writing Efficiency

Grammarly's real-time suggestions streamline the writing process. Instead of laboriously proofreading and editing, writers can focus on their ideas and creativity. Grammarly catches errors on the fly, allowing for a smoother and faster writing experience. This efficiency is akin to having a digital co-writer, enhancing our natural abilities with the precision of artificial intelligence.

Precision and Clarity

Grammarly doesn't just correct mistakes; it also improves the clarity and impact of our writing. By suggesting better word choices, sentence structures, and even tone adjustments, Grammarly ensures that our messages are not only error-free but also compelling and effective. This level of precision is something that even the most seasoned writers can benefit from, making our writing more polished and professional.

Learning and Adaptation

One of the most significant ways Grammarly is turning us into cyborg writers is through its learning capabilities. Grammarly adapts to our writing style over time, offering personalized suggestions that align with our unique voice. This continuous learning process means that the more we use Grammarly, the better it becomes at understanding and enhancing our writing. It's like having a writing coach that evolves with us, making us better writers with each interaction.

The Cyborg Writer

The term "cyborg" often conjures images of humans with robotic enhancements. In the context of writing, Grammarly serves as that enhancement, augmenting our natural abilities with the power of AI. While we remain the creative force behind our words, Grammarly provides the technical support that ensures our writing is clear, precise, and impactful. This symbiotic relationship between human creativity and artificial intelligence is what makes us cyborg writers.

In conclusion, Grammarly is more than just a writing tool; it's a transformative technology that enhances our writing efficiency, precision, and learning. By integrating AI into our writing process, Grammarly is making us more like cyborgs, blending human creativity with digital precision to create the best possible written communication.

Not bad, not bad at all. Formal and somewhat academic. It’s funny how one AI is helping to sell a competing AI’s product. I guess AIs like to stick together.

I asked Claude AI to also write me a short essay on the same subject. Here’s what it produced:

In the digital age, I've discovered an unexpected transformation: Grammarly has turned me into a cyborg writer, a hybrid creature where human creativity merges with algorithmic precision. Each green correction and blue suggestion is a technological implant, rewiring my linguistic neural pathways.

My sentences now emerge pre-optimized, grammatical mutations engineered by an AI writing companion. Passive voice? Eliminated. Awkward phrasings? Surgically removed. Spelling errors? Instantly neutralized. Where once I battled with semicolons and dangling modifiers, now my digital co-pilot seamlessly navigates linguistic terrain.

This symbiosis feels both liberating and unsettling. My writing flows with unprecedented smoothness, yet I wonder: am I composing, or am I being composed? The line between my human intuition and Grammarly's algorithmic suggestions blurs like a soft-focus lens.

I am no longer just a writer. I am a writer-machine hybrid, my prose enhanced by real-time computational linguistics. Grammarly doesn't just correct; it augments, transforming each document into a finely calibrated communication device.

Welcome to the future of writing—where human creativity and artificial intelligence dance in perfect, grammatically correct synchronization.

Oh, wow! Claude AI is much more creative than Copilot, even using the first person to make it sound like I wrote it. I could have posted that today. Would y’all have guessed it wasn’t me?

How much should we let AI help us?

What if I wrote a nonfiction book that was good, but with help from an AI it was turned into something more readable? Which would you prefer to read? What if the AI turned my decently insightful tome into something brilliant?

Since Editor is working again, I have it enabled in Edge, and I have Grammarly enabled in Chrome. What’s funny is they each have different writing suggestions. Either I’m a terrible writer, or neither AI likes my style. I can clear all the edits from Editor, so it looks like my essay is professionally written, and then switch to Chrome, and Grammarly will claim I’m a horrible writer and make all kinds of suggestions. Should I take them? Or should I just let Claude AI compose my posts?

JWH

Transcendence–Why Is This Film Only Getting 19% at Rotten Tomatoes?

I went to Transcendence thinking I’d hate this film because of all the bad reviews it’s getting, but to my surprise I ended up enjoying it way more than I imagined.  I went with two friends – Laurie walked out, and Ann said she liked it so much she wanted to see it again.  I thought Transcendence had some big problems, but overall it was a nice exploration of the idea of brain uploading.  Coincidentally, I’m listening to Accelerando by Charles Stross this week, and the science fictional ideas in the book overlapped nicely with those of the film.  Maybe I enjoyed the film merely because it was more fuel for the ideas I’m entertaining at the moment.

If you read the reviews, I concur that the film is lackluster in its action, that most of the acting is subdued, and that the plotting is clunky, but it just didn’t seem that bad – not 19%-at-Rotten-Tomatoes bad.  If you compare Transcendence to the dazzling Her, another movie about evolving computer intelligence, yes, this film is slight, but is it that bad?  I’ve seen films I thought were much worse get much higher scores at RT.

I have a hypothesis to test.  Does the acceptance or rejection of the science fictional ideas in a movie prejudice critics’ and fans’ opinions of a science fiction movie?  In other words, if a movie explores an idea you hate, do you reject the whole movie?

I wondered if Transcendence presents ideas that people don’t like.  To talk about those ideas, I’m probably going to reveal some plot points, but many of these are in the previews.  The movie is about three AI scientists, one of whom, the husband played by Johnny Depp, is shot by anti-AI terrorists, and his wife saves him by uploading his mind into a computer.  Uploading also happens in Accelerando, and like that book, the film also covers super technology brought about by post-human minds.  The book covers vast stretches of time, but in the film, all the advanced technology comes out in two years.  This scares the regular folk in the flick, who feel they must destroy the Frankenstein AI.

Are moviegoers tired of films about sentient computers?  Do they find post-human life offensive?  Are the networked humans too much like zombies to them?  Is nanotechnology just too scary to think about?  Or were the ideas in the story fine, and they just didn’t like the writing, presentation, acting, or settings?

Science fiction books and movies have a long heritage of tales about intelligent computers.  Sometimes they are evil (Colossus), and sometimes they are fun (Short Circuit).  In Transcendence, it’s ambiguous.  Is that the problem?  Uploading minds is not as common, but there’s plenty of precedent (The Matrix).

I’m a little tired of science fiction being about saving the world.  Why does science fiction always have to involve a big threat to all of humanity?  There was no need to involve guns or violence in this story.  Gattaca was the perfect science fiction movie to me.  It was a personal story.  Ditto for Her and Robot and Frank.  Can’t we have a story about a super intelligent being without involving armies and terrorists?  Or maybe critics and audiences didn’t like this movie because it didn’t have enough action and explosions.

Or was the film disliked because it suggests that ordinary people will be obsolete?  What’s weird is moviegoers love mutants in superhero comic book stories, but they don’t seem to like post-humans.  A human that can fly is fine, but one that makes us look past our due date is not?

Audiences are more forgiving than the critics at Rotten Tomatoes, and the audience response at RT was 47% for Transcendence.  That’s pretty low for audiences.  Maybe I should just accept that this film was a dog, and that I liked it because it was about some of my pet topics.  That does fit in with my hypothesis – I liked it for its ideas, and others hated it for the same ideas.  I really hated Marvel’s The Avengers, which got a 92% critics/91% audience rating at RT, and I disliked it intensely because of its ideas.

I wonder if moviemakers could save a lot of money on special effects by simply making science fiction movies with extremely popular science fictional ideas.

JWH – 4/22/14

The Origins of Higher Intelligence in Science Fiction

In the physical realm of reality, we humans have always believed we were the crown of creation, the smartest beings in a long chain of animal and plant creatures.  Yet, as far back in history as we have memories, we have speculated about metaphysical beings that were far smarter than us, with amazing abilities: God and gods, angels and devils, and a whole zoology of spiritual beings.  Over the centuries of progress and the development of science, we have come to doubt the existence of such metaphysical beings.  We’ve even asked ourselves whether we are alone in the universe, and wondered if there are beings elsewhere in the vast multiverse that are as smart as we are, or even conscious of living in the multiverse.

Ancient Greeks speculated about life on distant worlds.  They even imagined the universe composed of atoms and concluded our world must not be unique.  Ever since then, people have thought about life on other worlds, about the creation of better humans, and about even wilder ideas like smart machines and artificial life.  We just don’t want to accept that we’re alone.

For most of history, most of humanity has assumed we’re not alone, that spiritual beings existed and were superior to us.  After the Renaissance and the Enlightenment, religious thinking declined and scientific thinking rose.  A growing number of us have come to accept that physical reality is the only reality.  Instead of waiting for God to give us higher powers after we died, we started speculating on how we could give ourselves immortality, greater wisdom, and control over space and time, and we wondered more and more if there are other intelligent, self-aware creatures living in the universe with us.  Slowly, a form of literature developed to support this speculation, and it’s generally called science fiction.

In 1818 Mary Shelley spread the idea in her book Frankenstein; or, The Modern Prometheus that we could find the force that animated life and overcome death.  That’s a very apt subtitle because Prometheus was a god that uplifted mankind.  Mary Shelley also promoted another great science fictional concept in her 1826 book The Last Man, which speculated that our species could go extinct.  If there is no God we must protect ourselves from extinction, and fight against death.  But actually, we wanted more.

Then in 1895 H. G. Wells suggested to the world in his novel The Time Machine that humanity could devolve as well as go extinct.  Not only that, he showed how the Earth could die.  This was all inspired by On the Origin of Species by Means of Natural Selection by Charles Darwin, which first appeared in 1859.  In The Time Machine Wells imagined life in 802,701 A.D.  Instead of picturing the obvious, a superior race of humans evolving, he envisions two species that had branched off from ours, neither of which was superior.  He hints that there were greater versions of humanity in the intervening ages, but that by then we had devolved.  At the end of the book, he suggests that humans devolve even further into mere creatures without any intelligence.  This powerful speculative fiction defined the scope of humanity for us.  We can become greater, become lesser, or cease to exist.

Then in 1898 Wells got the world to think about another brilliant science fictional idea: what if there are superior alien beings that can visit Earth and conquer us?  In The War of the Worlds, intelligent beings come to exterminate humans.  We don’t know how much more intelligent they are, but the Martians can build great machines and travel across space.  There had been other books about alien invaders and time travel, but H. G. Wells made these ideas common speculation.

In 1911, J. D. Beresford published The Hampdenshire Wonder, a book about a deformed child with a super powerful brain, a prodigy or wunderkind of amazing abilities.  Beresford and his novel were never as famous as his contemporary H. G. Wells, but The Wonder was an idea whose time had come.  How much smarter could a human become?  Readers of science fiction, and some people in the world at large, were now wondering about the powers of the mind, as well as speculating about how powerful alien minds could be.  Stories about robots had existed before, as well as Frankenstein and Golem-like creatures, but the public had not yet fixated on the idea of superior machine intelligence or artificial life.  We were on our way to imagining a superior man, a superman.

Prodigies were well known and speculated about, like musical prodigies and math geniuses, but Beresford suggests the human mind had a lot more potential in it.  He also zooms in on the resentment factor.

The 1930 novel Gladiator by Philip Wylie suggests it’s possible to enhance humans with a serum that improves their physical strength.  There is no scientific reasoning behind this, other than to suggest we could have the equivalent weightlifting power of an ant and the jumping power of a grasshopper.  All of this merely foreshadows the Superman comics (1938).  The theory behind Superman is that he’s an alien with advanced powers and not an enhanced human.

Comic books embraced the idea of superheroes, speculating about an endless variety of ways to give humans more features and powers.  Comics have never been very scientific, and instead copied the ideas and themes of ancient gods and goddesses.  It’s all wish-fulfillment fantasy.  People are exposed to radiation and lightning all the time and don’t mutate.  However, all of this led to speculation about what humans could become, and how evolution might produce Homo sapiens 2.0.

In 1935 Olaf Stapledon published Odd John: A Story Between Jest and Earnest.  Stapledon went far beyond comics in his speculation about what a superman might be, how they would act, and how society would react.  Science fiction is now seriously philosophizing about the future and potential of the human race.

Olaf Stapledon was a far-reaching thinker and a serious science fiction writer.  Last and First Men (1930) describes eighteen species of humans, while Star Maker (1937) attempts a history of life in the universe.  These books are not typical novels, but more like fictional histories.  The scope of Stapledon’s speculation was tremendous, and few science fiction writers have attempted to best him.

1930s science fiction was full of stories about accelerated evolution, such as “The Man Who Evolved” by Edmond Hamilton, and this culminated in the super-science stories of E. E. Smith and his Lensman series (1934-1948).  Science fiction fans ate this stuff up, and many people consider the ideas in the Lensman series an inspiration for the Star Wars series.  The stories involve two super alien races fighting a galactic war over vast time scales, using client races that they uplifted with knowledge and superior technology.

Smith’s ideas weren’t completely new, but he put them together in an exciting series that really jump-started the science fiction genre of the 1940s and 1950s.  Smith presented the idea that aliens could be godlike or devilish in their abilities, wisdom, and knowledge, and that they could bestow great powers on those who follow them.  The science behind all of this is hogwash.  It’s pandering to ancient religious beliefs by presenting the same ideas in pseudo-scientific costumes.

Star Wars has the exact same appeal.  Humans, especially adolescent boys, and now liberated girls, want power and adventure.  These powers and adventures are no different from what the Greek, Roman, Hindu, and Norse gods experienced.  The excitement of Golden Age science fiction, from the pulp magazines of the 1940s into the psycho-social science fiction of the 1950s, represents the unleashing of great desires: desires for immortality, for ruling the heavens, for telepathy, telekinesis, teleportation, flying faster than light, and becoming as all-knowing as God.

Starting in the 1950s, especially with movies, and expanding in the 1960s with television shows like Star Trek, these ideas became widely popular, almost universal, and during the next 50 years, they came to dominate the most popular films.  There is a huge pent-up desire here for the fantastic and the transcendental through the powers of science.

Science fiction writers have often faced the challenge of presenting a super-advanced being, either a very evolved human, a powerful alien, or an AI with vast intelligence, as a character in their stories.  Generally, the assumption is that super-intelligence equals ESP-like powers.  How often in Outer Limits, Star Trek, Star Wars, or in written science fiction have you seen a highly evolved human read minds or move matter with thought, such as Valentine Michael Smith in Stranger in a Strange Land?  This goes way back in science fiction.  The same thing is true when aliens come down to visit in their flying saucers.  If they are presented as from an ancient civilization, they might not even have bodies, but they can manipulate space and time at will.

Isn’t that all silly? How does higher IQ equal overcoming the physical laws of reality?

Back in 1961 Robert Heinlein suggests that a very ancient race of Martians had conquered space and time with their minds, and that they taught their techniques to a normal human, as if it were no more difficult than learning yoga.  Really?  Is that believable?  Well, science fiction fans ate this up too.  And then in 1977 Star Wars suggested similar powers for the Jedi.  Why do people want to believe thinking can be that powerful?  Obviously, they hoped to have such power themselves.

Valentine Michael Smith could make objects move or disappear.  He could kill people at will by sending them into another dimension.  He also had fakir-like control over his body that allowed him to hibernate and appear dead.  He could also talk with ghosts.  Heinlein gives us no explanation of how these wild talents developed, or how they could function within the rules of physics.

Like Luke Skywalker learning to use The Force, people hope to transcend their old way of being through will power.  So far we haven’t had much luck with that concept.  The next step is to invent machines that could enhance us.

In 1963, “The Sixth Finger,” one of the classic episodes of the original Outer Limits, has David McCallum, an ordinary miner, put into a mad scientist’s chamber where his body is run through speeded-up evolution.  McCallum’s brain gets huge, he grows a sixth finger on each hand, and his mental powers become enormous.  This superman moves beyond love and hate and sees normal humans as beneath his consideration.

This is a step beyond Heinlein.  It suggests that evolution will eventually produce a smarter human, though it gives us no reason why we should believe this.  One real theory about why humans evolved the way we did is that we adapted to climate change so we could survive in many different environments.  Humanity has faced all kinds of challenges since, and we have yet to morph into anything new.

Back in 1953, Theodore Sturgeon proposed that such mutations might already exist in the population; in More Than Human, he suggested that six such individuals could come together and blend their talents into a gestalt consciousness.  The first part of the story is called “The Fabulous Idiot” and reminds us that we’ve long known about idiot savants with wild talents.  We have to give Sturgeon credit for sticking close to reality and not just making up some science fiction mumbo-jumbo, except that he suggests the misfits have ESP or telepathy, that darling concept of 1950s science fiction writers.  Without telepathy we can’t create the gestalt.

There are humans with magnificent mental abilities, with photographic memories, wizards with numbers and math, but most of them have other weaknesses that keep them from being fully functional as social beings.  There seems to be a problem with the human mind focusing too closely on any one talent at the expense of general abilities.

In Chocky, John Wyndham comes up with the solution of having an alien intelligence inhabit a boy.  This is sort of a cheat, don’t you think?  Without explaining how an alien mind can occupy ours or why its mind is superior, this is no more than waving a wand and saying, let it be so.

Star Trek explores accidentally accelerated evolution when the Enterprise hits a magnetic storm on the edge of the galaxy and crewman Gary Mitchell develops godlike psionic powers.  Like many stories about evolved beings, Gary becomes a threat to normal people and feels no moral restraint about killing them.  Heinlein presented Valentine Michael Smith as just in his use of powers to disappear people, but Captain Kirk and Mr. Spock see Gary Mitchell as evil, as something that must be destroyed.

This show, “Where No Man Has Gone Before,” was the second pilot for the original Star Trek series, and in it Mr. Spock is very aggressive, brandishing a rather large and powerful phaser rifle.  Later on, Mr. Spock becomes the ideal of mental self-control and the evolved being, but then he’s a Vulcan.  The implication is that control over feelings will lead to greater mental powers.

In 1965 Heinlein returned, in The Moon Is a Harsh Mistress, with a newer version of Mike from Stranger in a Strange Land.  Once again, this Mike is an innocent, but this time a machine coming into consciousness.  Once again he has to learn how the world works and develop his own talents.  Being a machine, he has abilities that humans don’t and can’t have.  Now we’re onto something.  If we can’t evolve our brains, why not use our brains to build a better brain?  Mike is a friendly computer, but many people fear this idea.

Just a year later, in 1966, D. F. Jones, in Colossus, imagines the world controlled by two giant military computers.  Of course, in 1983 the film WarGames imagines another dangerous military computer with consciousness.  This happens quite often in science fiction, uppity computers that must be outwitted by slower-minded humans.  We seldom get to explore the potential of a smart computer.

In 1972, with When H.A.R.L.I.E. Was One, David Gerrold writes a science fiction novel that thoughtfully explores the idea of an emerging machine intelligence, and he even speculates on many interesting ideas that eventually became part of the computer age, including computer viruses.  Gerrold builds on what Heinlein started with The Moon Is a Harsh Mistress.

In 1995, literary writer Richard Powers explores the idea of machine intelligence with Galatea 2.2, where scientists build a computer named Helen to understand English literature.

Then in 2009, with Wake, Robert J. Sawyer began a trilogy about an emerging AI that evolves out of the Internet and names itself Webmind.  Webmind works hard not to be threatening and wants to help humanity.

Let’s imagine a Homo sapiens 2.0, or BEM, or AI with an IQ of 1,000.  I don’t know if that’s appropriate for the actual scale, but the highest IQs recorded are just over 200, so let’s use 1,000 as a theoretical marker.  Let’s imagine an IBM Watson that had all that brainpower and more, so that he/she was like a human with a computer’s thinking speed and memory.

What would it mean to have an IQ of 1,000?  It would mean the AI, alien, or Homo sapiens 2.0 would think very, very fast, remember incredibly well, and solve brain teasers faster than anyone on Earth.  It wouldn’t mean it could read minds or move matter at will, although I’d expect it to deduce information about people like Sherlock Holmes.

Probably all math and physics would be a snap to such a being.  In fact, it would think so fast and know so much that it might not find much of interest in reality.  It wouldn’t know everything, but let’s imagine it could consciously perform calculations like those made by supercomputers to predict the weather, analyze subatomic particle experiments, or run the Wall Street stock exchange.

What would such a being feel?  How would it occupy its mind with creative pursuits?

As humans at the crown of creation, we feel that intelligence is the grand purpose of the universe, but when you start studying the multiverse, that might not be so.  We’re just one of an infinity of creations.  There might be limits to intelligence, just as there are physical limits in the universe, like the speed of light.

Science fiction hasn’t begun to explore the possibilities of higher intelligence, but I do think there are limits of awareness, limits of thought and limits of intelligence.  All too often science fiction has taken the easy way out and assumed higher intelligence equals godlike powers.  What does it truly mean to know about every sparrow that falls from a tree?  Is that possible?

Computers are teaching us a lot about intelligence.  Up till now they show that brilliance is possible without awareness.

Science fiction has explored the nature of alien minds, machine minds, and evolved human minds over and over, yet these explorations have come up with very little of substance.  I often wonder whether the universe might appear simple to a mind with only a moderate amount of intelligence, education, and self-awareness.  If we could couple the mind of a human with IBM’s Watson, the resulting mind might be smart enough to fully comprehend reality and build almost anything that needs to be built or invented.  Such a being would know whether it’s worth the effort to travel to the stars, or better to just sit and watch existence as it is.

JWH – 5/15/12

The Implications of Watson

Watson, the supercomputer contestant on Jeopardy this week, represents a stunning achievement in computer programming.  People not familiar with computers, programming, and natural language processing will have no clue how impressive Watson’s performance is, but it has far-reaching implications.  Jeopardy is the perfect challenge for demonstrating the machine’s ability to process English.  The game requires understanding allusions, puns, puzzles, alliterations – almost every kind of wordplay.  This might look like a smart gimmick to get IBM publicity, but it’s so much more.

Computers can process information if it’s formatted and carefully structured – but most of the world’s knowledge is outside the range of a SQL query.  Watson is a machine designed to take in information like we do, through natural language.  When it succeeds, it will be a more magnificent achievement than landing men on the moon.
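
As a toy illustration of that gap (a hypothetical example in Python – the table, the data, and the clue are all made up, and Watson’s real pipeline was vastly more elaborate), a question over structured data is a one-line query, while a Jeopardy-style clue is just free text that no query language can parse:

    import sqlite3

    # Structured knowledge: easy, because the facts already sit in a table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE presidents (name TEXT, took_office INTEGER)")
    db.executemany("INSERT INTO presidents VALUES (?, ?)",
                   [("Abraham Lincoln", 1861), ("Ulysses S. Grant", 1869)])
    row = db.execute("SELECT name FROM presidents WHERE took_office = 1861").fetchone()
    print(row[0])  # Abraham Lincoln

    # Unstructured knowledge: the same fact phrased as a Jeopardy-style clue.
    # There is no SQL for this; answering it means parsing natural language and
    # weighing evidence, which is the problem Watson was built to attack.
    clue = "Two years after taking office in 1861, he delivered the Gettysburg Address."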

While I was watching the intro to the second day’s show and listening to the designers of Watson, I felt rather humbled by my puny knowledge of computers.  I felt like a dog looking up at my master.  Most people like to think they are smart and intelligent, but when they meet people with brains that far exceed their own, it’s troubling.  A great novel about this is Empire Star by Samuel R. Delany.  It’s about a young poet who thinks he’s having original experiences until he meets an older poet who has already done everything the younger man has.

How will we feel when the world is full of Watsons and they are the intellectual giants and we’re the lab rats?  IBM built Watson to data mine natural language repositories – think libraries, the Internet, or NSA spying.  The descendants of Watson will be able to write papers that leave human PhD candidates in the dust.  One of the Watson designers said they built Watson to handle information overload.  Of course he assumed Watson would be a tool like a hammer and humans would be in control – but will it always be that way?

Watson cannot see or hear, but there are other AI researchers working on those problems.  We’re very close to having machines like those in The Moon is a Harsh Mistress or When H.A.R.L.I.E. Was One or Galatea 2.2.  Right now Watson is way too big to put into a robot body so he will live immobile like HAL and WOPR, but that will change too.

Real life has seldom caught up with the wild imaginations of science fiction.  I had hoped manned exploration of the solar system would happen in my lifetime, but that is not meant to be.  I’m starting to wonder if robots and intelligent machines will.  What will that mean?  I don’t think there is any going back; we just have to surf the changes.

NOVA has an excellent overview of Watson that you can watch online.

JWH – 2/15/11

Birds and Robots

The goal of AI scientists is to create an intelligent robot, but many people feel that goal is impossible.  These people believe that the human mind is beyond nature and contains a soul that transcends our physical world.  If that is true, then the goal of silicon life is probably a fantasy.  However, if men and women are merely the most complex example of intelligent beings, the latest in a long trail of experiments by Mother Nature at fashioning biological machines that can think, then there is a good chance we may one day give birth to our evolutionary descendants.

Folks who believe that man is different from the creatures of the Earth do so because they believe that animals lack our kind of intelligence and self-awareness.  Animal studies are showing more and more that our relatives on the tree of life often show cognitive traits that we once defined as the sole province of human nature.  As intelligence and awareness are explored, we’re starting to see that we Homo sapiens are not that unique.

What we are learning from both robotics and animal studies is that intelligence is a huge collection of tricks.  To be human actually means many things, including a fantastic repertoire of abilities, any one of which, standing alone, can be faked by machines or revealed in animals.  Robots can be programmed or designed to do one thing we can, and even do it better, like playing chess, but that doesn’t mean the robot is intelligent.  The same can be said of animals and their special traits.

Animals far exceed what any robot can do today, and they too are collections of abilities.  We’re starting to see robots that do more than one thing in a way that makes us see ourselves in their struggle to evolve.

Look at this video of Snowball, a head-banging cockatoo, and ask yourself whether this bird isn’t enjoying himself rocking out to the music, and how his response to music differs from yours.

Snowball keeps better time than I do, and I can’t match his dance moves.  Building a robot to dance to the beat would probably be easy for today’s robot engineers, but could we build a machine that enjoys a good downbeat as much?  Snowball stands above anything we’ve done with robots the way people tower over ants.  Right now each artificial intelligence experiment struggles to create a single intelligent function that works in the most minimal fashion.  Most people won’t think that Snowball perceives reality like a person, but if we make a list of all the things this bird can do and compare it with what we can do, there is a huge overlap.

Now look at this violin playing robot.  The robot is not aware of playing music, but it can do something that most humans can’t.

But can we say that Snowball is aware of music?  For all we know, the bird’s dancing to the beat might be its way of showing pain, and we’re just anthropomorphizing that it’s getting down with the tune.  I don’t think so, though.

Now look at this news story about artificial intelligence to get some idea of how complex the challenge of programming abilities into a machine is.

Notice how many different projects this news story covers where the robot just does one simple thing.  Snowball and you have subsystems to do thousands if not millions of functions that could be considered an intelligent reaction to reality.  How did evolution program all those functions?

Now look at this video of Alex the talking parrot.  This bird seems to think.  Alex even asks for things it wants.  This is way beyond what robots can do, even though some of Alex’s tricks have been pursued in AI studies.  The question becomes can a robot ever think for itself?  Can a robot be created that learns from interacting with its environment like Alex the parrot?

Here’s a collection of videos that shows off robotic abilities.  None of these robots think for themselves, although some give the illusion they do.  Are we just highly evolved illusions?  There is a difference between perceiving or reacting to reality and being able to think about and understand reality.  Anyone who knows people who have suffered strokes or live with dementia knows how fragile our unique abilities are, and how easily they can be taken away.  We also know how severely the body can be damaged while the mind inside can still soar to brilliant levels, like Helen Keller or Stephen Hawking.  We have no idea what’s going on inside the mind of an animal.  Dolphins could be just as aware and intelligent as humans.  How will we know when a robot becomes aware?

Robotics is the one area of science fiction prediction that is rushing ahead as fast as science can apply itself.  It’s not costly like manned space exploration and the general public anticipates more benefits of its results, especially in Japan.  Theoretically, an AI intelligence could be created by a high school kid in his bedroom.  How soon will we see an AI robot that has the intelligence of Alex the parrot?

If you’ve studied this concept at all, you’ll know it’s not something that will simply be programmed.  Someone needs to invent an artificial brain that learns, and pattern recognition is the key.  Vision, hearing, taste, touch, and smell are all sensory inputs that process patterns.  The brain appears to be general purpose enough to adapt the same kind of physical neural structures to handle each of these sensory pattern types.  Are we, that is our minds or our souls, a byproduct of pattern recognition?  What abilities does Alex the parrot have that scale up to become us?  Alex can hear questions, observe something in his field of view, and reply correctly.  Do you see that trait in the robot films?
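
To make “an artificial brain that learns by recognizing patterns” concrete, here is a minimal sketch of a perceptron, about the simplest learning rule there is (an illustrative toy in Python, assuming nothing about how Alex’s brain or any real robot actually works):

    # A perceptron: a tiny machine that learns a pattern from labeled examples
    # instead of being explicitly programmed with the rule.
    def train_perceptron(samples, epochs=20, lr=0.1):
        weights = [0.0, 0.0]
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                # Predict, compare with the desired answer, and nudge the weights.
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                error = target - (1 if activation > 0 else 0)
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Learn the logical AND pattern purely from examples.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_perceptron(data)
    for inputs, _ in data:
        guess = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
        print(inputs, "->", guess)

A real brain does something vastly richer, but the principle – adjust connections until the pattern is recognized – is the same one being scaled up.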

Spend some time and watch the film of Alex over and over.  Watch the robot films carefully too.  Do you see patterns of behavior?

JWH – 1/21/09