A Painful Challenge to My Ego

by James Wallace Harris, 2/26/24

I’m hitting a new cognitive barrier that stops me cold. It’s making me doubt myself. I’ve been watching several YouTubers report on the latest news in artificial intelligence, and I’ve been amazed by their ability to understand and summarize a great amount of complex information. I want to understand the same information and summarize it too, but I can’t. Struggling to do so wounds my ego.

This experience is forcing me to contemplate my decaying cognitive abilities. I had a similar shock ten years ago when I retired. I was sixty-two and training a woman in her twenties to take over my job. She blew my mind by absorbing the information I gave her as fast as I could tell her. One reason I chose to retire early was that I couldn’t learn the new programming language, framework, and IDE that our IT department was making standard. That young woman was learning my servers and old programs, in a language she didn’t know, at a speed that shocked and awed me. Even then, my ego sensed something was up: it was obvious this young woman could think several times faster than I could. I realized that’s what getting old meant.

I feel like a little aquarium fish that keeps bumping into an invisible barrier. My Zen realization is I’ve been put in a smaller tank. I need to map the territory and learn how to live with my new limitations. Of course, my ego still wants to maximize what I can do within those limits.

I remember as my mother got older, my sister and I had to decide when and where she could drive because she wouldn’t limit herself for her own safety. Eventually, my sister and I had to take her car away. I’m starting to realize that I can’t write about certain ideas because I can’t comprehend them. Will I always have the self-awareness to know what I can comprehend and what I can’t?

This makes me think of Joe Biden and Donald Trump. Both are older than I am. Does Biden realize what he’s forgotten? Does Trump even understand he can’t possibly know everything he thinks he knows? Neither guy wants to give up because of their egos.

So, what am I not seeing about myself? I’m reminded of Charlie Gordon in the story “Flowers for Algernon,” when Charlie was in his intellectual decline phase.

Are there tools we could use to measure our own decline? Well, that’s a topic for another essay, but I believe blogging might be one such tool.

JWH

ChatGPT Isn’t an Artificial Intelligence (AI) But an Artificial Unconsciousness (AU)

by James Wallace Harris, 2/12/24

This essay is for anyone who wants to understand themselves and how creativity works. What I’m about to say will make more sense if you’ve played with ChatGPT or have some understanding of recent AI programs in the news. Those programs appear to be amazingly creative by answering ordinary questions, passing tests that lawyers, mathematicians, and doctors take, generating poems and pictures, and even creating music and videos. They often appear to have human intelligence even though they are criticized for making stupid mistakes — but then so do humans.

We generally think of our unconscious minds as mental processes occurring automatically below the surface of our conscious minds, out of our control. We believe our unconscious minds are neural functions that influence thought, feelings, desires, skills, perceptions, and reactions. Personally, I assume feelings, emotions, and desires come from an even deeper place, are based on hormones, and are unrelated to unconscious intelligence.

It occurred to me that ChatGPT and other large language models are analogs for the unconscious mind, and this made me observe my own thoughts more closely. I don’t believe in free will. I don’t even believe I’m writing this essay. The keyword here is “I” and how we use it. If we use “I” to refer to our whole mind and body, then I’m writing the essay. But if we think of the “I” as the observer of reality that comes into being when I’m awake, then probably not. You might object to this strongly because our sense of I-ness feels obviously in full control of the whole shebang.

But what if our unconscious minds are like AI programs? What would that mean? Those AI programs train on billions of pieces of data, taking a long time to learn. But then, don’t children do something similar? These AI programs work by being prompted with a question. If you play a game of Wordle, aren’t you prompting your unconscious mind? Could you write a step-by-step flowchart of how you consciously solve a Wordle game? Don’t your hunches just pop into your mind?

If our unconscious minds are like ChatGPT, then we can improve them by feeding in more data and giving them better prompts. Isn’t that what we do when studying and taking tests? Computer scientists are working hard to improve their AI models. They give their models more data and refine their prompts. If they want their model to write computer programs, they train it on more computer languages and programs. If we want to become architects, we train our minds with data related to architecture. (I must wonder about my unconscious mind; it’s been trained on decades of reading science fiction.)

This would also explain why you can’t easily change another person’s mind. Training takes a long time. The unconscious mind doesn’t respond to immediate logic. If you’ve trained your mental model all your life on The Bible or on investing money, it won’t be immediately influenced by new facts about science or economics.

We live by the illusion that we’re teaching the “I” function of our mind, the observer, the watcher, but what we’re really doing is training our unconscious mind like computer scientists train their AI models. We might even fool ourselves that free will exists because we believe the “I” is choosing the data and prompts. But is that true? What if the unconscious mind tells the “I” what to study? What to create? If the observer exists separate from intelligence, then we don’t have free will. But how could ChatGPT have free will? Humans created it, deciding on the training data and the prompts. Are our unconscious minds creating artificial unconscious minds? Maybe nothing has free will, and everything is interrelated.

If you’ve ever practiced meditation, you’ll know that you can watch your thoughts, which is proof that the observer is separate from thinking. Twice in my life I’ve lost the ability to use words and language, once in 1970 because of a large dose of LSD, and about a decade ago with a TIA. In both events I observed the world around me without words coming to mind. I just looked at things and acted on conditioned reflexes. That let me experience a state of consciousness with low intelligence, one like animals know. I now wonder if I was cut off from my unconscious mind. If that’s true, it implies language and thoughts come from the unconscious mind, not from what we call conscious awareness, and that the observer and intelligence are separate functions of the mind.

We can get ChatGPT to write an essay for us, and it has no awareness of its actions. We use our senses to create a virtual reality in our head, an umwelt, which gives us a sensation that we’re observing reality and interacting with it, but we’re really interacting with a model of reality. I call this function that observes our model of reality the watcher. But what if our thoughts are separate from this viewer, this watcher?

If we think of large language models as analogs for the unconscious mind, then everything we do in daily life is training for our mental model. Then does the conscious mind stand in for the prompt creator? I’m on the fence about this. Sometimes the unconscious mind generates its own prompts, sometimes prompts are pushed onto us from everyday life, but maybe, just maybe, we occasionally prompt our unconscious mind consciously. Would that be free will?

When I write an essay, I have a brain function that works like ChatGPT. It generates text but as it comes into my conscious mind it feels like I, the viewer, created it. That’s an illusion. The watcher takes credit.

Over the past year or two I’ve noticed that my dreams are acquiring the elements of fiction writing. I think that’s because I’ve been working harder at understanding fiction. Like ChatGPT, we’re always training our mental model.

Last night I dreamed a murder mystery involving killing someone with nitrogen. For years I’ve heard about people committing suicide with nitrogen, and then a few weeks ago Alabama executed a man using nitrogen. My wife and I have been watching two episodes of Perry Mason each evening before bed. I think the ChatGPT feature in my brain took all that in and generated that dream.

I have a condition called aphantasia, which means I don’t consciously create mental pictures. However, I do create imagery in dreams, and sometimes when I’m drowsy, imagery and even dream fragments float into my conscious mind. It’s like my unconscious mind is leaking into the conscious mind. I know these images and thoughts aren’t part of conscious thinking. But the watcher can observe them.

If you’ve ever played with the AI program Midjourney that creates artistic images, you know that it often creates weirdness, like three-armed people, or hands with seven fingers. Dreams often have such mistakes.

When AIs produce fictional results, the computer scientists say the AI is hallucinating. If you pay close attention to people, you’ll know we all live by many delusions. I believe programs like ChatGPT mimic humans in more ways than we expected.

I don’t think science is anywhere close to explaining how the brain produces the observer, that sense of I-ness, but science is getting much closer to understanding how intelligence works. Computer scientists say they aren’t there yet, and are planning for AGI, or artificial general intelligence. They keep moving the goalposts. What they really want are computers much smarter than humans that don’t make mistakes and don’t hallucinate. I don’t know if computer scientists care whether computers have awareness like our internal watchers, that sense of I-ness. Sentient computers are something different.

I think what they’ve discovered is intelligence isn’t conscious. If you talk to famous artists, writers, and musicians, they will often talk about their muses. They’ve known for centuries their creativity isn’t conscious.

All this makes me think about changing how I train my model. What if I stopped reading science fiction and only read nonfiction? What if I cut out all forms of fiction including television and movies? Would it change my personality? Would I choose different prompts seeking different forms of output? If I do, wouldn’t that be my unconscious mind prompting me to do so?

This makes me ask: If I watched only Fox News, would I become a Trump supporter? How long would it take? Back in the Sixties there was a catchphrase, “You are what you eat.” Then I learned a computer acronym, GIGO: “Garbage In, Garbage Out.” Could we say free will exists if we control the data we use to train our unconscious minds?

JWH

I’m Too Dumb to Use Artificial Intelligence

by James Wallace Harris, 1/19/24

I haven’t done any programming since I retired. Before I retired, I assumed I’d program for fun, but I never found a reason to write a program over the last ten years. Then, this week, I saw a YouTube video about PrivateGPT, which would allow me to train an AI on my own documents (.pdf, .docx, .txt, .epub). At the time I was researching Philip K. Dick, and I was overwhelmed by the amount of content I was finding about the writer. So a light bulb went off in my head: why not use AI to help me read and research Philip K. Dick? I really wanted to feed the six volumes of PKD’s collected letters to the AI so I could query it.

PrivateGPT is free. All I had to do was install it. I’ve spent days trying to install the dang program. The common wisdom is that Python is the easiest programming language to learn right now. That might be true. But installing a Python program with all its libraries and dependencies is a nightmare. What I quickly learned is that distributing and installing a Python program is an endless dumpster fire. I have Anaconda, Python 3.11, Visual Studio Code, Git, Docker, and Pip installed on three computers (Windows, Mac, and Linux), and I’ve yet to get anything to work consistently. I haven’t even gotten to the part where I’d need the Poetry tool. I can run Python code under plain Python and Anaconda and set up virtual environments in each. But I can’t get VS Code to recognize those virtual environments no matter what I do.

Now, I don’t need VS Code at all, but it’s so nice and universal that I felt I had to get it going. VS Code is so cool looking, and it feels like it could control a jumbo jet. I’ve spent hours trying to get it working with the custom environments Conda created. There’s some conceptual configuration I’m missing. I’ve tried it on Windows, Mac, and Linux, just in case it’s a messed-up configuration on a particular machine. But they all fail in the same way.
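If you’re fighting the same battle, one minimal sanity check, assuming the VS Code Python extension is installed, is to run a couple of lines from inside VS Code and see which Python interpreter it’s actually using:

    # Run this from VS Code's integrated terminal or with the Run button.
    # If the path printed isn't inside the Conda environment you created,
    # then VS Code is pointed at the wrong interpreter, and the code
    # itself isn't the problem.
    import sys

    print("Interpreter:", sys.executable)
    print("Version:", sys.version)

When it shows the system Python instead of the environment, the usual remedy is the “Python: Select Interpreter” command in the Command Palette, which tells the Python extension which environment to run and debug with.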

I decided I needed to give up on using VS Code with Conda commands. If I continue, I’ll just use the Anaconda prompt terminal on Windows, or the terminal on Mac or Linux.

However, the days of banging my head against a wall just so I could use AI might have taught me something. Whenever I think of creating a program, I think of something that will help me organize my thoughts and research what I read. I might end up spending a year just to get PrivateGPT trained to read and understand articles and dissertations on Philip K. Dick. Maybe it would be easier if I just read and processed the documents myself. I thought an AI would save me time, but it requires learning a whole new specialization. And if I did that, I might just end up becoming a programmer again, rather than an essayist.

This got me thinking about a minimalist programming paradigm, partly inspired by seeing the video “The Unreasonable Effectiveness of Plain Text.”

Basically, this video advocates doing everything in plain text and using the Markdown format. That’s the default format of Obsidian, a note-taking program.

It might save me a lot of time to just read the six volumes of PKD’s letters and take notes, rather than trying to teach a computer how to read those volumes and understand my queries. I’m not even sure I could train PrivateGPT to become a literary researcher.
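And if I go the plain-text route, even a tiny script could stand in for part of what I wanted from PrivateGPT. Here’s a minimal sketch, with pkd-notes as a made-up stand-in for wherever I’d keep my Markdown notes, that searches every note for a keyword and prints the matching lines:

    # A bare-bones stand-in for querying an AI: scan every Markdown note
    # in a folder for a keyword and print the lines that mention it.
    # "pkd-notes" is a hypothetical folder name, not a real project path.
    from pathlib import Path

    def search_notes(folder: str, keyword: str) -> None:
        keyword = keyword.lower()
        for path in sorted(Path(folder).rglob("*.md")):
            text = path.read_text(encoding="utf-8", errors="ignore")
            for number, line in enumerate(text.splitlines(), start=1):
                if keyword in line.lower():
                    print(f"{path.name}:{number}: {line.strip()}")

    if __name__ == "__main__":
        search_notes("pkd-notes", "Exegesis")

It isn’t intelligent, and it won’t understand a question the way PrivateGPT might, but it also doesn’t take days to install.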

Visual Studio Code is loved because it does so much for the programmer. It’s full of artificial intelligence. And more AI is being added every day. Plus, it’s supposed to work with other brilliant programming tools. But using those tools and getting them to cooperate with each other is befuddling my brain.

This frustrating week has shown me I’m not smart enough to use smart tools. It reminds me of a classic science fiction short story by Poul Anderson, “The Man Who Came Early.” It’s about a 20th-century man who is thrown back in time to the Vikings, around the year 1000 AD. He thinks he will be useful to the people of that time because he can invent all kinds of marvels. What he learns is that he doesn’t even know how to make the tools needed to make the tools that made the tools he was used to in the 20th century.

I can use a basic text editor and compiler, but my aging brain just can’t handle the more advanced modern programming tools, especially if they’re full of AI.

I need to solve my data processing needs with basic tools. But I also realized something else. My real goal was to process information about Philip K. Dick and write a summarizing essay. Even if I took a year and wrote an AI essay writing program, it would only teach me a whole lot about programming, and not about Philip K. Dick or writing essays.

What I really want is for me to be more intelligent, not my computer.

JWH

Are You in Future Shock Yet?

by James Wallace Harris, 3/24/23

Back in 1970, a nonfiction bestseller, Future Shock by Alvin Toffler, was widely talked about, but it’s little remembered today. With atomic bombs in the 1940s; ICBMs and computers in the 1950s; manned space flight and landing on the Moon in the 1960s; LSD, hippies, the Age of Aquarius, civil rights, gay rights, and feminism; as well as a yearly unfolding of new technologies, it was easy to understand why Toffler suggested the pace of change could lead society into a collective state of shock.

But if we could time travel back to 1970, we could paraphrase Al Jolson to Alvin: “You ain’t seen nothing yet.” Couldn’t we? Toffler never came close to imagining the years we’ve been living through since 1970. His book has been forgotten, but I think his ideas are still valid.

Future shock finally hit me yesterday when I watched the video “‘Sparks of AGI’ – Bombshell GPT-4 Paper: Fully Read w/ 15 Revelations.”

I’ve been playing around with ChatGPT for weeks, and I knew GPT-4 was coming, but I was surprised as hell when it hit so soon. Over the past few weeks, people have been writing and reporting about using ChatGPT, and the general consensus was that it was impressive, but because it made so many mistakes we shouldn’t get too worried. GPT-4 makes far fewer mistakes. Far fewer. And they’re being fixed fast.

Watch the video! Read the report. I’ve been waiting years for general artificial intelligence, and this isn’t it. But it’s so damn close that it doesn’t matter. Starting back in the 1950s when computer scientists first started talking about AI, they kept trying to set the bar that would prove a computer could be called intelligent. An early example was playing chess. But when a computer was built to perform one of these measures and passed, computer scientists would say that test really wasn’t a true measure of intelligence and we should try X instead. Well, we’re running out of things to equate with human-level intelligence.

Most people have expected that a human-level intelligent computer would be sentient. I think GPT-4 shows that’s not true. I’m not sure anymore that any feat of human intelligence needs to be tied to sentience. All the fantastic skills we admire about our species are turning out to be skills a computer can perform.

We thought we’d trump computers with our mental skills, but it might be our physical skills that are harder to give machines. Like I said, watch the video. Computers can now write books, compose music, do mathematics, paint pictures, create movies, analyze medical mysteries, understand legal issues, ponder ethics, etc. Right now AI computers configured as robots have difficulty playing basketball, knitting, changing a diaper, and things like that. But that could change just as fast as things have been changing with cognitive creativity.

I believe most people imagined a world of intelligent machines as robots that look like us, like those we see in the movies. Well, the future never unfolds like we imagine. GPT and its kind are invisible to us, but we can easily interact with them. I don’t think science or science fiction imagined how easy that interaction would be, or how quickly it would be rolled out. Because it’s here now.

I don’t think we ever imagined how distributed AI would become. For almost anything you can think of doing, you can aid your efforts right now by getting advice and help from a GPT-type AI. Sure, there are still problems, but watch the video. There are far fewer problems than last week, and who knows how many fewer there will be next week.

Future shock is all about adapting to change. If you can’t handle the change, you’re suffering from future shock. And that’s the thing about the 1970 Toffler book. Most of us kept adapting to change no matter how fast it came. But AI is going to bring about a big change. Much bigger than the internet or computers or even the industrial revolution.

You can easily tell the difference between the people who will handle this change and those who won’t. Those who will are already using AI. They embraced it immediately. We’ve been embracing pieces of AI for years. A spelling and grammar checker is a form of AI. But this new stuff is a quantum leap over everything that’s come before. Put it to use or get left behind.

Do you know about cargo cults? Whenever an advanced society meets a primitive society, it doesn’t go well for the primitive one. The old cultural divide was between the educated and the uneducated. Expect new divisions. And remember Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” For many people, AI will be magic.

Right now AI can help scholars write books. Soon AI will be able to write better scholarly books than scholars. Will that mean academics give up writing papers and books? I don’t think so. AIs, as of now, have no desires. Humans will guide them. In the near future, humans will ride jockey on AI horses.

A couple of weeks ago Clarkesworld Magazine, a science fiction magazine, shut down submissions because it was being flooded with ChatGPT-developed stories. The immediate problem was that the volume of submissions was overwhelming the editors, but I think the initial shock for most people would be that the stories were crap. That the submitted science fiction wouldn’t be creative in a human sense. That those AI-written stories would be a cheat. But what if humans using GPT start producing science fiction stories that are better than stories written by humans alone?

Are you starting to get why I’m asking you if you feel future shock yet? Be sure and watch the video.

Finally, isn’t AI just another example of human intelligence? Maybe when AIs create artificial AIs, we can call them intelligent.

JWH

I Wish I Had Been A Librarian

by James Wallace Harris, 12/8/22

I almost became a librarian. This was a long time ago. What kept me from that career was having to move to another city to get an MLS degree. Susan and I had been married for a few years, and we didn’t want to move. I worked in the Periodicals Department at Memphis State University (now the University of Memphis). I was a Periodicals clerk, which was an hourly position. I was working on my English degree and taking some undergraduate courses in library science in a program designed to produce librarians for K-12 schools. But I didn’t want to work in a school; I wanted to work at a university, and most universities require a Master of Library Science. In fact, my university required an MLS to get the job, and a second master’s in a useful subject, to aid in working in the library, to keep the job. This was also true of the public library at the time. And even with two master’s degrees, the pay would never be much, but I’d be working in the environment I loved best.

Instead, I took a job at the College of Education, setting up their network and creating a student database system to track student-teaching experience. I worked there for the rest of my working life, but I’ve always wished I had gotten that MLS degree and spent my 9-to-5 life in a library. When I was young I worked at the Memphis Public Library for a few months, and later at the university library for six years. I love periodicals. And I love how magazines have become available on the internet as digital scans. I have quite a collection of them. I believe my compulsive acquisition of books and magazines is caused by a gene for librarianship.

Reading Index, A History of the: A Bookish Adventure From Medieval Manuscripts to the Digital Age by Dennis Duncan has brought back my desire to work in a library. I’m not sure I can recommend this book to everyone, but if you love books and libraries it might be for you. Its subject is somewhat esoteric. Did you know that the idea of alphabetizing had to be invented? That made me wonder: who came up with the idea that the letters of the alphabet should have an order? Duncan doesn’t cover that.

Books haven’t always been like the books we read today. When books were scrolls, they didn’t have covers or even titles. A book might be written over several scrolls, so if you had a bunch of scrolls, finding the one you wanted, and the part you wanted to read, could be very difficult. So early librarians started tying the scrolls together and putting them in bins. Then they learned to glue little tags of paper to the ends of scrolls to identify what was inside. That’s the beginning of the index. As I said, this book won’t be for everyone, but if you have the library gene it might be.

What most people think of as an index, the section at the back of a book with a list of keywords and page numbers, wasn’t invented right away either. When books began to be printed, people got the idea of helping readers find specific places in them, and the index as we know it was born. At first, indexes were published separately. Then, when they started being published with the book, they were put in the front. It took centuries before placing the index at the back of the book became standard.

Dennis Duncan’s book is mostly an amusing look at all this. He was especially delighted to discover what I call index wars. For example, Richard Bentley satirized a 1695 book by Charles Boyle by publishing an index that ridiculed the book through the way it indexed the keywords. This led to all kinds of indexing shenanigans, including dirty politics. Duncan found quite a bit of indexing history in the line, “Let no damned Tory index my History!” from Whig historian Laurence Echard, whose three-volume History of England was indexed by Tory sympathizer John Oldmixon.

Another bit of off-the-road history Duncan discovered was that the very scholarly accused the less scholarly of shallow thinking because they read just the index rather than the whole book when composing their own writing. That was possible because indexers used to put much more information into their indexes.

Duncan shows many photographs of the fine art of indexing satire, but they’re hard to read because they were written before spelling was standardized. Luckily, he translates the historical English into modern English. And the historical humor has become very dry. You’ve got to enjoy a good three-hundred-year-old in-joke to really appreciate this book, but Duncan is good at explaining them. Sometimes the humor was as crude as the silliest of Saturday Night Live skits.

Duncan eventually works his history through the centuries up until the age of Google and online indexes. This is where I wished I had worked, using computers to organize information, periodicals, and libraries. In a way, our website Classics of Science Fiction is a kind of index. We index the popularity of science fiction short stories and novels. I’m all the time thinking of things I’d like to put into databases that deal with books and magazines. Reading Duncan’s book showed me there have been bookworms with the same kind of bibliographic urges for thousands of years.

But Index, A History of the also inspired two very specific librarian-type desires. The first was triggered by Duncan’s coverage of The Spectator, a very influential publication.

Many of the journals of the eighteenth century fall into this intermediary zone, and none more so than the Spectator. Founded in 1711 – and no direct relation of modern magazine of the same name – the Spectator was a cheap, daily, single-sheet paper that featured brief essays on literature, philosophy or whatever took its writers’ fancies. Its editors were Richard Steele and Joseph Addison (whom we met in the last chapter having his Italian travelogue mauled by ironic indexers), and, although it ran only for a couple of years, it was immensely popular. The Spectator started off in a print run of 555 copies; by its tenth issue, this had ballooned to 3,000. This, however, was only a fraction of the true readership. The editors claimed that there were twenty readers to every copy, and deemed that even this was a ‘modest Computation’. The Spectator was a paper designed for the emerging public sphere, a conversation piece to be read at ‘Clubs and Assemblies, at Tea-tables, and in Coffee-Houses’.2 A paper to be read and passed on. 

What’s more, the Spectator was only the best known in a long list of similar sheets. The Tatler, the Free-Thinker, the Examiner, the Guardian, the Plain Dealer, the Flying Post – papers like these were able to capitalize on a perfect storm of rising literacy rates, the emergence of coffee-house culture, the relaxation of formerly strict printing laws, and a growing middle-class with enough leisure time to read. The eighteenth century was gearing up to be what scholars now call the age of print saturation.3 That term saturation has some interesting suggestions. Certainly, it implies excess – too much to read – but also something else: too much to keep hold of, a new disposability of printed matter. Our poor, abused quire of paper was born at the wrong time. Flicking through original copies of the Spectator preserved in the British Library, one certainly sees the signs of coffee-house use. You won’t find stains like this in a Gutenberg Bible. And yet the essays are among the finest in English: wryly elegant, impeccably learned. If you had bought the paper for self-improvement you might well want to come back to it. 

And so it was that the news-sheets found themselves being republished, almost immediately, in book form. These editions, appearing within months of their broadsheet originals, anticipated how the kind of reader who would want the full run of the Spectator would want to use it: not simply as a single sheet – a single thought – for a few minutes’ entertainment with one’s coffee, but as an archive of ideas that one might return to. Benjamin Franklin, for example, describes coming across a collected edition of the Spectator as a boy and reading it ‘over and over’, jotting down notes from it and trying to imitate its style in his own writing.4 The movement from coffee-table to bookshelf implies a different mode of reading, one of reference, reuse, of finding the thought, the phrase, the image, and bringing it into the light again. If the Spectator was to be a book it would need an index. 

The indexes to the early volumes of the Spectator, along with those of its older sister the Tatler, are a joy in themselves, full of the same ranging, generous wit as the essays they serve. Rifling through them, a century later, Leigh Hunt would compare them to ‘jolly fellows bringing burgundy out of a cellar’, giving us ‘a taste of the quintessence of [the papers’] humour’.5 Who, indeed, would not want to sample more after reading a tantalizing entry like ‘Gigglers in Church, reproved, 158’ or ‘Grinning: A Grinning Prize, 137’ or ‘Wine, not proper to be drunk by everyone that can swallow, 140’. The Tatler, meanwhile, offers us ‘Evergreen, Anthony, his collection of fig-leaves for the ladies, 100’, or ‘Love of enemies, not constitutional, 20’, or ‘Machines, modern free thinkers are such, 130’. Elsewhere, two entries run on together, oblivious to the strictures of alphabetical order: 

     Dull Fellows, who, 43 
     Naturally turn their Heads to Politics or Poetry, ibid. 

There is something at once both useless and compelling about these indexes. Is ‘Dull Fellows’, listed under the ds, really a helpful headword? Of course not. But it catches our attention, makes us want to find out more. This is as much about performance as about quick reference. Each entry is a little advertisement for the essay it points to, a sample of the wit we will find there. The Tatler and Spectator indexes belong to the same moment as the satirical indexes we saw in the last chapter, but unlike William King’s work there is nothing cruel or pointed about them. Instead, they are zany, absurd, light. ‘Let anyone read [them],’ declares Leigh Hunt, ‘and then call an index a dry thing if he can.’ The index has made itself at home in the journals of the early eighteenth century, adapting to suit their manners, their tone. Moreover, it signals the elevation of these essays produced at a gallop for the daily coffee-house sheet to something more durable, to a format that connotes value, perhaps even status. At the midpoint of the second decade of the eighteenth century, the index is primed to offer the same sheen to other genres, to epic poetry, to drama, to the emerging form of the novel. And yet, we know how this story ends. In the twenty-first century novels do not have indexes. Nor do plays. Poetry books are indexed by first line, not by subject. Why, then, was the index to fiction a short-lived phenomenon? Why did it not take? To shed some light on this question, let us turn briefly to two literary figures from the late nineteenth century, both still indexing novels long after the embers had died down on that particular experiment. What can these latecomers tell us about the problems of indexing when it comes to works of the imagination?

Duncan, Dennis. Index, A History of the: A Bookish Adventure from Medieval Manuscripts to the Digital Age (pp. 173-177). W. W. Norton & Company. Kindle Edition. 

Reading about The Spectator makes me wish I was sitting in a library compiling information from old magazines. Of course, this is partially what Duncan has done by writing his book. By the way, The Spectator can be read online at Project Gutenberg.

Another example of how Index, A History of the inspires my bookish ways is when Duncan writes about Sherlock Holmes and how Holmes built a massive index to help him be a detective. Did Doyle/Holmes know about the zettelkasten method? Just reading this bit of Sherlock Holmes history makes me want to annotate a Sherlock Holmes story to find all the hidden clues, not to solve the crime, but to see how Arthur Conan Doyle created his characters and stories. I don’t remember ever getting excited about Holmes keeping an index when I read some of the Sherlock Holmes short stories. I need to go reread them.

Some people define themselves by exotic travel, others by the gourmet meals they consume, but I find purpose in connecting words in books to words in other books. Just notice the interesting details Duncan quotes from the story and what he makes of them.

‘Kindly look her up in my index, Doctor,’ murmured Holmes, without opening his eyes. For many years he had adopted a system of docketing all paragraphs concerning men and things, so that it was difficult to name a subject or a person on which he could not at once furnish information. In this case I found her biography sandwiched between that of a Hebrew Rabbi and a staff commander who had written a monograph upon the deep sea fishes. 

The year is 1891, the story ‘A Scandal in Bohemia’, and the person Holmes is searching for, sandwiched between the rabbi and the amateur marine biologist, is Irene Adler, opera singer, adventuress and lover of the man now standing in Holmes’ drawing room, one Wilhelm Gottsreich Sigismond von Ormstein, Grand Duke of Cassel-Felstein and hereditary King of Bohemia. The tale will find Holmes outsmarted and chastened by Adler. ‘Beaten by a woman’s wit,’ as Watson puts it. It begins, however, with Holmes coolly in control, seated in his armchair and not deigning to open his eyes, not even for a grand duke. 

It is probably no surprise that Sherlock Holmes should be an indexer. His schtick, after all, his superpower, is his encyclopedic learning, the world’s arcana: a human Google, or a walking Notes and Queries. But that would be preposterous. Besides, from the very first adventure, A Study in Scarlet, we have been informed that, in Watson’s appraisal, Holmes’ general knowledge is severely limited: ‘Knowledge of literature – nil; Philosophy – nil; Astronomy – nil; Politics – feeble . . .’ So occasionally Conan Doyle offers us a glimpse behind the curtain, a look at the system which allows Holmes his universal recall. Every now and again we see him pruning and tending his index, ‘arranging and indexing some of his recent materials’, or ‘sat moodily at one side of the fire, cross-indexing his records of crime’. It is, naturally, an alphabetical system, with a ‘great index volume’ for each letter of the alphabet. When he wants to check something on, say, vampires, he is, characteristically, too lazy to get up himself: ‘Make a long arm, Watson, and see what V has to say.’ As a line of dialogue, incidentally, isn’t this a minor masterpiece of characterization? The asymmetry of the pair’s relationship is smoothed over with chummy slang: make a long arm. Watson, the gopher, will take the book down from the shelf, but he will not be the one to see what V has to say; Holmes, of course, will do the reading, balancing the book on his knee and gazing ‘slowly and lovingly over the record of old cases, mixed with the accumulated information of a lifetime’: 

‘Voyage of the Gloria Scott’, he read. ‘That was a bad business. I have some recollection that you made a record of it, Watson, though I was unable to congratulate you upon the result. Victor Lynch, the forger. Venomous lizard or gila. Remarkable case, that! Vittoria, the circus belle. Vanderbilt and the Yeggman. Vipers. Vigor, the Hammersmith wonder.’ 

‘Good old index,’ he purrs. ‘You can’t beat it.’ The index – his index, with its smattering of everything – is the source of his mastery. 

Holmes’ alphabetical volumes represent the index unbound, not confined to a single work but looking outwards, docketing anything that might be noteworthy. It is by no means a new idea; Robert Grosseteste was practising something similar six-and-a-half centuries previously. In the Victorian period, however, it is taken up with a new intensity. Co-ordinated, resource heavy: the universal index is becoming industrialized. Looking closely at Holmes’ index, there is something charmingly, inescapably homespun about it. Victor Lynch, venomous lizard, Vittoria the circus belle: this is a rattlebag of headers: patchy, piecemeal. Like Grosseteste’s Tabula, Holmes’ index brings together the collected readings and experiences of a single, albeit extraordinary, figure – the index as personal history. But Holmes, in his way, represents the last of a kind. Not long after ‘A Scandal in Bohemia’ first appeared in the Strand Magazine, Holmes would come to be indexed himself, a recurring entry in the annual Index to Periodicals, which trawled the year’s papers, magazines and journals, keeping a record of every article. The efforts of even a Holmes or a Grosseteste appear paltry alongside a venture of this scale, available to anyone with access to a subscribing library. But how to bring such a thing into existence? That will be a three-pipe problem.

Duncan, Dennis. Index, A History of the: A Bookish Adventure from Medieval Manuscripts to the Digital Age (pp. 203-205). W. W. Norton & Company. Kindle Edition. 

JWH