The following is excerpted from The Line: AI and The Future of Personhood by James Boyle. Copyright © 2024 by James Boyle. Reprinted with permission from MIT Press.
In June of 2022 a man called Blake Lemoine told reporters at The Washington Post that he thought the computer system he worked with was sentient. By itself, that does not seem strange. The Post is one of the United States’ finest newspapers, and its reporters are used to hearing from people who think that the CIA is attempting to read their brainwaves or that prominent politicians are running a child sex trafficking ring from the basement of a pizzeria. (It is worth noting that the pizzeria had no basement.) But Mr. Lemoine was different. For one thing, he was not some random person off the street. He was a Google engineer. Google has since fired him. For another thing, the “computer system” was LaMDA, Google’s Language Model for Dialogue Applications—that is, an enormously sophisticated chatbot. Imagine a software system that vacuums up billions of pieces of text from the internet and uses them to predict what the next sentence in a paragraph or the answer to a question would be.
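To make that prediction step concrete, here is a minimal, purely illustrative sketch in Python of the underlying idea: count which words follow which in a body of text, then generate new text by repeatedly emitting a likely continuation. The toy corpus and every name in the sketch are assumptions chosen for illustration; LaMDA itself is a large transformer network trained on billions of documents, not a bigram counter, though the prediction principle is the same.

```python
# Toy illustration of next-token prediction, reduced to a bigram model.
# A real LLM like LaMDA scores continuations with a neural network over
# a vast training corpus; this sketch only shows the predict-the-next-
# word loop the passage describes. Corpus and names are hypothetical.
from collections import Counter, defaultdict

corpus = (
    "I use language with understanding . "
    "I use language every day . "
    "language is central to being human ."
).split()

# For each word, count which words were seen to follow it.
following: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the toy corpus."""
    candidates = following.get(word)
    if not candidates:
        return "."  # no continuation observed; end the sentence
    return candidates.most_common(1)[0][0]

# Generate one token at a time, feeding each prediction back in.
token = "I"
output = [token]
for _ in range(6):
    token = predict_next(token)
    output.append(token)

print(" ".join(output))  # e.g. "I use language with understanding . I"
```

The sketch makes the key point visible: nothing in the loop understands anything; it simply replays statistical regularities of its training text, which is why fluent output alone is weak evidence of a mind.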
Mr. Lemoine worked for Google’s Responsible AI division and his job was to have “conversations” with LaMDA to see if the system could be gamed to produce discriminatory or hateful speech. As these conversations proceeded, he started to believe—as the Post put it—that there was “a ghost in the machine,” a sentience that lay behind the answers he was receiving. He stressed encounters in which LaMDA distinguished itself from mere programmed chatbots. For example, “I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.” Understandably, as a Large Language Model (“LLM”), LaMDA claimed that language was central to being human. Like the philosophers and computer scientists the Post consulted, I think Mr. Lemoine is entirely wrong that LaMDA is sentient. To quote Professor Emily Bender, a computational linguistics scholar, “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.” To be clear, this is not human-level AI and it is not conscious. Will that always be true?
For all of our alarms, excursions, and moral panics about artificial intelligence, we have devoted surprisingly little time to thinking about the possible personhood of the new entities this century will bring us. We agonize about the effect of artificial intelligence on employment, or the threat that our creations will destroy us. But what about their potential claims to be inside the line, to be “us,” not machines or animals but, if not humans, then at least persons—deserving all the moral and legal respect that any other person has by virtue of their status? Our prior history of failing to recognize the humanity and legal personhood of members of our own species does not exactly fill one with optimism about our ability to answer the question well off the cuff.
Although he was wrong, Mr. Lemoine offers us a precious insight. The days of disputing whether consciousness or personhood is possessed, or should be possessed, by entities other than us? Even by machines? Those days are arriving—not as science fiction or philosophical puzzler but as current controversy. Those days will be our days, and this is a book about them.