
Book Excerpt

10.01.24

Gary Marcus: Taming Silicon Valley

The following is an excerpt of Taming Silicon Valley: How We Can Ensure That AI Works for Us by psychologist and cognitive scientist Gary F. Marcus. In it, Marcus explains how Big Tech is taking advantage of us, how AI could make things much worse, and, most importantly, what we can do to safeguard our democracy, our society, and our future.


Ever since chatbots like ChatGPT came out, the world has been obsessed. Almost everyone enjoys playing with them, and a huge number of businesses are trying to see how they can use them to save time and reduce labor costs. (So far, the biggest win has probably been in computer programming, saving programmers a lot of time typing and looking things up. There are, however, open questions about whether they have a negative impact on code quality and security.)

Generative AI also makes terrific images, with systems like Midjourney and OpenAI’s DALL-E. As with text generation, image generation depends on a massive database (in this case of images, with some associated text), which the system uses to “generate” (produce) something, given a prompt. Within the space of two years, this technology has gone from a curiosity to nothing less than astonishing. The following image took me two seconds to create, with Microsoft Designer, using the prompt “Create a black and white drawing illustrating the power of Generative AI to create images”:

[Image: the black-and-white drawing Microsoft Designer generated from that prompt]
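For the technically curious, here is a minimal sketch of what driving an image generator with a prompt looks like programmatically. It assumes OpenAI’s Python SDK and an API key in the environment; the model name and image size are illustrative choices, not the tool used above (Microsoft Designer is a consumer app, not this API).

    # A hedged sketch: prompting an image generator through an API.
    # Assumes the official OpenAI Python SDK (pip install openai) and an
    # API key in the OPENAI_API_KEY environment variable. The model name
    # and size are illustrative, not the book's own setup.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.images.generate(
        model="dall-e-3",
        prompt=(
            "Create a black and white drawing illustrating "
            "the power of Generative AI to create images"
        ),
        size="1024x1024",
        n=1,
    )

    # Each result carries a URL (or base64 data) for the generated image.
    print(response.data[0].url)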
Computer programmers now routinely use these systems to help them write code. Researchers are trying to test their application in medicine and science; people in industry are trying to use them for customer service. Microsoft has created a product called Copilot to use Generative AI throughout its suite of Office apps. Almost every Fortune 500 company is trying to figure out how Generative AI might save them money.

But there are problems too. For example, as we will see later, although chatbots are super fun, they are not always reliable. Bafflingly stupid errors like the ones described below are common.

Increasingly, it seems that early expectations were overblown. Many people initially imagined Generative AI would change the world, creating legions of “10x” employees (putatively ten times more efficient than ordinary human employees), but already quite a few companies are downscaling their expectations. A recent story in The Information was full of industry insiders with cautionary reports like “I think people got overexcited last year” and “[Customers are] struggling with [questions of] is it providing value.” Generative AI doesn’t always work as advertised.

Moreover, Generative AI is a black box that nobody fully understands. Engineers know how to build these systems, but not what they will do at any particular moment. Massive amounts of data go in, and correct answers come out—sometimes. Nobody can quite explain why Copilot occasionally writes a sentence like “The sun peeked through the clouds casting a warm glow on the grassy hillside” in response to a prompt asking for rhymes with the word “some,” nor exactly the process by which it sometimes gets a question like that right, nor why the word “climax” popped up as a rhyme for “time,” nor why the second attempt came out exactly like the first and was still wrong, despite the alleged correction. Large language models are and always have been unpredictable. (Often, AI companies fix published errors like these, but new errors along similar lines inevitably pop up.)
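That unpredictability is easy to demonstrate. The sketch below, which again assumes OpenAI’s Python SDK and an illustrative model name, sends the same rhyme prompt several times; with a nonzero sampling temperature the answers can differ from run to run, and nothing in the pipeline verifies that the words actually rhyme.

    # A hedged sketch of unpredictability: the same prompt, sent several
    # times, can produce different (and differently wrong) answers.
    # Assumes the OpenAI Python SDK; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    prompt = "Give me five words that rhyme with the word 'some'."

    for attempt in range(3):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,      # sampling: output varies run to run
        )
        print(f"Attempt {attempt + 1}:", reply.choices[0].message.content)

    # Even at temperature=0 the model is only more repeatable, not more
    # correct: no step checks whether the proposed words rhyme.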

There have also been serious ethical questions; a large fraction of the training data was taken without compensating the creators, and systems like GPT-4 rely heavily on human feedback, some of which is extracted from poorly paid human workers whom The Washington Post described as working in “digital sweatshops.” Billy Perrigo at Time discovered that a team of workers in Kenya was being paid less than $2 an hour by a contractor for OpenAI to screen deeply disturbing materials.

Excerpted from Taming Silicon Valley: How We Can Ensure That AI Works For Us by Gary Marcus. Reprinted with permission from The MIT Press. Copyright 2024.

Gary Marcus is a leading voice in artificial intelligence, well known for his challenges to contemporary AI. He is a scientist and best-selling author and was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber. A Professor Emeritus at NYU, he is the author of five previous books, including the bestseller Guitar Zero, Kluge (one of The Economist's eight best books on the brain and consciousness), and Rebooting AI (with Ernest Davis), one of Forbes's seven must-read books on AI.