
Arvind Narayanan and Sayash Kapoor
In AI Snake Oil, Princeton professor Arvind Narayanan and PhD candidate Sayash Kapoor aim to give the average tech user the understanding needed to question hyped AI claims. Using a conversational tone and practical examples, they break through the complex, often obfuscatory language used by the tech world, offering ways to examine and evaluate tech realistically and logically.
The authors have an impressive ability to take AI out of the realm of mysterious magic and reveal how often what is promised doesn’t match what is delivered. This is particularly true in their discussion of predictive AI and its failure to accurately predict anything not governed by bounded, well-defined rules. They illustrate that while AI has made a fairly accurate six-day weather forecast possible, it has proven abjectly incompetent at predicting more complex things such as life outcomes. They also demonstrate the serious harm that bias-riddled and pointless predictive AI has done to people’s lives.
When it comes to generative AI, the authors’ aim is to explain the inner workings so ‘we will be better mentally equipped to resist the tendency to defer to claims made by those who built it.’ Using a potted history of machine learning, they provide an impressively easy-to-understand glimpse into how systems such as ChatGPT were developed. They discuss the benefits and pitfalls of text and image generation, highlighting the hidden human cost borne by the low-paid annotators who make these multibillion-dollar systems function properly.
It is when they discuss Artificial General Intelligence (AGI) that Narayanan and Kapoor express the strongest opinions, arguing that ‘the bugbear of existential risk’ is a distraction from the more immediate harms of AI snake oil. Though they acknowledge that AGI is possible, they argue that people misusing AI presents a greater threat than a ‘rogue’ power-hungry sentient AI.
As a psychologist and design researcher, I was most impressed by the way this book combines technical knowledge of AI with an understanding of the people who are making and using it. Despite the somewhat harsh title, the narrative is far from needlessly critical or disparaging; instead, it seeks to explain why humans find technologies such as predictive AI so enticing despite their lack of reliability. The authors explore the ways in which a desire to solve social problems with limited resources can make organisations susceptible to snake oil claims, and they provide a pragmatic assessment of how regulation can and can’t help.
This book has something to say to everyone interested in AI, from the total novice to the experienced developer. It is genuinely interesting, rooted in solid research and not overly preachy. The authors also post regularly on Substack, adding updates and depth to their arguments. It is a welcome source of sense in a field filled with hyperbole.