[Be the best version of yourself with Atomic Ideas 2.0 - where groundbreaking books come alive as addictive audio chats. Bite-sized brilliance, fresh ideas daily. Your pocket genius is here!
Do consider becoming a paid subscriber to get the most out of AtomicIdeas.]
Atomic ideas from the recently launched book AI Snake Oil by award-winning researchers Arvind Narayanan and Sayash Kapoor. (BTW, we are the first ones globally to bring you a very deep summary + audiobook of this book!)
Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere―and few things are surrounded by so much hype, misinformation, and misunderstanding.
By revealing AI’s limits and real risks, AI Snake Oil will help you make better decisions about whether and how to use AI at work and at home.
In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor (who also run the newsletter AI Snake Oil) cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil―products that don’t work, and probably never will.
While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about AI’s capabilities and describes the serious harms AI is already causing through the way it is built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice.
The Double-Edged Sword of Predictive AI
While generative AI shows promise, predictive AI often falls short of its claims. Companies tout the ability to predict outcomes like job performance or criminal behavior, but evidence suggests these tools are frequently inaccurate and can exacerbate inequalities. For instance, a healthcare AI tool meant to predict patient needs actually reinforced racial biases in care.
The authors argue that many predictive AI applications are "snake oil": products that don't work as advertised.
The Need for AI Literacy
The book aims to provide readers with the tools to critically evaluate AI claims and identify "snake oil." The authors argue that understanding AI is crucial for navigating its growing influence in society.
"We think most knowledge industries can benefit from chatbots in some way. We use them ourselves for research assistance, for tasks ranging from mundane ones such as formatting citations correctly, to things we wouldn't otherwise be able to do such as understanding a jargon-filled paper in a research area we aren't familiar with."
How Predictive AI Goes Wrong
The False Promise of Predictive Accuracy
Many companies claim their predictive AI tools can accurately forecast outcomes like job performance or criminal behavior. However, these claims often fall apart under scrutiny.
The authors cite examples like COMPAS, a tool used in criminal justice that claims to predict recidivism but performs only slightly better than random guessing. They argue that the complexity of human behavior and social contexts makes accurate prediction extremely difficult, if not impossible, in many cases.
The Dangers of Automated Decision-Making