“How Smart Machines Think” by Sean Gerrish is a new MIT Press book that I would recommend to two classes of people: enterprise decision makers who are charged with evaluating AI, machine learning and deep learning technologies for their companies, and on the flip side, people who are looking to transition into the field of data science but know little about it. This is a great book for people in a hurry, something to pop into your carry-on bag the next time you’re on a cross-country flight. At a modest 294 pages, you could easily march through the chapters, especially since the book is written in a non-technical and approachable manner. By the time you finish, you’ll have a solid, high-level understanding of these technologies that are taking the world by storm.
Sean Gerrish is a software engineer and a self-confessed machine learning geek. He’s worked as an engineer at Teza Technologies, a quantitative trading firm, and as an engineering manager for machine learning and data science teams at Google. He holds a Ph.D. in machine learning from Princeton University.
“I’ve written How Smart Machines Think with the hope that it will be helpful for tech enthusiasts young and old who are curious about science and technology in general, or to industry leaders who hope to learn more about whether machine learning and artificial intelligence might be useful for their companies,” said Sean Gerrish. “This book is meant to be accessible to a broad audience – from a curious high school student to a retired mechanical engineer. Although it will help if you know a little computer science, the only real prerequisites for this book are curiosity and a bit of an attention span. And I have intentionally kept the math in this book to a minimum to communicate the core ideas without alienating casual readers.”
Gerrish weaves the stories behind the breakthroughs discussed in the 17 chapters into a compelling narrative, introducing readers to many of the researchers involved, and keeping technical details to a minimum. STEM buffs also will find this book an essential guide to a future in which machines can outsmart people.
Gerrish offers a fresh and contemporary look at AI, machine learning, and deep learning by presenting the topics through familiar touchstones: the TV game show Jeopardy!, Netflix, video games like StarCraft, board games like Go and chess, puzzles like Sudoku, and self-driving cars. Given the book’s emphasis on examples drawn from games, you may come away thinking that’s all AI is used for, but keep an open mind toward more serious applications like robotics in manufacturing, smart healthcare, galaxy classification in astrophysics, IoT, and so much more. The more playful applications simply let you embrace the complex technology through something familiar. If you’ve been fascinated by all the activity in the autonomous driving field these days, the first few chapters will nicely set the stage for your understanding.
The book includes the following chapters:
1 – The Secret of the Automaton
2 – Self-driving Cars and the DARPA Grand Challenge
3 – Keeping Within the Lanes: Perception in Self-driving Cars
4 – Yielding at Intersections: The Brain of a Self-driving Car
5 – NETFLIX and the Recommendation-engine Challenge
6 – Ensembles of Teams: The NETFLIX Prize Winners
7 – Teaching Computers by Giving Them Treats
8 – How to Beat ATARI Games by Using Neural Networks
9 – Artificial Neural Networks’ View of the World
10 – Looking Under the Hood of Deep Neural Networks
11 – Neural Networks that Can Hear, Speak, and Remember
12 – Understanding Natural Language (and JEOPARDY! Questions)
13 – Mining the Best JEOPARDY! Answer
14 – Brute-force Search Your Way to a Good Strategy
15 – Expert-level Play for the Game of GO
16 – Real-time AI and StarCraft
17 – Five Decades (or More) From Now
My favorite sections of the book cover concepts that are essential to a good grasp of the AI under the hood, so as you read, make a point of focusing on the following:
Page 59 – How to Train a Classifier
Page 79, Page 131 – Overfitting
Page 91 – Reinforcement Learning
Page 109 – Neural Networks as Mathematical Functions
Page 133 – ImageNet
Page 135 – Convolutional Neural Networks
Page 139 – Why “Deep” Networks?
Page 146 – Squashing Functions
Page 148 – ReLU Activation Functions
Page 159 – Recurrent Neural Networks
Page 167 – Long Short-term Memory
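To give a flavor of two of the concepts named above – squashing functions and ReLU activations – here is a minimal illustrative sketch (my own, not taken from the book) of how these functions behave:

```python
import math

def sigmoid(x):
    """A classic squashing function: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """ReLU activation: passes positive inputs through, zeroes out negatives."""
    return max(0.0, x)

# Compare how each function transforms a few sample inputs.
for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  relu={relu(x):.1f}")
```

The sigmoid "squashes" its input into a bounded range, while ReLU simply clips negatives to zero – a difference the book explores when explaining why deep networks became practical to train.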