Book Review: Learning TensorFlow

AI and deep learning are among today’s hottest technologies, driven by accelerating interest in computer vision, image recognition and classification, natural language processing (NLP), and speech recognition. Deep Neural Networks (DNNs), upon which deep learning is based, are trained with large amounts of data and can solve complex tasks with unprecedented accuracy. TensorFlow is a leading open source software framework that helps you build and train neural networks. Here’s a nice resource to help you kick-start your use of TensorFlow – “Learning TensorFlow” by Tom Hope, Yehezkel S. Resheff, and Itay Lieder.
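TensorFlow’s central idea is the dataflow graph: you first describe a computation as a graph of operations, and only then execute it. As a rough conceptual illustration (this is plain Python, not TensorFlow’s actual API), a deferred computation might look like:

```python
# Toy illustration of TensorFlow's "build a graph, then run it" model.
# Plain Python only -- not TensorFlow code.

class Node:
    """A node in a tiny dataflow graph: an operation plus its input nodes."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs

    def run(self):
        """Evaluate this node by recursively evaluating its inputs."""
        return self.op(*(n.run() for n in self.inputs))

def constant(value):
    """A leaf node that always produces the same value."""
    return Node(lambda: value)

# Build the graph: nothing is computed yet.
a = constant(3.0)
b = constant(4.0)
total = Node(lambda x, y: x + y, a, b)

# The equivalent of a "session run": only now is the graph executed.
print(total.run())  # → 7.0
```

Separating graph construction from execution is what lets TensorFlow optimize, parallelize, and distribute the computation before running it.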

This O’Reilly book is short and sweet at 228 pages. It provides a concise, hands-on approach to TensorFlow fundamentals for a broad technical audience – from data scientists, to data engineers, to students and researchers. If you’re looking for an in-depth introduction to neural networks and deep learning, however, this book is not for you. There are many other fine texts for that purpose. The purpose of the book, rather, is to provide a quick introduction to the TensorFlow framework and get you up and running. I think this goal is achieved. This book is a welcome alternative to the online documentation for TensorFlow (I don’t like learning purely from online content; I need to hold a book in my hands). You’ll need to be familiar with Python programming, as code snippets are found throughout the book.

The book directs you to the MNIST handwritten digits data set to perform some machine learning and image processing, so early on you are building Convolutional Neural Networks (CNNs) with Python and TensorFlow. Next, you’re introduced to the CIFAR-10 data set. You’ll learn to train a DNN and build models to recognize images of automobiles, airplanes, and various animals with a decent 70% accuracy using TensorFlow. Some may question the use of the MNIST and CIFAR data sets (which are the same ones you’ll find discussed on the TensorFlow website), but I don’t see that as a bad thing. These are industry standard data sets and offer a certain level of familiarity while learning a new framework.
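To give a sense of the kind of operation a CNN layer performs (a sketch in NumPy rather than TensorFlow, and not code from the book), a single 2-D convolution followed by a ReLU activation can be written as:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most
    deep learning frameworks): slide the kernel over the image and
    take a weighted sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: zero out negative activations."""
    return np.maximum(x, 0)

# A toy 4x4 "image" and a 2x2 vertical-edge-style filter (made-up values).
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

feature_map = relu(conv2d(image, kernel))
print(feature_map.shape)  # → (3, 3)
```

A real CNN stacks many such filters per layer and learns the kernel weights from data; TensorFlow supplies the equivalent operation as a single optimized primitive.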

Here is a list of chapters:

Chapter 1 – Introduction

Chapter 2 – Go with the Flow: Up and Running with TensorFlow

Chapter 3 – Understanding TensorFlow Basics

Chapter 4 – Convolutional Neural Networks

Chapter 5 – Text I: Working with Text and Sequences, and TensorBoard Visualization

Chapter 6 – Text II: Word Vectors, Advanced RNN, and Embedding Visualization

Chapter 7 – TensorFlow Abstractions and Simplifications

Chapter 8 – Queues, Threads, and Reading Data

Chapter 9 – Distributed TensorFlow

Chapter 10 – Exporting and Serving Models with TensorFlow

At only 228 pages, you can’t consider this book a complete TensorFlow reference manual. However, you can use the book as a first-level reference as you dig deeper into the framework using more in-depth online resources. You’ll benefit from knowing some Python, and a reasonable knowledge of computer science, machine learning, linear algebra, and statistics is almost expected.

I appreciated that the higher-level abstractions are saved for later in the book (Chapter 7). Once you’ve worked through CNNs and RNNs, the book introduces contrib.learn, TFLearn, and Keras for higher-level abstraction, and walks you through how to install and use these open source technologies. Additionally, no contemporary book about deep learning technology would be complete without a discussion of distributed computing (Chapter 9). This chapter walks through examples of working with clusters to compute gradients in parallel to speed up training.
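The core idea behind data-parallel training, which the book covers in a TensorFlow setting, is that each worker computes gradients on its own shard of the data, and the averaged gradient drives the parameter update. A framework-free sketch (NumPy, with made-up worker data – not code from the book):

```python
import numpy as np

def gradient(w, x, y):
    """Gradient of mean squared error for a simple linear model y ≈ w * x."""
    pred = w * x
    return np.mean(2 * (pred - y) * x)

# Each "worker" holds a shard of the training data (toy values; true w = 2).
shards = [
    (np.array([1.0, 2.0]), np.array([2.0, 4.0])),
    (np.array([3.0, 4.0]), np.array([6.0, 8.0])),
]

w = 0.0
lr = 0.05
for step in range(100):
    # Workers compute gradients in parallel; a chief averages them
    # and applies a single update to the shared parameter.
    grads = [gradient(w, x, y) for x, y in shards]
    w -= lr * np.mean(grads)

print(round(w, 2))  # → 2.0
```

Distributed TensorFlow handles the hard parts this sketch ignores – parameter servers, communication, and synchronization across machines – but the averaging step is the same idea.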

“Learning TensorFlow” represents a quick introduction to this popular deep learning framework. It won’t be your only learning resource, but it’s a great place to start. If you find yourself going through the new five-course Deep Learning Specialization on Coursera, you’ll find that TensorFlow is used regularly, and this book will be a welcome resource.

Contributed by Daniel D. Gutierrez, Managing Editor and Resident Data Scientist of insideBIGDATA. In addition to being a tech journalist, Daniel also is a practicing data scientist, author, educator and sits on a number of advisory boards for various start-up companies.