Deep Learning Tips and Tricks

By Jonathan Balaban

Below is a distilled collection of conversations, messages, and debates I’ve had with peers and students on how to optimize deep models. If you have tricks you’ve found impactful, please share them in the comments!

First, Why Tweak Models?

 
Deep learning models like the Convolutional Neural Network (CNN) have a massive number of parameters; we can actually call these hyper-parameters because they are not optimized inherently in the model. You could grid-search the optimal values for these hyper-parameters, but you’ll need a lot of hardware and time. So, does a true data scientist simply go with intuition? Mostly, we start from settings that have already worked well for others and tweak from there.
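To make the cost of exhaustive search concrete, here’s a minimal grid-search sketch using scikit-learn’s GridSearchCV with the Keras scikit-learn wrapper. The build_model function, the candidate values, and the X_train/y_train names are illustrative assumptions, not part of the original post:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model(dropout_rate=0.2, units=128):
    # hypothetical MNIST-sized classifier, used only to illustrate search cost
    model = Sequential()
    model.add(Dense(units, input_dim=784, activation='relu'))
    model.add(Dropout(dropout_rate))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=build_model, epochs=10, batch_size=128, verbose=0)
param_grid = {'dropout_rate': [0.2, 0.35, 0.5], 'units': [64, 128, 256]}
grid = GridSearchCV(clf, param_grid=param_grid, cv=3)
# grid.fit(X_train, y_train)  # 3 x 3 configs x 3 folds = 27 full training runs

Instead, start from configurations that have already proven themselves, like the dropout-plus-max-norm recipe below: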


# dropout in the input and hidden layers
# a max-norm weight constraint imposed on the hidden layers
# ensures the norm of each unit's weight vector does not exceed 5
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.constraints import maxnorm

model = Sequential()
model.add(Dropout(0.2, input_shape=(784,)))  # dropout on the inputs
# this helps mimic noise or missing data
model.add(Dense(128, kernel_initializer='normal', activation='relu',
                kernel_constraint=maxnorm(5)))
model.add(Dropout(0.5))
model.add(Dense(128, kernel_initializer='normal', activation='tanh',
                kernel_constraint=maxnorm(5)))
model.add(Dropout(0.5))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))


Dropout Best Practices:

  • Use small dropout rates of 20–50%, with 20% recommended for inputs. Too low and the effect is negligible; too high and the network underfits.
  • Use dropout on the input layer as well as the hidden layers; this has been shown to improve deep learning performance.
  • Use a large learning rate with decay, and a large momentum (see the optimizer sketch after this list).
  • Constrain your weights! A big learning rate can result in exploding gradients. Imposing a constraint on the network’s weights, such as max-norm regularization with a size of 5, has been shown to improve results.
  • Use a larger network. You are likely to get better performance when dropout is used on a larger network, since it gives the model more of an opportunity to learn independent representations.
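Here’s a minimal sketch of those optimizer settings applied to the dropout model above; the specific values are illustrative starting points, not prescriptions:

from keras.optimizers import SGD

# large initial learning rate with decay, plus heavy momentum;
# these pair well with dropout and the max-norm constraint above
sgd = SGD(lr=0.1, momentum=0.9, decay=1e-4, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])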

Here’s an example of final-layer modification in Keras, replacing a trained model’s head with a new 14-class softmax output (note that MNIST itself has 10 classes; think of this as adapting an MNIST-trained network to a new 14-class task):

from keras.layers import Dense

model.layers.pop()  # remove the current final layer (defaults to last)
model.outputs = [model.layers[-1].output]
model.layers[-1].outbound_nodes = []
model.add(Dense(14, activation='softmax'))
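On Keras 2, the Sequential API can handle this bookkeeping for you:

model.pop()  # removes the last layer and updates the model's outputs
model.add(Dense(14, activation='softmax'))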


And an example of how to freeze weights in the first five layers:

for layer in model.layers[:5]:
    layer.trainable = False  # frozen layers keep their pre-trained weights


Alternatively, we can set the learning rate to zero for those layers, or use a per-parameter adaptive learning algorithm like Adadelta or Adam. Per-layer learning rates are somewhat complicated in Keras and better supported in other frameworks, like Caffe. Either way, remember that trainable flags are only read when the model is compiled.
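A minimal sketch of the freeze-then-fine-tune flow; the loss, metric, and learning-rate choices are illustrative:

from keras.optimizers import Adam

for layer in model.layers[:5]:
    layer.trainable = False

# trainable flags take effect at compile time, so recompile after freezing
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=1e-4),  # small learning rate for fine-tuning
              metrics=['accuracy'])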

Galleries of Pre-trained Networks:

 
Keras

  • Kaggle List
  • Keras Applications (see the loading sketch after these lists)
  • OpenCV Example

TensorFlow

  • VGG16
  • Inception V3
  • ResNet

Torch

  • LoadCaffe

Caffe

  • Model Zoo
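To show how little code these galleries require, here’s a minimal sketch of loading a pre-trained network from Keras Applications; note that the ImageNet weights are a large download on first use:

from keras.applications.vgg16 import VGG16

# downloads and caches the ImageNet weights on the first call
model = VGG16(weights='imagenet', include_top=True)
model.summary()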

View your TensorBoard graph within Jupyter

 
It’s often essential to get a visual sense of how your model is structured. Keras’s abstraction is convenient, but it doesn’t let you drill down into sections of your model for deeper analysis. Fortunately, the code below lets us visualize the underlying TensorFlow graph directly in Python:

# Adapted from: http://nbviewer.jupyter.org/github/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
# Helper functions for TF Graph visualization
import numpy as np
import tensorflow as tf
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
    strip_def = tf.GraphDef()
    for n0 in graph_def.node:
        n = strip_def.node.add() 
        n.MergeFrom(n0)
        if n.op == 'Const':
            tensor = n.attr['value'].tensor
            size = len(tensor.tensor_content)
            if size > max_const_size:
                tensor.tensor_content = bytes("<stripped %d bytes>"%size, 'utf-8')
    return strip_def
  
def rename_nodes(graph_def, rename_func):
    res_def = tf.GraphDef()
    for n0 in graph_def.node:
        n = res_def.node.add() 
        n.MergeFrom(n0)
        n.name = rename_func(n.name)
        for i, s in enumerate(n.input):
            n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
    return res_def
  
def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
    if hasattr(graph_def, 'as_graph_def'):
        graph_def = graph_def.as_graph_def()
    strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), +str(np.random.rand()))
  
    iframe = """
        <iframe seamless style="height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '"'))
    display(HTML(iframe))

# Visualizing the network graph. Be sure to expand the "mixed" nodes to see
# their internal structure. We are going to visualize the "Conv2D" nodes.
graph_def = tf.get_default_graph().as_graph_def()
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)


Visualize your Model with Keras

 
This will plot a graph of the model and save it as a PNG file:

from keras.utils import plot_model
plot_model(model, to_file='model.png')


plot_model takes two optional arguments, demonstrated in the sketch after this list:

  • show_shapes (defaults to False) controls whether output shapes are shown in the graph.
  • show_layer_names (defaults to True) controls whether layer names are shown in the graph.
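For example, a quick usage sketch with both options changed from their defaults:

from keras.utils import plot_model

# include layer output shapes, hide the auto-generated layer names
plot_model(model, to_file='model_shapes.png',
           show_shapes=True, show_layer_names=False)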

You can also directly obtain the pydot.Graph object and render it yourself, for example to show it inline in a Jupyter notebook:

from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot  # keras.utils.visualize_util in older Keras
SVG(model_to_dot(model).create(prog='dot', format='svg'))


I hope this collection helps with your machine learning projects! Please let me know how you optimize your deep learning models in the comments below, and connect with me on Twitter and LinkedIn!

 
Bio: Jonathan Balaban is a data science nomad.

Original. Reposted with permission.

Related:

  • The Keras 4 Step Workflow
  • Improving the Performance of a Neural Network
  • Top 8 Free Must-Read Books on Deep Learning