Deep Learning Part 1: Comparison of Symbolic Deep Learning Frameworks

by Anusua Trivedi, Microsoft Data Scientist

Background and Approach

This blog series is based on my upcoming talk on the re-usability of deep learning models at the Strata + Hadoop World conference in Singapore. The series will run in several parts, in which I describe my experiences and go deep into the reasons behind my choices.

Deep learning is an emerging field of research with applications across many domains. In this series, I try to show how a transfer learning and fine-tuning strategy makes the same Convolutional Neural Network (CNN) model reusable across disjoint domains; the ability to apply one fine-tuned model to several different problems is what makes the approach valuable.
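
To preview the strategy (covered in detail in later parts): fine-tuning means loading pre-trained weights, keeping the early convolutional layers frozen, and retraining only a new task-specific head on the target domain. Below is a minimal, hypothetical sketch in Lasagne on top of Theano; the layer sizes and names are illustrative only, not the model from this series.

    # Hypothetical fine-tuning sketch (Lasagne/Theano); shapes are illustrative.
    import lasagne
    import theano
    import theano.tensor as T

    X = T.tensor4('X')
    y = T.ivector('y')

    # Pre-trained feature extractor (weights would come from a model zoo).
    net = lasagne.layers.InputLayer((None, 3, 224, 224), input_var=X)
    net = lasagne.layers.Conv2DLayer(net, num_filters=32, filter_size=3)
    net = lasagne.layers.GlobalPoolLayer(net)

    # New task-specific head for the target domain.
    head = lasagne.layers.DenseLayer(net, num_units=5,
                                     nonlinearity=lasagne.nonlinearities.softmax)

    probs = lasagne.layers.get_output(head)
    loss = lasagne.objectives.categorical_crossentropy(probs, y).mean()

    # Fine-tuning: update only the new head's parameters; the pre-trained
    # convolutional weights stay frozen.
    params = head.get_params(trainable=True)
    updates = lasagne.updates.sgd(loss, params, learning_rate=0.01)
    train_fn = theano.function([X, y], loss, updates=updates)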

In this blog (Part 1), I describe and compare the commonly used open-source deep learning frameworks. I walk through the pros and cons of each framework and discuss why I chose Theano for my work.

Please feel free to email me at trivedianusua23@gmail.com if you have questions.

Symbolic Frameworks

In symbolic computation frameworks (such as CNTK, MXNET, TensorFlow, and Theano), a model is specified as a symbolic graph of vector operations, such as matrix add/multiply or convolution; a layer is just a composition of those operations. The fine granularity of the building blocks (operations) allows users to invent new, complex layer types without implementing them in a low-level language (as one must in Caffe).
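
As a minimal sketch of this style in Theano (used here because it is the framework I chose), the code below builds a one-layer graph out of primitive operations; nothing is evaluated until the graph is compiled into a callable function:

    import numpy as np
    import theano
    import theano.tensor as T

    X = T.matrix('X')   # symbolic input
    W = T.matrix('W')   # symbolic weights
    b = T.vector('b')   # symbolic bias

    # A "layer" is just a composition of primitive graph operations.
    hidden = T.nnet.sigmoid(T.dot(X, W) + b)

    # Compilation turns the symbolic graph into optimized executable code.
    layer = theano.function([X, W, b], hidden)

    x = np.random.randn(4, 3).astype(theano.config.floatX)
    w = np.random.randn(3, 2).astype(theano.config.floatX)
    print(layer(x, w, np.zeros(2, dtype=theano.config.floatX)))  # shape (4, 2)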

I've used several symbolic computation frameworks in my work. Each has pros and cons in its design and current implementation, and none of them satisfies every need perfectly. For my problem, I decided to work with Theano.

Here we compare the following symbolic computation frameworks:

Theano

  • Software: Theano
  • Creator: Université de Montréal
  • Software license: BSD license
  • Open source: Yes
  • Platform: Cross-platform
  • Written in: Python
  • Interface: Python
  • CUDA support: Yes
  • Automatic differentiation: Yes
  • Has pre-trained models: Through Lasagne's model zoo
  • Recurrent Nets: Yes
  • Convolutional Nets: Yes
  • RBM/DBNs: Yes
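
The automatic differentiation entry above deserves a concrete illustration: in Theano, a gradient is itself a symbolic expression derived from the graph. A minimal sketch:

    import theano
    import theano.tensor as T

    x = T.scalar('x')
    y = x ** 2 + 3 * x              # symbolic expression

    # theano.grad walks the graph and builds the symbolic derivative dy/dx.
    dy_dx = theano.grad(y, x)

    f = theano.function([x], dy_dx)
    print(f(2.0))                   # 2*x + 3 at x = 2 -> 7.0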

TensorFlow

  • Software: TensorFlow
  • Creator: Google Brain Team
  • Software license: Apache 2.0
  • Open source: Yes
  • Platform: Linux, Mac OS X; Windows support on roadmap
  • Written in: C++, Python
  • Interface: Python, C/C++
  • CUDA support: Yes
  • Automatic differentiation: Yes
  • Has pre-trained models: No
  • Recurrent Nets: Yes
  • Convolutional Nets: Yes
  • RBM/DBNs: Yes
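
For comparison, here is the same symbolic pattern in TensorFlow, assuming the graph-and-session API that TensorFlow shipped with at the time of writing (the 0.x/1.x style):

    import numpy as np
    import tensorflow as tf

    # Build the symbolic graph: placeholders are inputs, ops are nodes.
    x = tf.placeholder(tf.float32, shape=(None, 3), name='x')
    w = tf.Variable(tf.random_normal([3, 2]), name='w')
    h = tf.nn.sigmoid(tf.matmul(x, w))

    # Nothing executes until the graph runs inside a session.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(h, feed_dict={x: np.random.randn(4, 3)}))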

MXNET

  • Software: MXNET
  • Creator: Distributed (Deep) Machine Learning Community
  • Software license: Apache 2.0
  • Open source: Yes
  • Platform: Ubuntu, OS X, Windows, AWS, Android, iOS, JavaScript
  • Written in: C++, Python, Julia, Matlab, R, Scala
  • Interface: C++, Python, Julia, Matlab, JavaScript, R, Scala
  • CUDA support: Yes
  • Automatic differentiation: Yes
  • Has pre-trained models: Yes
  • Recurrent Nets: Yes
  • Convolutional Nets: Yes
  • RBM/DBNs: Yes
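
MXNET exposes the same idea through its Symbol API. A minimal sketch, with layer sizes chosen arbitrarily for illustration:

    import mxnet as mx

    # Declare the graph; shapes are left unspecified and inferred later.
    data = mx.sym.Variable('data')
    fc   = mx.sym.FullyConnected(data=data, num_hidden=64, name='fc1')
    act  = mx.sym.Activation(data=fc, act_type='relu', name='relu1')
    out  = mx.sym.SoftmaxOutput(data=act, name='softmax')

    # The symbolic graph can be inspected before any data flows through it.
    print(out.list_arguments())
    arg_shapes, out_shapes, aux_shapes = out.infer_shape(data=(32, 100))
    print(out_shapes)    # [(32, 64)]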

Non-Symbolic Frameworks

PROS:

  • Non-symbolic (imperative) neural network frameworks such as Torch and Caffe tend to have very similar designs in their computation core.
  • In terms of expressiveness, a well-designed imperative framework can also expose a graph-like interface (e.g., torch/nngraph).

CONS:

  • The main drawback of imperative frameworks lies in manual optimization: for example, in-place operations have to be implemented by hand (see the NumPy sketch after this list).
  • Most imperative frameworks are not designed to match the expressiveness of symbolic frameworks.
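
To make the in-place point concrete, here is the distinction in plain NumPy terms (used only as an illustration): in an imperative framework the programmer has to choose the in-place form by hand, while a symbolic framework can apply such rewrites automatically on the graph.

    import numpy as np

    a = np.ones((1000, 1000), dtype=np.float32)
    b = np.ones((1000, 1000), dtype=np.float32)

    # Out-of-place: allocates a brand-new 1000x1000 buffer for the result.
    a = a + b

    # In-place: writes the result into a's existing buffer, no new allocation.
    np.add(a, b, out=a)      # equivalent to a += b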

Symbolic Frameworks

PROS:

  • Symbolic frameworks can infer optimizations automatically from the dependency graph (see the Theano example after this list).
  • A symbolic framework can exploit many more memory-reuse opportunities, as MXNET does well.
  • Symbolic frameworks can automatically compute an optimal execution schedule, as explained in the TensorFlow whitepaper.
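
One concrete instance of such automatic optimization: Theano's optimizer rewrites the numerically unstable expression log(1 + exp(x)) into the stable softplus form during compilation. A small sketch (the debugprint output varies by Theano version):

    import theano
    import theano.tensor as T

    x = T.vector('x')
    y = T.log(1 + T.exp(x))   # naive form; overflows for large x

    f = theano.function([x], y)

    # The compiled graph no longer contains the naive log(1 + exp(x));
    # the optimizer has replaced it with the stable softplus operation.
    theano.printing.debugprint(f)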

CONS:

  • Currently available open-source symbolic frameworks still do not beat imperative frameworks in performance.

Adding New Operations