Neural Networks with Google Colaboratory | Artificial Intelligence: Getting Started

Google recently launched its internal tool for collaborating on data science code. The project, called Google Colaboratory, is based on the open-source Jupyter project and is integrated with Google Drive. Colaboratory allows users to work on Jupyter notebooks as easily as working on Google Docs or Sheets.

To start with, you can visit the Colaboratory website and register to receive an invitation to use the tool. It usually takes a day for the confirmation mail to arrive in your inbox. Colaboratory lets you use one of Google's virtual machines to carry out your machine learning tasks and build models without worrying about the limits of your own computing power. And it's free.

When you first open Colaboratory you are greeted with a Hello, Colaboratory file which contains some basic examples of the things you can do with it. I advise you to try some of them.

With Colaboratory, you can write code just as you would in a Jupyter notebook. You write and execute (Shift + Enter) code cells and get your output underneath the cells.

Apart from running code, this one has a couple of tricks up its sleeve. You can write shell commands within the notebook by preceding them with a '!'.
E.g., !pip install -q keras
This gives you a fair amount of control over the VM that Google lets you use. Code snippets for these can be found by clicking the little black button on the top left (under the menu).

I write this post with the intent to demonstrate the use of CoLaboratory for training Neural Networks. We shall go through an example where we train a Neural Net on the Breast Cancer Wisconsin Dataset made available by the UCI Machine Learning Repository. The exercise here is fairly simple.

Here is the link to the Colaboratory notebook which I have made for this article.

Deep learning is a machine learning technique that uses computational methods loosely mimicking the working of biological neurons. A network of neurons arranged in layers passes information from input to output, adjusting its weights until it models the underlying relationship between the features and the targets.

To understand more about neural nets, you can refer to this very simple paper by Carlos Gershenson. You can find online resources right here on Medium as well.

For those of you who do not understand neural nets yet, don't worry. The idea here is to make progress millimeter by millimeter. Continuing…

The researchers obtained Fine Needle Aspirates (FNA) of breast mass and generated digitized images of them. The dataset contains instances describing the characteristics of the cell nuclei in those images. Every instance is marked with one of two diagnoses: 'M' (Malignant) or 'B' (Benign). Our task is to train a neural network on this data to diagnose breast cancer given the characteristics mentioned above.

When you open Colaboratory, you will be greeted with a new untitled .ipynb file at your disposal.

Google allows you to use a virtual Linux machine on their servers, so you can access the terminal to install specific packages for your project. If you pass an !ls command in a code cell (remember to precede any shell command with !), you will see a folder on your machine called datalab.

Our task is to get our dataset onto this machine so that our notebook can access it. You can do this with..
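The original snippet is not reproduced here; one common way to do it (an assumption on my part, since google.colab is available only inside the Colaboratory environment) is the built-in file-upload helper:

```python
# A sketch, not necessarily the article's exact code; google.colab
# only exists inside the Colaboratory environment.
from google.colab import files

uploaded = files.upload()  # opens a browser dialog and saves the chosen
                           # files into the VM's working directory
```
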

Once your file is on the machine, you can check if the file is there.
You should see a datalab directory and your file ‘breast_cancer_data.csv’ in list.
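A quick way to script that check (a sketch; the filename is the one used throughout this article):

```python
import os

def file_present(name, path="."):
    """Return True if `name` appears in the directory listing of `path`."""
    return name in os.listdir(path)

# In the notebook you would expect both of these to hold:
#   file_present("breast_cancer_data.csv")  -> True
#   "datalab" in os.listdir(".")            -> True
```
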

Data Preprocessing:

Now that our data is on our machine, let’s import it to the project using pandas.
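In the notebook this is a one-line pandas.read_csv call on the uploaded file. The runnable sketch below substitutes a tiny inline stand-in with the assumed column layout (id, diagnosis, then feature columns), since the real file is not available here and the values are illustrative:

```python
import io
import pandas as pd

# In Colaboratory, with the real file on the VM, this would simply be:
#   dataset = pd.read_csv("breast_cancer_data.csv")
csv_text = io.StringIO(
    "id,diagnosis,radius_mean,texture_mean\n"
    "842302,M,17.99,10.38\n"
    "842517,M,20.57,17.77\n"
    "8510426,B,11.42,20.38\n"
)
dataset = pd.read_csv(csv_text)
print(dataset.shape)  # (3, 4)
```
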

Now, Separating the Dependent and Independent Variables.
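The separation by column position can be sketched like this; the column order (id first, diagnosis second, features after that) is an assumption about how the CSV is laid out, and the rows are illustrative:

```python
import pandas as pd

# Illustrative rows in the assumed layout: id, diagnosis, feature columns
dataset = pd.DataFrame(
    [[842302, "M", 17.99, 10.38],
     [8510426, "B", 11.42, 20.38]],
    columns=["id", "diagnosis", "radius_mean", "texture_mean"],
)

X = dataset.iloc[:, 2:].values  # independent variables: the feature columns
y = dataset.iloc[:, 1].values   # dependent variable: the diagnosis
print(X.shape, y)  # (2, 2) ['M' 'B']
```
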

Y consists of a single column with categories 'M' and 'B', which stand for 'Yes' (Malignant) and 'No' (Benign) respectively. These need to be encoded into a mathematically usable form, i.e., '1' and '0'. This can be done with the LabelEncoder class.

(Use OneHotEncoder when you encounter more than two categories of data.)
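With scikit-learn's LabelEncoder the encoding takes two lines. Note that classes are assigned integers in alphabetical order, so 'B' becomes 0 and 'M' becomes 1 (the labels below are illustrative stand-ins):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

y = np.array(["M", "B", "B", "M"])  # illustrative labels
encoder = LabelEncoder()
y = encoder.fit_transform(y)  # 'B' -> 0, 'M' -> 1 (alphabetical order)
print(y)  # [1 0 0 1]
```
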

Now that our data is prepared, let's split it into training and testing sets. We use the train_test_split function from scikit-learn, which makes the job very easy.

The parameter test_size = 0.2 defines the proportion held out for testing: here, an 80% training set and a 20% test set. Moving on.
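The call can be sketched as follows; X and y here are small stand-ins, and random_state is my addition for reproducibility (the article does not show it):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative stand-ins; in the notebook X and y come from the
# preprocessing steps above
X = np.arange(20).reshape(10, 2)
y = np.array([0, 1] * 5)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
```
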

Keras is a high-level API for building artificial neural networks. It uses a TensorFlow or Theano backend for its under-the-hood operations. To install Keras, you must have TensorFlow installed on your machine. Colaboratory already has TensorFlow installed on the VM. To check the installation, you can use

!pip show tensorflow

to see which version of TensorFlow you are using. You can also install a specific version of TensorFlow, if needed, using !pip install tensorflow==1.2

Alternatively, if you prefer using a Theano backend, you can read the documentation here.

To install Keras: 
!pip install -q keras

The classes Sequential and Dense are used to specify the nodes, connections, and parameters of the neural network. As seen in the section above, we will need these to customize our network's parameters and tune them.

To initialize the neural network, we shall create an object of the Sequential class.

Now we need to design the network.

For every hidden layer we need to define three basic parameters: units, kernel_initializer and activation. The units parameter defines the number of neurons the layer consists of. kernel_initializer defines the initial weights with which the neurons operate on the input data (more here). And activation defines the activation function we choose to use for our data.
Note: it is okay if these terms feel overwhelming right now. In time they will get clearer.

First Layer:

For the first layer, we place 16 neurons with uniformly initialized weights, activated by the ReLU activation function. Additionally, we define the parameter input_dim = 30 as the specification for the input layer. Note that we have 30 feature columns in our dataset.

How did we decide on the number of units in the layer? People will tell you it is an art that comes with experience and expertise. A simple heuristic for a beginner is to add the total number of columns in X and y and divide by two: (30 + 1) / 2 = 15.5 ≈ 16, hence units = 16.

Second Layer: The second layer is the same as the first layer without the input_dim parameter.

Output Layer: Since our output is one of two values, we can use a single unit with uniformly initialized weights. Here, however, we use a sigmoid activation function (a separate article about activation functions is coming soon).
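Putting the three layers together as described above gives the following sketch (this mirrors the layer specification in the text; with newer TensorFlow installs the same imports are also available under tensorflow.keras):

```python
from keras.models import Sequential
from keras.layers import Dense

# Initialize the network as a linear stack of layers
classifier = Sequential()

# First hidden layer: 16 units, uniform weight init, ReLU, 30 input features
classifier.add(Dense(units=16, kernel_initializer="uniform",
                     activation="relu", input_dim=30))
# Second hidden layer: same as the first, minus input_dim
classifier.add(Dense(units=16, kernel_initializer="uniform",
                     activation="relu"))
# Output layer: one sigmoid unit for the binary diagnosis
classifier.add(Dense(units=1, kernel_initializer="uniform",
                     activation="sigmoid"))
```
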

Run the artificial neural network and let the backprop magic happen. All this processing takes place on Colaboratory's VM instead of your own machine.

Here, the batch_size is the number of inputs you wish to process simultaneously, and an epoch is one complete cycle in which all your data passes through the neural network once. This is how it shows up in the Colaboratory notebook:
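A runnable sketch of the training step is below. The compile settings (adam optimizer, binary cross-entropy loss) are my assumptions, typical for a sigmoid binary classifier; the article does not show them. The data is a random stand-in with the dataset's 30-feature shape:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Random stand-in data shaped like the real training set (30 features)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 30))
y_train = rng.integers(0, 2, size=100).astype("float32")

classifier = Sequential()
classifier.add(Dense(units=16, kernel_initializer="uniform",
                     activation="relu", input_dim=30))
classifier.add(Dense(units=16, kernel_initializer="uniform",
                     activation="relu"))
classifier.add(Dense(units=1, kernel_initializer="uniform",
                     activation="sigmoid"))

# Compile settings are assumptions, standard for a sigmoid binary output
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])

# batch_size: inputs processed together; epochs: full passes over the data
history = classifier.fit(X_train, y_train, batch_size=10, epochs=2, verbose=0)
```
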

Making Predictions and the Confusion Matrix.

Once you train your network, you can make predictions on the X_test set (which we kept aside earlier) to check how well your model performs on new data. Type and execute cm in a cell to see the resulting confusion matrix.
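The prediction-plus-confusion-matrix step can be sketched as follows. The labels here are illustrative stand-ins; in the notebook, y_pred comes from classifier.predict(X_test) thresholded at 0.5:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# In the notebook:
#   y_pred = (classifier.predict(X_test) > 0.5)
# Illustrative stand-in values:
y_test = np.array([0, 0, 1, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0])

cm = confusion_matrix(y_test, y_pred)
print(cm)
# [[1 1]
#  [1 2]]
```
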

Confusion Matrix
The confusion matrix, as the name suggests, is a matrix representation of the correct and wrong predictions made by your model. It is handy when you need to investigate which classes are being confused with which. This is a 2x2 Confusion Matrix Explained.

This is how our confusion matrix looks. [cm (Shift+Enter)]

This is how to read it: 70 True Negatives, 1 False Positive, 1 False Negative, and 42 True Positives.
It is pretty simple. The size of the square matrix grows as the number of classes increases.
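Using the counts above, the accuracy works out directly (a quick check with the article's own numbers):

```python
# Accuracy from the confusion matrix: correct predictions over the total
tn, fp, fn, tp = 70, 1, 1, 42
accuracy = (tn + tp) / (tn + fp + fn + tp)
print(round(accuracy, 4))  # 0.9825
```
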

As in this example, we have an accuracy of almost 100%: there are only 2 wrong predictions, which is not bad at all. But this might not always be the case. Other times you might need to invest more time investigating your model's behaviour and come up with better, more sophisticated solutions. If a network doesn't perform well, hyperparameter tuning is done to improve the model. I will write a whole article on that topic soon.

I hope this helped you get started with using Colaboratory. Find the Notebook for this tutorial here.

Note: This article focused on using Colaboratory through a worked example. For concepts the reader might not have understood clearly from the brief explanations, I apologize and ask you to wait for more detailed articles.