Using existing models from ML/DL libraries can be helpful in some cases, but implementing them yourself gives better control and understanding. This article shows how to implement a CNN using nothing but NumPy.
The convolutional neural network (CNN) is the state-of-the-art technique for analyzing multidimensional signals such as images. Libraries such as TensorFlow and Keras already implement CNNs; they isolate the developer from some details and expose an abstract API to make life easier and avoid complexity in the implementation. But in practice, such details can make a difference, and sometimes the data scientist has to go through them to enhance performance.
1. Reading the input image

```python
import skimage.data
import skimage.color

# Reading the image
img = skimage.data.chelsea()

# Converting the image into gray.
img = skimage.color.rgb2gray(img)
```
Reading the image is the first step because the next steps depend on the input size. The image, after conversion to gray, is shown below.
2. Preparing filters
The following code prepares the filters bank for the first conv layer (l1 for short):
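The listing itself is not reproduced here; based on the description, it amounts to a single NumPy allocation (the name `l1_filter` is an assumption following the article's "l1" shorthand):

```python
import numpy

# 2 filters, each 3x3; no depth axis because the input image is grayscale.
l1_filter = numpy.zeros((2, 3, 3))
```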
A zero array is created according to the number of filters and the size of each filter. Two filters of size 3x3 are created, which is why the zero array has size (2=num_filters, 3=num_rows_filter, 3=num_columns_filter). Each filter is a 2-D array without depth because the input image is gray and has no depth (i.e. it is 2-D). If the image were RGB with 3 channels, the filter size would have to be (3, 3, 3=depth).
The size of the filters bank is specified by the above zero array but not the actual values of the filters. It is possible to override such values as follows to detect vertical and horizontal edges.
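A sketch of such an override, assuming the same `l1_filter` array as above; the values below are classic hand-crafted vertical and horizontal edge detectors:

```python
import numpy

l1_filter = numpy.zeros((2, 3, 3))

# First filter: responds to vertical edges.
l1_filter[0, :, :] = numpy.array([[-1, 0, 1],
                                  [-1, 0, 1],
                                  [-1, 0, 1]])

# Second filter: responds to horizontal edges.
l1_filter[1, :, :] = numpy.array([[ 1,  1,  1],
                                  [ 0,  0,  0],
                                  [-1, -1, -1]])
```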
3. Conv Layer
After preparing the filters, next is to convolve the input image by them. The next line convolves the image with the filters bank using a function called conv:
This function accepts just two arguments, the image and the filter bank, and is implemented as follows.
The function starts by ensuring that the depth of each filter equals the number of image channels. In the code below, the outer if checks whether the image and the filters have a depth at all. If they do, the inner if checks that the two depths are equal. If there is no match, the script exits.
Moreover, the size of each filter must be odd, and its dimensions must be equal (i.e. the numbers of rows and columns are odd and equal). This is checked by the following two if blocks. If these conditions aren't met, the script exits.
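The checks described above can be sketched as a small standalone helper; the function name `check_filter` is hypothetical (in the repository, these checks appear inline at the top of `conv`):

```python
import sys
import numpy

def check_filter(img, conv_filter):
    """Hypothetical helper bundling the validation checks described above."""
    # Depth check: if either the image or the filters carry a depth axis,
    # the last dimensions must match.
    if len(img.shape) > 2 or len(conv_filter.shape) > 3:
        if img.shape[-1] != conv_filter.shape[-1]:
            print("Error: Number of channels in both image and filter must match.")
            sys.exit()
    # The filter must be square ...
    if conv_filter.shape[1] != conv_filter.shape[2]:
        print("Error: A filter must be a square matrix.")
        sys.exit()
    # ... with an odd number of rows/columns, so it has a well-defined center.
    if conv_filter.shape[1] % 2 == 0:
        print("Error: A filter must have an odd size.")
        sys.exit()
```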
Passing all of the checks above proves that the filter depth suits the image and that convolution is ready to be applied. Convolving the image with a filter starts by initializing an array to hold the outputs of convolution (i.e. the feature maps), with its size specified according to the following code:
Because there is no stride nor padding, the feature map size equals (img_rows - filter_rows + 1, img_columns - filter_columns + 1, num_filters), as in the code above. Note that there is one output feature map for every filter in the bank; that is why the number of filters (conv_filter.shape[0]) is used as the third dimension.
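As a sketch, with placeholder shapes for the image and the filter bank:

```python
import numpy

img = numpy.zeros((300, 451))         # placeholder grayscale image
conv_filter = numpy.zeros((2, 3, 3))  # placeholder bank of 2 3x3 filters

# No padding and stride 1: each spatial dimension shrinks by filter_size - 1,
# and there is one output map per filter (the third dimension).
feature_maps = numpy.zeros((img.shape[0] - conv_filter.shape[1] + 1,
                            img.shape[1] - conv_filter.shape[1] + 1,
                            conv_filter.shape[0]))
```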
The outer loop iterates over the filters in the bank and selects each one for the next steps, according to this line:
If the image to be convolved has more than one channel, then the filter must have a depth equal to the number of channels. Convolution in this case is done by convolving each image channel with its corresponding channel in the filter; the sum of the results is the output feature map. If the image has just a single channel, convolution is straightforward. This behavior is determined in the following if-else block:
You might notice that the convolution is applied by a function called conv_, which is different from the conv function. The conv function accepts the input image and the filter bank but doesn't apply the convolution itself; it just passes each input-filter pair to the conv_ function. This split simply makes the code easier to investigate. Here is the implementation of the conv_ function:
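Since the listing is not shown here, the following is a simplified, self-contained sketch of `conv_` consistent with the description (the repository version indexes from the filter center and clips the border rows afterwards; the "valid" output is the same):

```python
import numpy

def conv_(img, conv_filter):
    """Convolve one single-channel image with one 2-D filter ('valid' mode)."""
    filter_size = conv_filter.shape[0]
    rows = img.shape[0] - filter_size + 1
    cols = img.shape[1] - filter_size + 1
    result = numpy.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            # Region of the image equal in size to the filter.
            curr_region = img[r:r + filter_size, c:c + filter_size]
            # Element-wise multiplication, then summing, gives one output value.
            result[r, c] = numpy.sum(curr_region * conv_filter)
    return result
```

For example, convolving a 4x4 image with a 3x3 all-ones filter yields a 2x2 map where each entry is the sum of the corresponding 3x3 region.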
It iterates over the image and extracts regions equal in size to the filter, according to this line:
Then it applies element-wise multiplication between the region and the filter and sums the products to get a single output value, according to these lines:
After convolving the input with each filter, the feature maps are returned by the conv function. The following figure shows the feature maps returned by this conv layer.
The output of such layer will be applied to the ReLU layer.
4. ReLU Layer
The ReLU layer applies the ReLU activation function over each feature map returned by the conv layer. It is called using the relu function according to the following line of code:
The relu function is implemented as follows:
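A sketch consistent with the description (looping element by element and clamping negatives to zero, over a (rows, columns, maps) array):

```python
import numpy

def relu(feature_map):
    """Apply ReLU element-wise to a stack of feature maps (rows, cols, maps)."""
    relu_out = numpy.zeros(feature_map.shape)
    for map_num in range(feature_map.shape[-1]):
        for r in range(feature_map.shape[0]):
            for c in range(feature_map.shape[1]):
                # Keep the value if positive, otherwise output 0.
                relu_out[r, c, map_num] = numpy.max([feature_map[r, c, map_num], 0])
    return relu_out
```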
It is very simple: loop through each element in the feature map and keep the original value if it is larger than 0; otherwise, return 0. The outputs of the ReLU layer are shown in the next figure.
The output of the ReLU layer is applied to the max pooling layer.
5. Max Pooling Layer
The max pooling layer accepts the output of the ReLU layer and applies the max pooling operation according to the following line:
It is implemented using the pooling function as follows:
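A sketch consistent with the description; the default mask size and stride of 2 are assumptions:

```python
import numpy

def pooling(feature_map, size=2, stride=2):
    """Max-pool each channel of a (rows, cols, maps) array."""
    out_rows = (feature_map.shape[0] - size) // stride + 1
    out_cols = (feature_map.shape[1] - size) // stride + 1
    pool_out = numpy.zeros((out_rows, out_cols, feature_map.shape[-1]))
    for map_num in range(feature_map.shape[-1]):
        for r in range(out_rows):
            for c in range(out_cols):
                # Clip a size x size region and keep only its maximum.
                region = feature_map[r * stride:r * stride + size,
                                     c * stride:c * stride + size,
                                     map_num]
                pool_out[r, c, map_num] = numpy.max(region)
    return pool_out
```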
The function accepts three inputs: the output of the ReLU layer, the pooling mask size, and the stride. As before, it simply creates an empty array to hold the output of the layer. The size of this array is specified according to the size and stride arguments, as in this line:
Then it loops through the input channel by channel, using the outer loop's variable map_num. For each channel, the max pooling operation is applied: according to the stride and size used, a region is clipped and its maximum is stored in the output array, according to this line:
The outputs of such pooling layer are shown in the next figure. Note that the size of the pooling layer output is smaller than its input even if they seem identical in their graphs.
6. Stacking Layers
Up to this point, the CNN architecture of conv, ReLU, and max pooling layers is complete. Additional layers can be stacked on top of the previous ones, as below.
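The stacking can be sketched end to end. The helper definitions below are compact stand-ins for the conv, relu, and pooling functions described in this article (same call pattern, simplified bodies); the input and filter shapes are placeholders, and the second conv layer is applied to a single channel of the pooled output to keep the sketch short (the repository version convolves every channel of a multi-channel input):

```python
import numpy

# Compact stand-ins for the conv, relu, and pooling functions described above.
def conv(img, conv_filter):
    num_filters, f = conv_filter.shape[0], conv_filter.shape[1]
    rows, cols = img.shape[0] - f + 1, img.shape[1] - f + 1
    maps = numpy.zeros((rows, cols, num_filters))
    for k in range(num_filters):
        for r in range(rows):
            for c in range(cols):
                maps[r, c, k] = numpy.sum(img[r:r + f, c:c + f] * conv_filter[k])
    return maps

def relu(feature_map):
    return numpy.maximum(feature_map, 0)

def pooling(feature_map, size=2, stride=2):
    out = numpy.zeros(((feature_map.shape[0] - size) // stride + 1,
                       (feature_map.shape[1] - size) // stride + 1,
                       feature_map.shape[-1]))
    for k in range(feature_map.shape[-1]):
        for r in range(out.shape[0]):
            for c in range(out.shape[1]):
                out[r, c, k] = numpy.max(feature_map[r * stride:r * stride + size,
                                                     c * stride:c * stride + size, k])
    return out

# --- Stacking: each layer's output is the next layer's input. ---
img = numpy.random.rand(32, 32)         # placeholder grayscale input

l1_filter = numpy.random.rand(2, 3, 3)  # first conv layer: 2 filters
l1_out = pooling(relu(conv(img, l1_filter)), 2, 2)

# Second conv layer: 3 randomly generated 5x5 filters, applied here to a
# single channel of the pooled output.
l2_filter = numpy.random.rand(3, 5, 5)
l2_out = pooling(relu(conv(l1_out[:, :, 0], l2_filter)), 2, 2)
```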
The previous conv layer uses 3 filters with randomly generated values, which is why it produces 3 feature maps. The same holds for the successive ReLU and pooling layers. The outputs of these layers are shown below.
The following figure shows the outputs of the previous layers. The last conv layer accepts just a single filter, which is why there is only one feature map as output.
But remember: the output of each layer is the input to the next layer. For example, these lines accept the previous outputs as their inputs.
7. Complete Code
The complete code is available on GitHub (https://github.com/ahmedfgad/NumPyCNN). It also visualizes the outputs of each layer using the Matplotlib library.
Bio: Ahmed Gad received his B.Sc. degree with honors in information technology from the Faculty of Computers and Information (FCI), Menoufia University, Egypt, in July 2015. Being ranked first in his faculty, he was recommended to work as a teaching assistant at one of the Egyptian institutes in 2015 and then, in 2016, as a teaching assistant and researcher in his faculty. His current research interests include deep learning, machine learning, artificial intelligence, digital signal processing, and computer vision.
Original. Reposted with permission.