A simple Python library to visualize neural networks

I recently created a simple Python module to visualize neural networks. It is based on code contributed by Milo Spencer-Harper and Oli Blum. This module is able to:

  • Show the architecture of the neural network (the input layer, hidden layers, the output layer, the neurons in these layers, and the connections between neurons).
  • Show the weights of the neural network using labels, colors and lines.

This second feature lets model builders visualize the neural network and monitor how its training progresses in terms of weight adjustment. More interestingly, the rises and falls of these weights show which inputs the neural network considers more important than others in completing the task.

The major limitation of this Python module is that it struggles to visualize a large or complex neural network, as the plot becomes messy.

How to use it?

It is quite straightforward to use. ONLY three lines of code will do the job:

import VisualizeNN as VisNN

# 3 neurons in the input layer, 4 in the hidden layer, 1 in the output layer
network = VisNN.DrawNN([3, 4, 1])
network.draw()

The code above will generate a visualization of a neural network (3 neurons in the input layer, 4 neurons in the hidden layer, and 1 neuron in the output layer) without weights. If you want a visualization with weights, simply pass the weights to the DrawNN function:

# Pass the layer sizes together with the trained weights to include them in the plot
network = VisNN.DrawNN(network_structure, classifier_weights)
network.draw()

How to get the weights? Well, the weights can be obtained from the trained classifier. For example, the weights of scikit-learn's Multi-layer Perceptron classifier can be obtained via MLPClassifier.coefs_ (see the documentation of MLPClassifier).
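As a minimal sketch, here is one way to train an MLPClassifier and pass its learned weights to DrawNN. The toy dataset, variable names, and network size below are illustrative assumptions, not part of the module itself:

import numpy as np
from sklearn.neural_network import MLPClassifier
import VisualizeNN as VisNN

# Toy data: 100 samples with 3 features and a binary target (illustrative only)
X = np.random.rand(100, 3)
y = (X.sum(axis=1) > 1.5).astype(int)

# Train a 3-4-1 network to match the example above
clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000)
clf.fit(X, y)

# clf.coefs_ is a list of weight matrices, one per connection between layers
network = VisNN.DrawNN([3, 4, 1], clf.coefs_)
network.draw()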

Where to find this tool?

The tool is available in the repository on my GitHub page.

Gallery

The following visualization shows an artificial neural network (ANN) with 1 hidden layer (3 neurons in the input layer, 4 neurons in the hidden layer, and 1 neuron in the output layer). As you can see from the visualization, the first and second neurons in the input layer are more strongly connected to the final output than the third neuron. This indicates that the first and second neurons are more important than the third in this neural network.

This one below is an ANN with 1 hidden layer (5 neurons in the input layer, 10 neurons in the hidden layer, and 1 neuron in the output layer). As you may have noticed, the weights in these visualizations are displayed using labels, different colors and line widths. The orange color indicates a positive weight, while the blue color indicates a negative weight. Only those weights that are greater than 0.5 or less than -0.5 are labeled.

The last one is an ANN with 2 hidden layers (5 neurons in the input layer, 15 neurons in hidden layer 1, 10 neurons in hidden layer 2, and 1 neuron in the output layer). A structure like this could be drawn (without weights) with the same API shown earlier; the layer sizes in the sketch below simply mirror this description:
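import VisualizeNN as VisNN

# 5 inputs, two hidden layers of 15 and 10 neurons, and 1 output neuron
network = VisNN.DrawNN([5, 15, 10, 1])
network.draw()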