This is the first part in a series of articles where I will explain the inner workings of a neural network. I will lay the foundation for the theory behind it as well as show how a competent neural network can be written in a few easy-to-understand lines of Java code.
Author: Tobias Hill
Part 2 – Gradient descent and backpropagation
In this article you will learn how a neural network can be trained using backpropagation and stochastic gradient descent. The theory is described thoroughly, and a detailed example calculation is included in which both weights and biases are updated.
Part 3 – Implementation in Java
In this article you will see how the theory presented in the previous two articles can be implemented in easy-to-understand Java code. The full neural network implementation can be downloaded, inspected in detail, built upon and experimented with.
Part 4 – Better, faster, stronger
In this article we will build upon the basic neural network presented in part 3. We will add a few gems that improve the network: small changes with great impact. The concepts we will meet, such as initialization, mini-batches, parallelization, optimizers and regularization, are things you would no doubt run into quite quickly when learning about neural networks. This article gives a guided tour by example.
Part 5 – Training the network to read handwritten digits
In this final article we will see what this neural network implementation is capable of. We will throw one of the most common datasets at it (MNIST) and see whether we can train a neural network to recognize handwritten digits.
Extra 1 – Data augmentation
How to get 1% better accuracy with a simple data augmentation trick on the input data.
Extra 2 – An MNIST Playground
Try out, directly in your browser, a neural network that has been trained to read handwritten digits. See how it classifies what you draw.