ARTICLE: Neural Networks - Backpropagation

  • Marco Singh
  • 30 Sep 2016
  • 1 min read

When I was first introduced to neural networks, I found it hard to obtain a detailed description of the inner workings of the backpropagation algorithm. Backpropagation, the algorithm that actually makes a neural network capable of learning, is so essential that I wanted to know every single detail of its derivation (that's just how I am as a person!). That is what I try to achieve with this paper. Before diving into the derivation of the backpropagation algorithm, I recap how the feedforward network is calculated, so that the reader can easily follow the mathematical derivations and the chosen notation. After deriving the general backpropagation result, I use a specific choice of cost function (the quadratic cost function) and a specific activation function (the sigmoid function) to obtain the backpropagation equations for these choices; the reader can then easily substitute another cost function and/or activation function. Lastly, I show the backpropagation results when regularization is used, concretely L1 or L2 regularization.
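As a rough sketch of where such a derivation lands (using the common notation $w^l$, $b^l$, $a^l = \sigma(z^l)$ for layer $l$; the paper's own notation may well differ), the core backpropagation equations for the quadratic cost $C = \tfrac{1}{2}\|a^L - y\|^2$ are:

```latex
\[
\delta^L = (a^L - y) \odot \sigma'(z^L), \qquad
\delta^l = \left( (w^{l+1})^{T} \delta^{l+1} \right) \odot \sigma'(z^l),
\]
\[
\frac{\partial C}{\partial b^l_j} = \delta^l_j, \qquad
\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \, \delta^l_j .
\]
```

With L2 regularization the cost gains a term $\frac{\lambda}{2n}\sum_w w^2$, so each weight gradient picks up an extra $\frac{\lambda}{n} w$; with L1 it gains $\frac{\lambda}{n}\sum_w |w|$, adding $\frac{\lambda}{n}\,\mathrm{sgn}(w)$ instead. The bias gradients are unchanged, since biases are typically not regularized.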

This paper assumes prior knowledge of simple neural networks, but briefly recaps the forward-propagation algorithm that computes the network's prediction from the input data.
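Those two passes can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's code: one hidden layer, sigmoid activations throughout, quadratic cost, and variable names of my own choosing.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, params):
    """Forward propagation through one hidden layer.

    Returns pre-activations (z1, z2) and activations (a1, a2)."""
    W1, b1, W2, b2 = params
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    a2 = sigmoid(z2)
    return z1, a1, z2, a2

def backprop(x, y, params):
    """Gradients of the quadratic cost C = 0.5 * ||a2 - y||^2.

    Uses sigma'(z) = a * (1 - a); returns (dW1, db1, dW2, db2)."""
    W1, b1, W2, b2 = params
    z1, a1, z2, a2 = forward(x, params)
    delta2 = (a2 - y) * a2 * (1 - a2)           # output-layer error
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)    # error propagated backwards
    return (np.outer(delta1, x), delta1,
            np.outer(delta2, a1), delta2)
```

A quick way to trust a hand-derived backprop like this is a finite-difference check: perturb one weight by a small epsilon and compare the resulting change in the cost against the analytic gradient.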

Click on the picture below to read the article.


© 2016 by Marco Singh. Proudly created with Wix.com 
