This repository contains the code/Jupyter notebooks for various neural networks developed from scratch, which can be used for learning purposes if you're a beginner. The code in this repository gives a basic intuition of how neural networks are trained and tested. Some basic bitwise (logic) functions are implemented using neural networks, and the examples are deliberately kept to such logic functions so the basics of the networks can be understood properly.
- y_in = net input to a neuron, calculated as y_in = ∑(i = 1 to n)(x_i * w_i) + b (see the short example after this list)
- t = target value in the training set
- y = output after applying the activation function to the net input
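The net input is just a dot product of the inputs and weights plus the bias. A minimal sketch with NumPy; the input and weight values here are made up purely for illustration:

```python
import numpy as np

# Illustrative values, not taken from this repository
x = np.array([1, 0, 1])         # input vector (x_1 ... x_n)
w = np.array([0.5, -0.2, 0.3])  # weight vector (w_1 ... w_n)
b = 0.1                         # bias

y_in = np.dot(x, w) + b         # y_in = sum over i of x_i * w_i, plus b
print(y_in)                     # 0.9
```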
Activation functions are the functions applied to the net input of a neuron to obtain the desired output from the neuron.
There are several predefined activation functions, but you may create your own according to your needs or use the predefined ones. Common choices (sketched in code after this list):
- Sigmoid function
- ReLU function
- Binary step function
- Linear function
- Leaky ReLU function
- Tanh function
- Softmax function
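Below is a minimal sketch of how these activation functions could be written with NumPy; the step threshold and the leaky slope are common defaults, not values fixed by this repository:

```python
import numpy as np

def sigmoid(y_in):
    # Squashes the net input into the range (0, 1)
    return 1 / (1 + np.exp(-y_in))

def relu(y_in):
    # Passes positive net inputs through, clamps negatives to 0
    return np.maximum(0, y_in)

def binary_step(y_in, threshold=0.0):
    # Outputs 1 if the net input reaches the threshold, else 0
    return np.where(y_in >= threshold, 1, 0)

def linear(y_in):
    # Identity: output equals net input
    return y_in

def leaky_relu(y_in, alpha=0.01):
    # Like ReLU, but lets a small slope through for negative inputs
    return np.where(y_in > 0, y_in, alpha * y_in)

def tanh(y_in):
    # Squashes the net input into the range (-1, 1)
    return np.tanh(y_in)

def softmax(y_in):
    # Turns a vector of net inputs into a probability distribution
    e = np.exp(y_in - np.max(y_in))  # subtract max for numerical stability
    return e / e.sum()
```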
Perceptron networks belong to the supervised learning type of networks.
These networks consist of three layers:
- Sensory unit (input units)
- Associator unit (hidden units/layers)
- Response unit (output units/layers)
Perceptron network learning algorithm (see the code sketch after this list):
- Initialize weights and bias (preferably kept at 0 for ease)
- For each training input vector x, calculate the net input y_in = ∑(i = 1 to n)(x_i * w_i) + b
- Apply the activation function to y_in to obtain the output y
- Compare y with the target value t
- If they match, the weights stay the same: w_i(new) = w_i(old)
- Else update the weights and bias: w_i(new) = w_i(old) + alpha * t * x_i, b(new) = b(old) + alpha * t
- If the weights don't change after iterating over all training examples, then end; else continue with steps 2-6
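Here is a minimal sketch of these steps trained on the AND logic function. Bipolar inputs and targets (-1/1) are assumed, a common convention for the t * x_i update rule, and all values and names are illustrative:

```python
import numpy as np

# AND function with bipolar inputs and targets (illustrative choice)
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])  # training input vectors
T = np.array([1, -1, -1, -1])                       # target values
alpha = 1.0                                          # learning rate

w = np.zeros(2)  # weights initialized to 0 for ease
b = 0.0          # bias initialized to 0

while True:
    changed = False
    for x, t in zip(X, T):
        y_in = np.dot(w, x) + b       # step 2: net input
        y = 1 if y_in >= 0 else -1    # step 3: bipolar step activation
        if y != t:                    # step 4: compare output with target
            w = w + alpha * t * x     # step 6: update the weights...
            b = b + alpha * t         # ...and the bias
            changed = True
    if not changed:                   # step 7: stop when weights are stable
        break

print(w, b)  # e.g. [1. 1.] -1.0, a decision line that realizes AND
```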