Neural Networks Implementation


A neural network learns from a sample of the population under study. During learning, the value delivered by the output unit is compared with the actual value, and the weights of all units are then adjusted to improve the prediction.

There are many algorithms available for training artificial neural networks. Let us now look at some important ones:

Gradient Descent — used to find the local minimum of a function.

Evolutionary Algorithms — based on the concept of survival of the fittest in biology.

Genetic Algorithm — selects the rules most appropriate for the solution of a problem, which then pass their ‘genetic material’ on to ‘child’ rules. We will study them in detail below.

Gradient Descent

We use the gradient descent algorithm to find the local minimum of a function. The algorithm converges to the local minimum by taking steps proportional to the negative of the gradient of the function. To find a local maximum instead, take steps proportional to the positive gradient; this is gradient ascent.

In linear models, the error surface is a well-defined and well-documented mathematical object in the shape of a parabola, and the lowest point can be found by calculation. Unlike linear models, neural networks are complex nonlinear models, so the error surface has an irregular layout, crisscrossed with hills, valleys, plateaus, and deep ravines. To find the lowest point on this surface, for which no map is available, the user must explore it.

In this algorithm, you move over the error surface by following the path with the steepest slope, which offers the best chance of reaching the lowest possible point. You then need to compute the optimal rate at which to travel down the slope.

The correct speed is proportional to the slope of the surface and to the learning rate. The learning rate controls how much the weights are modified during the training process.

Hence, the momentum of a neural network can affect the performance of a multilayer perceptron.
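The descent described above can be sketched in a few lines of Python. The function f(w) = (w - 3)**2 and the learning rate below are illustrative choices for the demo, not part of the original program: at each step we move against the gradient, scaled by the learning rate.

```python
# A minimal sketch of gradient descent on f(w) = (w - 3)**2,
# whose minimum is at w = 3 (function and rate chosen for illustration).
def gradient_descent(grad, w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # step proportional to the negative gradient
    return w

# gradient of (w - 3)**2 is 2 * (w - 3)
w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_min, 3))  # converges near 3.0
```

A learning rate that is too large would make the steps overshoot the valley; one that is too small would make convergence needlessly slow.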

Evolutionary Algorithms

This algorithm is based on the concept of survival of the fittest in biology. The concept states that, for a given population, environmental conditions exert a pressure that leads to the rise of the fittest in that population.

To measure fitness in a given population, you can apply a fitness function as an abstract measure.

In the context of evolutionary algorithms, recombination is an operator applied to two or more candidates, called parents, which results in one or more new candidates, called children. Mutation is applied to a single candidate and results in a new candidate. By applying recombination and mutation, we get a set of new candidates to place in the next generation, based on their fitness measure.

The two basic elements of evolutionary algorithms in Neural Network are:

Variation operators (recombination and mutation)
Selection process (selection of the fittest)

The common features of evolutionary algorithms are:

Evolutionary algorithms are population-based.
Evolutionary algorithms use recombination to combine candidates of a population and make new candidates.
Evolutionary algorithms are based on random selection.

Hence, depending on the details of the problem at hand, we use various forms of evolutionary algorithms.
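The common features above can be illustrated with a minimal, hypothetical evolutionary loop in Python. Everything here is invented for the demo (the fitness function, the mutation scale, the half-and-half selection), not taken from any specific library: a population of numbers is varied by mutation and the fittest survive each generation.

```python
import random

# A toy evolutionary algorithm: population-based, with variation
# (mutation) and selection of the fittest each generation.
def evolve(fitness, population, generations=100, mut_scale=0.5):
    for _ in range(generations):
        # Variation: every candidate produces a slightly mutated child
        children = [c + random.uniform(-mut_scale, mut_scale) for c in population]
        # Selection: keep the fittest half of parents + children
        pool = sorted(population + children, key=fitness, reverse=True)
        population = pool[:len(population)]
    return population

# Maximize f(x) = -(x - 5)**2, so the fittest candidates approach 5
best = evolve(lambda x: -(x - 5) ** 2, [random.uniform(0, 10) for _ in range(10)])
```

Recombination is omitted here for brevity; a full evolutionary algorithm would also cross pairs of parents, as described above.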

Some common evolutionary algorithms are:

Genetic Algorithm — provides solutions to optimization problems with the help of natural evolutionary processes, such as mutation, recombination, crossover, and inheritance.

You can review an example here: https://github.com/Shirisha-D/GeneticAlgorithm.git

Genetic Programming — provides a solution in the form of computer programs. The accuracy of a program is measured by its ability to solve computational problems.

https://github.com/Shirisha-D/gplearn

Evolutionary Programming — we use it to develop AI in a simulated environment.

Evolution Strategy — an optimization algorithm grounded in the concepts of adaptation and evolution in biology. https://deap.readthedocs.io/en/master/

Neuroevolution — we use neuroevolution to train neural networks. Genomes are used to develop the networks by specifying their structure and connection weights.

Among all of these, the genetic algorithm is the most common evolutionary algorithm.

Genetic Algorithm

Genetic algorithms were developed by John Holland’s group in the early 1970s. They enable the rules most appropriate for the solution of a problem to be selected, so that those rules pass their ‘genetic material’ (their variables and categories) on to ‘child’ rules.

Here, a rule refers to a set of categories of variables: for example, customers aged between 36 and 50, with financial assets of less than $20,000 and a monthly income of more than $2,000.

A rule is the equivalent of a branch of a decision tree; it is also analogous to a gene. You can think of genes as units inside cells that control how living organisms inherit features of their parents. Genetic algorithms thus aim to reproduce the mechanisms of natural selection, by selecting the rules best adapted to prediction and by crossing and mutating them until a predictive model is obtained.

Together with neural networks, they form the second family of algorithms that mimic natural mechanisms to explain phenomena that are not necessarily natural.

The steps for executing genetic algorithms are:

Step 1: Random generation of initial rules — generate the rules first, with the constraint that they must all be distinct. Each rule contains a random number of variables, chosen by the user.

Step 2: Selection of the best rules — score the rules against the goal with the fitness function, to guide the evolution toward the best rules. The best rules maximize the fitness function and are retained with a probability that increases as the rule improves. Some rules will disappear while others are selected several times.

Step 3: Generation of new rules by mutation or crossing — return to step 2 until the execution of the algorithm stops. The chosen rules are randomly mutated or crossed. A mutation is the replacement of a variable or a category of an original rule with another.

A crossing of two rules is the exchange of some of their variables or categories to produce two new rules. Crossing is more common than mutation.

The algorithm ends when one of the following two conditions is met:

A specified number of iterations is reached.
Starting from the generation of rank n, rules of generations n, n-1, and n-2 are (almost) identical.
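The three steps above can be sketched in Python. Everything in this sketch is a toy setup invented for illustration: the ‘rules’ are bit strings and the fitness function simply counts 1s, standing in for predictive quality.

```python
import random

# A toy genetic algorithm following the three steps above.
def genetic_algorithm(n_bits=12, pop_size=20, generations=100):
    # Step 1: random generation of initial rules (bit strings)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2: selection of the best rules (here, simply keep the top half)
        pop.sort(key=sum, reverse=True)
        parents = pop[: pop_size // 2]
        # Step 3: generation of new rules by crossing and mutation
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, n_bits - 1)
            child = a[:cut] + b[cut:]       # crossing: exchange of variables
            if random.random() < 0.2:       # mutation: replace one variable
                i = random.randrange(n_bits)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=sum)

best = genetic_algorithm()
print(sum(best))  # approaches n_bits as the rules evolve
```

A real application would use fitness-proportional selection and a stopping test on successive generations, as described above, rather than a fixed iteration count and a top-half cutoff.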

So, this was all about neural network algorithms. We hope you liked our explanation.

Creating our own simple neural network

Let’s create a neural network from scratch with Python (3.x in the example below).

import numpy, random, os

lr = 1 #learning rate

bias = 1 #value of bias


weights = [random.random(),random.random(),random.random()] #weights generated in a list (3 weights in total for 2 neurons and the bias)

The beginning of the program just imports the libraries, defines the values of the parameters, and creates a list containing the values of the weights that will be modified (those are generated randomly).

def Perceptron(input1, input2, output) :

    outputP = input1*weights[0] + input2*weights[1] + bias*weights[2]

    if outputP > 0 : #activation function (here Heaviside)
        outputP = 1
    else :
        outputP = 0

    error = output - outputP

    weights[0] += error * input1 * lr
    weights[1] += error * input2 * lr
    weights[2] += error * bias * lr


Here we create a function which defines the work of the output neuron. It takes 3 parameters (the two input values and the expected output). “outputP” is the variable corresponding to the output given by the Perceptron. Then we calculate the error, and use it to modify the weights of each connection to the output neuron right after.

for i in range(50) :
    Perceptron(1,1,1) #True or true
    Perceptron(1,0,1) #True or false
    Perceptron(0,1,1) #False or true
    Perceptron(0,0,0) #False or false


We create a loop that makes the neural network repeat every situation several times. This part is the learning phase. The number of iterations is chosen according to the precision we want. However, be aware that too many iterations could lead the network to overfitting, which causes it to focus too much on the treated examples, so that it could fail to give a right output on a case it didn’t see during its learning phase.

However, our case here is a bit special, since there are only 4 possibilities, and we give the neural network all of them during its learning phase. Normally, a Perceptron is supposed to give a correct output without ever having seen the case it is treating.

x = int(input())
y = int(input())

outputP = x*weights[0] + y*weights[1] + bias*weights[2]

if outputP > 0 : #activation function
    outputP = 1
else :
    outputP = 0

print(x, "or", y, "is : ", outputP)


Finally, we can ask the user to enter the values himself, to check whether the Perceptron is working. This is the testing phase.

The Heaviside activation function is interesting to use in this case, since it maps all values to exactly 0 or 1, and we are looking for a false-or-true result. We could instead try a sigmoid function and obtain a decimal number between 0 and 1, normally very close to one of those limits.

outputP = 1/(1+numpy.exp(-outputP)) #sigmoid function

We could also save the weights that the neural network just calculated in a file, to use them later without running another learning phase. This is done for bigger projects, in which that phase can last days or weeks.
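As a sketch of that idea with numpy (the file name and the example weight values below are illustrative, not produced by the program above):

```python
import numpy

# Example weights such as the list trained above (values are illustrative)
weights = [0.42, -0.17, 0.93]

# Save once after the learning phase...
numpy.save("perceptron_weights.npy", weights)

# ...then any later run can reload them and skip straight to testing
restored = numpy.load("perceptron_weights.npy")
print(list(restored))
```

The Python `pickle` module would work just as well for a plain list; `numpy.save` has the advantage of a compact binary format when the weights are large arrays.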


Artificial neural networks are usually difficult to configure and slow to train, but once prepared they are very fast in application. They are generally designed as models to overcome mathematical, computational, and engineering problems, since they draw on a lot of research in mathematics, neurobiology, and computer science.

Follow, Subscribe and share it with your friends for more interesting content on Deep Learning.




