Neural networks are one of the methods for creating artificial intelligence in computers. They are a way of solving problems that are too difficult or complicated to solve using traditional algorithms and programmatic methods. Some believe that neural networks are the future of computers and ultimately, humankind.
In this article, we’ll describe how to implement a neural network in C# .NET and train it using a genetic algorithm. Our networks will battle against each other for survival of the fittest while learning the logical functions AND, OR, and XOR. While these functions may seem trivial, they provide an easy introduction to pairing a neural network with a genetic algorithm. Once a neural network has evolved to solve the simplest of functions, the same approach can be used to create much more powerful networks.
The Neural Network is a Brain
The neural network is modeled after what we believe to be the mechanics of the brain. By connecting neurons together, adding weights to the synapses, and connecting layers of neurons, the neural network simulates the processing behind the brain. Once a neural network is trained, the network itself holds the series of weights and can be considered the solution to a particular problem. By running the neural network with a series of inputs, an output is generated which provides the solution.
Supervised Training of a Neural Network is Boring
One of the more common methods for training a neural network is to use supervised training with backpropagation. This consists of creating a set of test cases for training and running the neural network on the training set. The neural network receives inputs from each piece in the training set and calculates the output. The difference between the output and desired output is calculated, and the neurons’ weights are adjusted to minimize the difference, thus training the network. This process is repeated multiple times on each test case in the set until an acceptable error threshold is reached. Backpropagation is an acceptable method to train a neural network when you already have a set of test cases. However, if the problem you are trying to solve has too many possible cases, or is too complicated to create specific test cases for, then you need an automatic approach to training the network.
Time For Battle with a Genetic Algorithm
According to the theory of evolution, the human brain took millions of years to evolve to where it stands today. By implementing a similar algorithm, we can battle hundreds of neural networks against each other to solve a problem. The most fit of these networks go on to create even more precise offspring, until we arrive at a satisfactory solution to the problem at hand.
The basics behind the genetic algorithm follow that of evolution. We start with a population of neural networks, assigned random weights. We determine a fitness test to run each network against. This allows us to determine how fit a neural network is to solve our problem. The most fit of the population move on to create offspring, with slightly different weights. This process can continue for as many iterations as desired.
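The steps above can be sketched in plain C#, independent of any neural network library. This is a simplified, hypothetical illustration: the `Fitness` function here is a stand-in (rewarding genomes whose weights sum near 1.0) for the real fitness test, which would run a neural network.

```csharp
using System;
using System.Linq;

class EvolutionSketch
{
    static Random rng = new Random(0);

    // Stand-in fitness test: higher is better. In the article, this
    // is replaced by running a neural network with the genome's weights.
    public static double Fitness(double[] genome) =>
        1.0 / (1.0 + Math.Abs(genome.Sum() - 1.0));

    static void Main()
    {
        const int populationSize = 100, genomeLength = 12, epochs = 200;

        // Start with a population of genomes, assigned random weights.
        double[][] population = Enumerable.Range(0, populationSize)
            .Select(i => Enumerable.Range(0, genomeLength)
                .Select(j => rng.NextDouble() * 2 - 1).ToArray())
            .ToArray();

        for (int epoch = 0; epoch < epochs; epoch++)
        {
            // Rank the population: the most fit half survives.
            double[][] ranked = population.OrderByDescending(Fitness).ToArray();

            for (int i = 0; i < populationSize / 2; i++)
            {
                // Each survivor produces one offspring with slightly different weights.
                double[] child = (double[])ranked[i].Clone();
                child[rng.Next(genomeLength)] += rng.NextDouble() * 0.2 - 0.1;
                population[i] = ranked[i];
                population[populationSize / 2 + i] = child;
            }
        }

        Console.WriteLine("Best fitness: " + population.Max(Fitness));
    }
}
```

A real genetic algorithm would typically add crossover between parents as well; this sketch shows only selection and mutation, which are enough to illustrate the loop.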
Setting up the Neural Network
We’ll be using a C# .NET neural network library called NeuronDotNet. For our genetic algorithm, we’ll also be using a basic library, available here.
Download complete project source code.
To start our project, simply create a basic Visual Studio Console Application with C# .NET. You’ll define your program body as follows:
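The original listing is reconstructed here as a sketch from the description that follows. It uses NeuronDotNet’s documented types, but details such as the training cycle count and output formatting are illustrative assumptions.

```csharp
using System;
using NeuronDotNet.Core;
using NeuronDotNet.Core.Backpropagation;

class Program
{
    static void Main(string[] args)
    {
        // Create the three layers: 2 input neurons, 2 hidden, 1 output.
        LinearLayer inputLayer = new LinearLayer(2);
        SigmoidLayer hiddenLayer = new SigmoidLayer(2);
        SigmoidLayer outputLayer = new SigmoidLayer(1);

        // Connect the layers; the connectors create the synapses.
        new BackpropagationConnector(inputLayer, hiddenLayer);
        new BackpropagationConnector(hiddenLayer, outputLayer);

        // Our brain.
        BackpropagationNetwork network =
            new BackpropagationNetwork(inputLayer, outputLayer);
        network.Initialize(); // assign random values to the weights

        // Training set for the AND function.
        TrainingSet trainingSet = new TrainingSet(2, 1);
        trainingSet.Add(new TrainingSample(new double[] { 0, 0 }, new double[] { 0 }));
        trainingSet.Add(new TrainingSample(new double[] { 0, 1 }, new double[] { 0 }));
        trainingSet.Add(new TrainingSample(new double[] { 1, 0 }, new double[] { 0 }));
        trainingSet.Add(new TrainingSample(new double[] { 1, 1 }, new double[] { 1 }));

        // Train with backpropagation (cycle count is illustrative).
        network.Learn(trainingSet, 10000);

        // Try the trained brain on each input pair.
        foreach (double[] input in new[] {
            new double[] { 0, 0 }, new double[] { 0, 1 },
            new double[] { 1, 0 }, new double[] { 1, 1 } })
        {
            double output = network.Run(input)[0];
            Console.WriteLine("Input 1: {0}\nInput 2: {1}\nOutput: {2:F3}",
                input[0], input[1], output);
        }
    }
}
```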
In the above code, we’re using the neural network library to create our network. We’re actually training the network using backpropagation for starters, but we’ll change this to a genetic algorithm in the next step. Notice that we have a variable to represent our brain, called network. We then create 3 layers for our network. It’s important to note that to solve the functions AND and OR, we actually only need 2 layers (input and output). However, to solve the XOR function, we’ll need an additional hidden layer, because XOR is not linearly separable. You can review the mathematics behind this for details, but we’ll skip it here.
The C# neural network library then requires us to create connections between the layers. We do this by instantiating the BackpropagationConnector objects for each layer. Once linked, we call Initialize() to assign random values to the weights of the neurons.
We then proceed to create a training set for the function AND and train the network with backpropagation.
The AND function
The AND function is equivalent to binary multiplication and, given two binary digits, behaves as follows:
0 0 = 0
0 1 = 0
1 0 = 0
1 1 = 1
Our training set contains these four cases and the desired output. When we call the Learn() method on our neural network, the network learns how to arrive at the desired output when given each of the input values for AND. Once complete, our brain can perfectly perform the AND function.
Neural Network Output to AND
Input 1: 0
When running the program as shown above, we provide two binary digits as input and receive an output from the brain. The brain responds with a value from 0 to 1. The closer the value is to 1, the more of a “YES” it can be considered; the closer the value is to 0, the more of a “NO”. In the above output, you can see that providing 0, 0 resulted in the network printing 0.005, which, when rounded, is 0. This is the correct answer to 0 AND 0. The same follows for the remaining cases. Most notably, when we provide 1, 1, the network responds with 0.957, which, when rounded, is 1. This is the correct answer to 1 AND 1.
Backpropagation is Interesting, But Let’s Get to the Brain Wars
The backpropagation technique was shown above so that you can compare results: once we create a genetic algorithm to match the brains against each other for survival of the fittest, we’ll get the same, if not better, results from the winning neural network. It may seem mysterious how the genetic algorithm actually works, but the key is that the best will survive.
Adding the Code for the Genetic Algorithm
Modify the above code example by removing the training section of code and replacing it as follows:
static void Main(string[] args)
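The body of this Main method might look like the following sketch. The `GA` and `GAFunction` names are assumptions modeled on a typical open-source C# genetic algorithm library; the companion library used in the article may differ.

```csharp
// ... create the network layers and connectors exactly as before ...

// Train with a genetic algorithm instead of backpropagation:
// 50% crossover, 1% mutation rate, population of 100, 2,000 epochs,
// and 12 weights per genome (this last number must match the network).
GA ga = new GA(0.50, 0.01, 100, 2000, 12);
ga.FitnessFunction = new GAFunction(fitnessFunction);
ga.Go();

// Pick the most fit brain from the final population.
double[] bestWeights;
double bestFitness;
ga.GetBest(out bestWeights, out bestFitness);
Console.WriteLine("Best Fitness: " + bestFitness);

// Assign the winning weights to our neural network and test it.
setNetworkWeights(network, bestWeights);
```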
We’ll fill in the helper functions, including the fitnessFunction in a moment, but first a few notes on the above code. Notice that we’ve replaced the neural network training section with a genetic algorithm training method. We instantiate the genetic algorithm with a crossover of 50%, mutation rate of 1%, population size of 100, epoch length of 2,000 iterations, and the number of weights at 12. While the other numbers are variable, the last number is not. This one must match the exact number of weights used in your neural network. Since our network consists of 3 layers (input, hidden, and output) with 2 neurons at the input layer, 2 neurons in the hidden layer, and 1 neuron in the output layer, a fully connected neural network would require 6 connections (also called synapses). We must double this to include bias values. This gives us a total of 12 variable weights for the network. Our genetic algorithm will take care of assigning the weights. Evolution will take care of picking the best network. We just have to worry about the setup.
After our genetic algorithm finishes its evolution epochs, we pick the best result from the final population and assign its weights to a neural network. This gives us the best brain for the AND function. We then run the same test code to try the brain out.
You’ll need the following helper functions to implement the genetic algorithm:
public static void setNetworkWeights(BackpropagationNetwork aNetwork, double[] weights)
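Sketched in full, the helper might look like this. The member names (`Connectors`, `Synapses`, `Weight`, `SourceNeuron`, `SetBias`) follow NeuronDotNet’s object model but should be treated as assumptions; assigning a weight and a bias per synapse is also what accounts for the 12 values (6 synapses × 2) mentioned earlier.

```csharp
// Copy the genetic algorithm's array of doubles into the network,
// assigning a weight and a bias value for each synapse.
public static void setNetworkWeights(BackpropagationNetwork aNetwork, double[] weights)
{
    int index = 0;

    foreach (BackpropagationConnector connector in aNetwork.Connectors)
    {
        foreach (BackpropagationSynapse synapse in connector.Synapses)
        {
            synapse.Weight = weights[index++];
            synapse.SourceNeuron.SetBias(weights[index++]);
        }
    }
}
```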
The first function is simply a helper to populate the weights and biases of a neural network from an array of double values (our genetic algorithm’s genomes are arrays of doubles). The most important function in the genetic algorithm is the fitness test.
The Fitness Test is the Hardest Part
The fitness test has always been the hardest part of creating a genetic algorithm. You have to devise a way to judge the fitness of a neural network based upon its output. Even when a network fails to give the correct output, you must provide an indication of how close it came, so that the genetic algorithm can rank the networks in the population and know which are performing better. Even if every network in the current population performs horribly, some certainly perform better than others. The hard part is that we have to determine this automatically. Luckily for our example, we can easily create a fitness test for the AND function.
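Here is a sketch of that fitness test. It assumes `network` is a field holding the neural network created earlier, and that NeuronDotNet’s `Run` method returns the output array; the network earns up to 1 point per test case, for a maximum fitness of 4.

```csharp
// Fitness test for the AND function.
public static double fitnessFunction(double[] weights)
{
    double fitness = 0;

    // Load this genome's weights into the network.
    setNetworkWeights(network, weights);

    // 0 AND 0 = 0: the closer the output is to 0, the higher the score.
    double output = network.Run(new double[] { 0, 0 })[0];
    fitness += 1 - output;

    // 0 AND 1 = 0
    output = network.Run(new double[] { 0, 1 })[0];
    fitness += 1 - output;

    // 1 AND 0 = 0
    output = network.Run(new double[] { 1, 0 })[0];
    fitness += 1 - output;

    // 1 AND 1 = 1: the closer the output is to 1, the higher the score.
    output = network.Run(new double[] { 1, 1 })[0];
    fitness += output;

    return fitness;
}
```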
In the above function fitnessFunction(), we first populate a neural network with the weights from the current genome. We then run the network 4 times, once for each input possibility. We want the output to closely match 1 when the input values are 1 and 1; for everything else, we want the network to output zero. We express this in the fitness function by awarding points based upon how close the output is to our desired value. For example, if the inputs are 0, 0, we want an output as close to zero as possible. The closer the output is to zero, the higher the score this network will get. We calculate this as 1 - output. So if the output was 0.8 (very close to 1, which is very incorrect since 0 AND 0 = 0), we only give a score of 1 - 0.8, which is only 0.2. On the other hand, if the output is 0.1, which is very nearly correct, we give a score of 1 - 0.1, which is 0.9. We continue this for the other test cases.
Whenever you create a fitness function for a genetic algorithm, remember that the most important part is to provide a fine gradient score. No matter how good or bad a network is, you should be able to give some numeric indication of how far off the network is from success.
With the fitness test in place, we can now run the network and see how it does.
Neural Network Output to AND with a Genetic Algorithm
Generation 0, Best Fitness: 3
Notice in the output that our genetic algorithm advances in fitness as the populations evolve. After 2,000 epochs, our best brain had a fitness of 3.99. When we run the network, we get a very accurate answer. All outputs are 0 or less, except for 1 AND 1, which produces an output of 0.99 and, when rounded, is 1.
Implementing the OR Function
With our core code setup, we can easily implement the OR function by simply changing our fitnessFunction as follows:
public static double fitnessFunction(double[] weights)
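Sketched in full, the OR version rewards an output near 1 for every input pair except 0, 0, using the same `network` field and setNetworkWeights helper as the AND example (maximum fitness is again 4):

```csharp
// Fitness test for the OR function.
public static double fitnessFunction(double[] weights)
{
    double fitness = 0;

    // Load this genome's weights into the network.
    setNetworkWeights(network, weights);

    fitness += 1 - network.Run(new double[] { 0, 0 })[0]; // 0 OR 0 = 0
    fitness += network.Run(new double[] { 0, 1 })[0];     // 0 OR 1 = 1
    fitness += network.Run(new double[] { 1, 0 })[0];     // 1 OR 0 = 1
    fitness += network.Run(new double[] { 1, 1 })[0];     // 1 OR 1 = 1

    return fitness;
}
```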
Neural Network Output to OR with a Genetic Algorithm
Generation 0, Best Fitness: 2.99999999999995
Again, notice that after 2,000 epochs, the best neural network correctly solves the OR function. OR behaves as follows:
0 0 = 0
0 1 = 1
1 0 = 1
1 1 = 1
From our output, you can see that when we input 0, 0, the network outputs zero or less. When we provide 0, 1 we receive 0.99, which when rounded equals 1. The same follows for the remaining cases.
Implementing the XOR Function
The XOR function is a little more tricky for the brain. It’s not as simple a function as AND or OR, and it actually requires the hidden layer in the neural network. Without that hidden layer, the brain simply can’t compute the XOR function, because XOR is not linearly separable. Since our network already has a hidden layer with the required neurons, we can implement the XOR function by simply changing our fitnessFunction as follows:
public static double fitnessFunction(double[] weights)
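Sketched in full, the XOR version rewards an output near 1 only when exactly one input is 1, with the same assumed `network` field and helper as the earlier examples:

```csharp
// Fitness test for the XOR function.
public static double fitnessFunction(double[] weights)
{
    double fitness = 0;

    // Load this genome's weights into the network.
    setNetworkWeights(network, weights);

    fitness += 1 - network.Run(new double[] { 0, 0 })[0]; // 0 XOR 0 = 0
    fitness += network.Run(new double[] { 0, 1 })[0];     // 0 XOR 1 = 1
    fitness += network.Run(new double[] { 1, 0 })[0];     // 1 XOR 0 = 1
    fitness += 1 - network.Run(new double[] { 1, 1 })[0]; // 1 XOR 1 = 0

    return fitness;
}
```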
Neural Network Output to XOR with a Genetic Algorithm
Generation 0, Best Fitness: 2.39064761320888
Notice that the outputs for XOR, while slightly less sure than the previous examples, still provide correct answers. XOR behaves as follows:
0 0 = 0
0 1 = 1
1 0 = 1
1 1 = 0
Our trained brain correctly solves this. When provided an input of 0, 1, the brain outputs 0.88. While this isn’t as close as 0.99, it’s still correct, as when rounded it equals 1. This brain could benefit from more evolution; we only performed 2,000 epochs, and the XOR function is more complicated than the previous examples.
After running for 20,000 epochs, we obtain a best fitness of 3.84088, a noticeable improvement, and the outputs are as follows:
Generation 19900, Best Fitness: 3.84087575095576
Now you can see the brain is outputting a more exact answer of 0.96 when given 0, 1 and 0.99 when given 1, 0.
AND, OR, and XOR are great, but how about something cooler?
We’ve trained our neural network with a genetic algorithm in C# .NET to perform some basic logical functions. We’ve seen that the fitness test is the key to evolving the correct neural network. It was easy to train AND, OR, and XOR by modifying the fitness function. In fact, to train our neural network to do anything at all, we simply need to modify the fitness function, and our genetic algorithm handles the rest. The genetic algorithm will evolve whatever you want, based on the fitness function. Of course, your neural network has to have enough neurons to support the logic, but you can adjust that as needed. Just keep in mind that the more complex your neural network, the longer you’ll need to evolve the networks, and the more CPU power you’ll need for processing.
Thinking about creating the next HAL, Data, or Terminator? You just need to devise the correct fitness function with a larger neural network. Of course, while the total capabilities of the neural network aren’t fully realized yet, it’s certainly possible to push the boundaries of science.
About the Author
This article was written by Kory Becker, software developer and architect, skilled in a range of technologies, including web application development, machine learning, artificial intelligence, and data science.