Imagine a straight line in a space scattered with x,y points. Train a perceptron to classify the points as above or below the line.
Create a Perceptron object. Name it anything (like Perceptron). Let the perceptron accept two parameters: the number of inputs (no) and the learning rate (learningRate). Set the default learning rate to 0.00001.
Then create random weights between -1 and 1 for each input.

Example
// Perceptron Object
function Perceptron(no, learningRate = 0.00001) {

  // Set Initial Values
  this.learnc = learningRate;
  this.bias = 1;

  // Compute Random Weights (one extra weight for the bias input)
  this.weights = [];
  for (let i = 0; i <= no; i++) {
    this.weights[i] = Math.random() * 2 - 1;
  }

// End Perceptron Object
}

The perceptron will start with a random weight for each input.
For each mistake made while training the perceptron, the weights are adjusted by a small fraction. This small fraction is the learning rate. In the Perceptron object we call it learnc.
If both inputs are zero, the weighted sum is always zero, no matter what the weights are, so the perceptron could never output 1 for such points. To avoid this, we give the perceptron an extra input with the constant value 1. This is called a bias.
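A quick sketch of why the bias helps (the weights here are made up purely for illustration):

```javascript
// With inputs [0, 0] the weighted sum is always 0, whatever the weights,
// so the output can never be 1.
const weights = [0.8, -0.4];
let sum = 0 * weights[0] + 0 * weights[1];
console.log(sum > 0 ? 1 : 0); // always 0

// Adding a constant bias input of 1 gives the perceptron one weight it can
// use to shift the output even when all the real inputs are zero.
const weightsWithBias = [0.8, -0.4, 0.3]; // the last weight belongs to the bias
sum = 0 * weightsWithBias[0] + 0 * weightsWithBias[1] + 1 * weightsWithBias[2];
console.log(sum > 0 ? 1 : 0); // 1, because the bias weight is positive
```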
Example
this.activate = function(inputs) {
  let sum = 0;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * this.weights[i];
  }
  if (sum > 0) {return 1} else {return 0}
}

The activation function returns:

1 if the sum is greater than 0
0 if the sum is 0 or less
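For example, here is the activation step computed by hand, with weights picked purely for illustration:

```javascript
// Hand-picked weights (illustrative): one per input, plus one for the bias.
const weights = [0.5, -0.2, 0.1];
const inputs = [2, 3, 1]; // [x, y, bias]

let sum = 0;
for (let i = 0; i < inputs.length; i++) {
  sum += inputs[i] * weights[i];
}
// sum = 0.5*2 - 0.2*3 + 0.1*1 = 0.5, which is greater than 0
const output = sum > 0 ? 1 : 0;
console.log(output); // 1
```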
The training function guesses the outcome using the activate function. Every time the guess is wrong, the perceptron adjusts its weights. After many guesses and adjustments, the weights converge on values that classify the points correctly.
Example
this.train = function(inputs, desired) {
  inputs.push(this.bias); // note: this appends the bias to the caller's array
  let guess = this.activate(inputs);
  let error = desired - guess;
  if (error != 0) {
    for (let i = 0; i < inputs.length; i++) {
      this.weights[i] += this.learnc * error * inputs[i];
    }
  }
}

After each guess, the perceptron calculates how wrong the guess was. If the guess is wrong, the perceptron adjusts the weights (including the weight on the bias input) so that the guess will be a little more correct the next time. This kind of error-driven learning is known as the perceptron learning rule. After a few thousand guesses and adjustments, your perceptron will become quite good at guessing.
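Here is one adjustment step worked through by hand. The starting weights are made up, and the learning rate is much larger than the lesson's default so the change is visible:

```javascript
// Suppose the desired output is 1 but the perceptron guesses 0.
const learnc = 0.1;                // illustrative, larger than the default
const inputs = [2, 3, 1];          // [x, y, bias]
const weights = [-0.5, 0.1, 0.05]; // hand-picked starting weights

let sum = 0;
for (let i = 0; i < inputs.length; i++) {
  sum += inputs[i] * weights[i];
}
const guess = sum > 0 ? 1 : 0; // sum = -1.0 + 0.3 + 0.05 = -0.65, so guess is 0
const error = 1 - guess;       // desired 1, guessed 0, so error = 1

// Each weight moves by learnc * error * input:
for (let i = 0; i < inputs.length; i++) {
  weights[i] += learnc * error * inputs[i];
}
console.log(weights); // approximately [-0.3, 0.4, 0.15]
```

Because each input multiplies its own weight in the update, inputs that contributed more to the wrong sum are corrected more strongly.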
Example
// Perceptron Object
function Perceptron(no, learningRate = 0.00001) {

  // Set Initial Values
  this.learnc = learningRate;
  this.bias = 1;

  // Compute Random Weights
  this.weights = [];
  for (let i = 0; i <= no; i++) {
    this.weights[i] = Math.random() * 2 - 1;
  }

  // Activate Function
  this.activate = function(inputs) {
    let sum = 0;
    for (let i = 0; i < inputs.length; i++) {
      sum += inputs[i] * this.weights[i];
    }
    if (sum > 0) {return 1} else {return 0}
  }

  // Train Function
  this.train = function(inputs, desired) {
    inputs.push(this.bias);
    let guess = this.activate(inputs);
    let error = desired - guess;
    if (error != 0) {
      for (let i = 0; i < inputs.length; i++) {
        this.weights[i] += this.learnc * error * inputs[i];
      }
    }
  }

// End Perceptron Object
}

Now you can include the library in HTML:

<script src="myperceptron.js"></script>
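To see the whole thing work, here is a self-contained sketch that trains a perceptron to classify points above and below a line. The Perceptron object from the lesson is repeated so the sketch runs on its own; the particular line (y = 0.7x + 0.1), the unit-square coordinates, and the larger learning rate are choices made for this sketch so training converges quickly, not part of the lesson:

```javascript
// The Perceptron object from the lesson, repeated so this sketch is runnable.
function Perceptron(no, learningRate = 0.00001) {
  this.learnc = learningRate;
  this.bias = 1;
  this.weights = [];
  for (let i = 0; i <= no; i++) {
    this.weights[i] = Math.random() * 2 - 1;
  }
  this.activate = function(inputs) {
    let sum = 0;
    for (let i = 0; i < inputs.length; i++) {
      sum += inputs[i] * this.weights[i];
    }
    return sum > 0 ? 1 : 0;
  };
  this.train = function(inputs, desired) {
    inputs.push(this.bias);
    let guess = this.activate(inputs);
    let error = desired - guess;
    if (error != 0) {
      for (let i = 0; i < inputs.length; i++) {
        this.weights[i] += this.learnc * error * inputs[i];
      }
    }
  };
}

// The line to learn (an illustrative choice).
function f(x) { return 0.7 * x + 0.1; }

// Generate training points in the unit square, skipping points too close
// to the line so the two classes are clearly separated.
const points = [];
while (points.length < 200) {
  const x = Math.random(), y = Math.random();
  if (Math.abs(y - f(x)) >= 0.1) points.push([x, y]);
}

// A larger learning rate than the lesson's default, so this small example
// converges in a reasonable number of epochs.
const ptron = new Perceptron(2, 0.1);

for (let epoch = 0; epoch < 2000; epoch++) {
  for (const [x, y] of points) {
    // Pass a fresh array each call, because train() pushes the bias onto it.
    ptron.train([x, y], y > f(x) ? 1 : 0);
  }
}

// Check how many training points the perceptron now classifies correctly.
let correct = 0;
for (const [x, y] of points) {
  const guess = ptron.activate([x, y, ptron.bias]);
  if (guess === (y > f(x) ? 1 : 0)) correct++;
}
console.log("accuracy:", correct / points.length);
```

Because the two classes are kept a fixed distance from the line, the perceptron convergence theorem guarantees the weights stop changing after finitely many mistakes, so the accuracy ends up at (or very near) 100% on the training points.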