Perceptron implementation in Python

First, we import NumPy, which handles the dot products and array manipulation used throughout.

import numpy as np

This is the constructor, in which we define the basic layout of the Perceptron:

  • eta: The learning rate, which controls how much the weights adjust during training. A smaller value gives slower but more stable learning.
  • n_iter: Number of epochs (full passes through the training data).
  • random_state: Ensures reproducibility by controlling the random number generator for weight initialization.
class Perceptron:
    def __init__(self, eta=0.01, n_iter=50, random_state=1):
        self.eta = eta                    # learning rate
        self.n_iter = n_iter              # number of epochs
        self.random_state = random_state  # seed for weight initialization
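
For instance, you could instantiate the classifier like this (the hyperparameter values here are arbitrary illustrations, not tuned recommendations):

ppn = Perceptron(eta=0.1, n_iter=10, random_state=1)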

This is the fit method, which trains the perceptron on the feature matrix X and class labels y.

  • Randomly initializes weights with small values (usually better than starting with zeros).
  • Bias b_ is initialized to 0.
  • errors_ tracks the number of misclassifications per epoch.
def fit(self, X, y):
    rgen = np.random.RandomState(self.random_state)
    # small random weights, one per feature
    self.w_ = rgen.normal(loc=0.0, scale=0.01, size=X.shape[1])
    self.b_ = 0.       # bias starts at zero
    self.errors_ = []  # misclassifications per epoch
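
As a standalone sketch of what this initialization produces (the feature count of 3 is just an example, not from the original code):

import numpy as np

rgen = np.random.RandomState(1)
w = rgen.normal(loc=0.0, scale=0.01, size=3)  # three small values near zero
b = 0.
# w is a length-3 array of values on the order of +/- 0.01;
# the seed makes the same values come back on every run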

This is the loop that updates the weights each epoch. It is the core Perceptron learning rule:

  • If the prediction is correct → no update.
  • If it is wrong → adjust the weights toward the true class. The formula is given in [[3. Perceptron#How to update Weights and Biases]]; a worked example follows the snippet below.
for _ in range(self.n_iter):
    errors = 0
    for xi, target in zip(X, y):
        # zero when the prediction matches the target
        update = self.eta * (target - self.predict(xi))
        self.w_ += update * xi
        self.b_ += update
        errors += int(update != 0.0)
    self.errors_.append(errors)
return self
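
In symbols, each misclassified sample moves the parameters by update = eta * (target − prediction), i.e. w += update * xi and b += update. To make that concrete, here is one hand-worked step with made-up numbers:

import numpy as np

eta = 0.1
xi = np.array([2.0, 3.0])
target, prediction = 1, 0              # a misclassified sample
update = eta * (target - prediction)   # 0.1

w = np.array([0.0, 0.0])
b = 0.0
w += update * xi   # w becomes [0.2, 0.3]
b += update        # b becomes 0.1

Because target − prediction is positive here, the weights grow in the direction of xi, pushing the net input for this sample toward the positive (correct) side.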

This is the function for the net (combined) input formula, as discussed in [[3. Perceptron#Working of perceptron algorithm]].

  • Computes the linear combination of inputs and weights + bias.
  • This is the input to the activation function (step function in Perceptron).
def net_input(self, X):
    return np.dot(X, self.w_) + self.b_
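
For example, with weights [0.2, 0.3], bias 0.1, and a sample [1, 2] (arbitrary numbers for illustration), the net input is 0.2·1 + 0.3·2 + 0.1 = 0.9:

import numpy as np

w = np.array([0.2, 0.3])
b = 0.1
x = np.array([1.0, 2.0])
print(np.dot(x, w) + b)   # 0.9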

This is the activation and prediction function.

  • Applies a unit step function: the output is 1 if the net input ≥ 0, else 0.
  • This makes it a binary classifier, suitable for linearly separable problems such as AND and OR; a full usage example follows the snippet below.
def predict(self, X):
    return np.where(self.net_input(X) >= 0.0, 1, 0)
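
Putting the pieces together, here is a minimal end-to-end sketch. It assumes the snippets above have been assembled into a single Perceptron class, and trains it on the OR truth table:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])   # OR labels

ppn = Perceptron(eta=0.1, n_iter=10, random_state=1)
ppn.fit(X, y)

print(ppn.predict(X))   # expected [0 1 1 1] once training converges
print(ppn.errors_)      # misclassifications per epoch, dropping to 0

Since OR is linearly separable, the perceptron convergence theorem guarantees the error count reaches zero after a finite number of updates; with these settings a handful of epochs suffices.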
