This project implements a single-layer perceptron and a two-layer neural network (MLP) in pure NumPy, with no high-level machine learning libraries such as TensorFlow or PyTorch.
It serves as an educational demonstration of how neural networks work under the hood.
## Objectives

- Understand and implement the basic components of a neural network:
  - Initialization of weights and biases
  - Activation functions
  - Cost/loss computation
  - Gradient descent (manual backpropagation)
  - Training loop
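The components above fit together in a few lines of NumPy. As a minimal sketch (the toy dataset, variable names, and hyperparameters here are illustrative assumptions, not the repository's actual code), a single logistic neuron trained by gradient descent looks like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable data: label is 1 when x0 + x1 > 1 (assumed example)
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# Initialization of weights and bias
w = np.zeros(2)
b = 0.0
lr = 0.5

# Training loop: forward pass, cost, manual gradient, update
for epoch in range(500):
    a = sigmoid(X @ w + b)                               # activation
    loss = -np.mean(y * np.log(a + 1e-12)
                    + (1 - y) * np.log(1 - a + 1e-12))   # cross-entropy cost
    dz = a - y                                           # gradient w.r.t. pre-activation
    w -= lr * (X.T @ dz) / len(y)                        # gradient descent step
    b -= lr * dz.mean()

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

Each pass computes activations, measures the cost, and nudges the weights against the gradient; the notebook walks through the same steps cell by cell.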
## Features

- Perceptron (single layer)
  - Binary classification
  - Linear activation
- Two-layer neural network (MLP)
  - Hidden layer with non-linear activation (ReLU or sigmoid)
  - Output layer with softmax or sigmoid (depending on the task)
  - Manual gradient calculation and training
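A two-layer network with a ReLU hidden layer, a sigmoid output, and manually derived gradients can be sketched roughly as follows (the XOR data, layer sizes, and names are assumptions for illustration, not the repo's `mlp.py` API):

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR is not linearly separable, so a hidden layer is required
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Initialization: random hidden/output weights, zero biases (assumed sizes)
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr = 0.5
losses = []

for epoch in range(5000):
    # Forward pass: ReLU hidden layer, sigmoid output
    Z1 = X @ W1 + b1
    A1 = np.maximum(0.0, Z1)
    A2 = 1.0 / (1.0 + np.exp(-(A1 @ W2 + b2)))

    # Binary cross-entropy cost
    losses.append(-np.mean(y * np.log(A2 + 1e-12)
                           + (1 - y) * np.log(1 - A2 + 1e-12)))

    # Manual backpropagation
    dZ2 = (A2 - y) / len(y)              # sigmoid + cross-entropy simplification
    dW2 = A1.T @ dZ2; db2 = dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * (Z1 > 0)        # ReLU derivative is an indicator
    dW1 = X.T @ dZ1; db1 = dZ1.sum(axis=0)

    # Gradient descent updates
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

preds = (A2 > 0.5).astype(float)
```

The only difference from the perceptron is the extra layer: the output error is propagated back through `W2` and masked by the ReLU derivative before updating the hidden weights.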
## Requirements

- Python 3.x
- NumPy
- Jupyter Notebook
## Getting Started

- Clone the repository:

  ```bash
  git clone https://github.com/your-username/perceptron-from-scratch.git
  cd perceptron-from-scratch
  ```

- Open the notebook:

  ```bash
  jupyter notebook notebook/perceptron.ipynb
  ```

- Follow the cells and run them step by step.
## Project Structure

```
perceptron-lab/
├── README.md
├── requirements.txt
├── notebook/
│   └── perceptron.ipynb
├── src/
│   ├── perceptron.py
│   └── mlp.py
├── data/
│   ├── testset.hdf5
│   └── trainset.hdf5
├── images/
│   └──
└── .gitignore
```
## Data

The code includes toy data generated with NumPy (e.g. XOR and linearly separable datasets). You can easily plug in any 2D dataset, such as one produced by `sklearn.datasets.make_classification` or `make_moons`.
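For example, a noisy XOR dataset can be generated with NumPy alone (`make_xor` is a hypothetical helper written for this sketch; it is not part of the repository):

```python
import numpy as np

def make_xor(n=200, noise=0.1, seed=0):
    # Hypothetical helper: noisy XOR corners with labels in {0, 1}
    rng = np.random.default_rng(seed)
    corners = rng.integers(0, 2, size=(n, 2))
    y = np.logical_xor(corners[:, 0], corners[:, 1]).astype(int)
    X = corners.astype(float) + rng.normal(0.0, noise, (n, 2))  # jitter the points
    return X, y

X, y = make_xor()
```

Any dataset shaped `(n_samples, 2)` with binary labels can be substituted the same way.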
## Author

Kodjo Jean DEGBEVI
Student in Artificial Intelligence & Big Data