# Deep Learning Fundamentals: Practice & Implementation

This repository documents my journey through deep learning. It includes code from the CampusX YouTube tutorial series along with my own implementations, experiments, and notes written as I worked through the concepts.
## 🚀 Overview

The goal of this project is to move beyond theoretical knowledge and gain hands-on experience with neural networks, optimization techniques, and common deep learning architectures.
## 📂 Repository Structure
- `practice notebooks/`: My own practice notebooks alongside those from the tutorial.
- `deep_learning_notes/`: Short summaries of the key concepts learned in each module.
## 🧠 Concepts Covered

Throughout this repository, I have implemented and explored:
- **Neural Network Basics:** Perceptrons and activation functions (ReLU, Sigmoid, Softmax).
- **Backpropagation:** Understanding how gradients flow through a network to update weights.
- **Optimization:** Stochastic Gradient Descent (SGD), Adam, and RMSProp.
- **Loss Functions:** Cross-Entropy for classification and MSE for regression.
- **Architectures:**
  - **MLP:** Multi-Layer Perceptrons.
  - **CNN:** Convolutional Neural Networks for image processing.
  - **RNN/LSTM:** Recurrent networks for sequential data and time-series analysis.
  - **Transformers:** The Transformer architecture in detail.
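As a taste of the basics above, the three activation functions can be sketched in plain NumPy (a minimal illustration, not the tutorial's exact code):

```python
import numpy as np

def relu(x):
    # zero out negative values
    return np.maximum(0.0, x)

def sigmoid(x):
    # squash values into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # shift by the max for numerical stability, then normalize to probabilities
    e = np.exp(x - np.max(x))
    return e / e.sum()
```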
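Backpropagation, SGD, and the MSE loss all show up together in one toy exercise: fitting a single weight to the made-up data y = 2x (a hand-rolled sketch with an illustrative learning rate, not library code):

```python
import numpy as np

# Hypothetical toy data: targets follow y = 2x exactly
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X

w = 0.0    # the single trainable weight
lr = 0.05  # learning rate

for _ in range(200):
    y_hat = w * X                          # forward pass
    grad = 2.0 * np.mean((y_hat - y) * X)  # d(MSE)/dw via the chain rule
    w -= lr * grad                         # gradient descent update

# w converges toward the true slope of 2
```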
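The two loss functions can likewise be written out directly (a minimal sketch; frameworks such as TensorFlow ship tested versions of both):

```python
import numpy as np

def mse(y_true, y_pred):
    # mean squared error for regression
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # categorical cross-entropy: y_true is one-hot, y_pred holds probabilities;
    # clipping avoids log(0)
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=-1))
```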
## 🛠️ Tools & Frameworks

- **Language:** Python 3.x
- **Libraries:** TensorFlow, NumPy, Pandas, Matplotlib, Scikit-learn
## 📈 Key Learnings

- **Hyperparameter Tuning:** I learned how sensitive models are to learning rates and batch sizes.
- **Overfitting:** Implemented Dropout and L2 regularization to improve model generalization.
- **Data Preprocessing:** Saw the importance of normalization and data augmentation in training robust models.
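For reference, the two regularizers mentioned above can be sketched by hand: inverted dropout and an L2 weight-decay penalty (names, seed, and constants here are illustrative, not the tutorial's code):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def dropout(x, rate=0.5, training=True):
    # inverted dropout: randomly zero units, then rescale the survivors
    # so the expected activation is unchanged at inference time
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def l2_penalty(weights, lam=1e-3):
    # lambda * sum of squared weights, added onto the data loss
    return lam * sum(np.sum(w ** 2) for w in weights)
```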
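The normalization step amounts to standardizing each feature using statistics computed on the training split (a minimal sketch):

```python
import numpy as np

def standardize(X, eps=1e-8):
    # per-feature zero-mean, unit-variance scaling;
    # in practice, fit mu/sigma on the training set and reuse them at test time
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / (sigma + eps)
```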
## 🔗 Credits

The base code and educational structure come from the CampusX "100 Days of Deep Learning" tutorial series.
## How to Use

Clone the repo:

```bash
git clone https://github.com/asim-shah-web/deep-learning-practice.git
```