diff --git a/COURSE_STRUCTURE.md b/COURSE_STRUCTURE.md
new file mode 100644
index 0000000..2a38e60
--- /dev/null
+++ b/COURSE_STRUCTURE.md
@@ -0,0 +1,253 @@
+# Course Structure Documentation
+
+## Overview
+
+The learning course has been reorganized to use **markdown files** stored in the `public/content/learn/` directory, following the same pattern as the blog posts. This makes it easy to manage content and add images.
+
+## File Structure
+
+```
+public/content/learn/
+├── README.md                          # Documentation for content management
+├── math/
+│   ├── functions/
+│   │   ├── functions-content.md
+│   │   └── [add your images here]
+│   ├── derivatives/
+│   │   ├── derivatives-content.md
+│   │   ├── derivative-graph.png
+│   │   └── tangent-line.png
+│   ├── vectors/
+│   │   ├── vectors-content.md
+│   │   └── [images included]
+│   ├── matrices/
+│   │   ├── matrices-content.md
+│   │   └── [images included]
+│   └── gradients/
+│       ├── gradients-content.md
+│       └── [images included]
+└── neural-networks/
+    ├── introduction/
+    │   ├── introduction-content.md
+    │   └── [add your images here]
+    ├── forward-propagation/
+    │   ├── forward-propagation-content.md
+    │   └── [add your images here]
+    ├── backpropagation/
+    │   ├── backpropagation-content.md
+    │   └── [add your images here]
+    └── training/
+        ├── training-content.md
+        └── [add your images here]
+```
+
+## Course Modules
+
+### Module 1: Mathematics Fundamentals
+
+1. **Functions** (`/learn/math/functions`)
+ - Linear functions
+ - Activation functions (Sigmoid, ReLU, Tanh)
+ - Loss functions
+ - Why non-linearity matters
+
+2. **Derivatives** (`/learn/math/derivatives`)
+ - What derivatives are
+ - Why they matter in AI
+ - Common derivative rules
+ - Practical examples with loss functions
+
+3. **Vectors** (`/learn/math/vectors`)
+ - What vectors are (magnitude and direction)
+ - Vector components and representation
+ - Vector operations (addition, scalar multiplication)
+ - Applications in machine learning
+
+4. **Matrices** (`/learn/math/matrices`)
+ - Matrix fundamentals
+ - Matrix operations (multiplication, transpose)
+ - Matrix transformations
+ - Role in neural networks
+
+5. **Gradients** (`/learn/math/gradients`)
+ - Understanding gradients
+ - Partial derivatives
+ - Gradient computation
+ - Gradient descent in optimization
+
+### Module 2: Neural Networks from Scratch
+
+1. **Introduction** (`/learn/neural-networks/introduction`)
+ - What neural networks are
+ - Basic architecture (input, hidden, output layers)
+ - How they learn
+ - Real-world applications
+
+2. **Forward Propagation** (`/learn/neural-networks/forward-propagation`)
+ - The forward pass process
+ - Weighted sums and activations
+ - Step-by-step numerical examples
+ - Matrix operations
+
+3. **Backpropagation** (`/learn/neural-networks/backpropagation`)
+ - The backpropagation algorithm
+ - Chain rule in action
+ - Gradient computation
+ - Common challenges (vanishing/exploding gradients)
+
+4. **Training & Optimization** (`/learn/neural-networks/training`)
+ - Gradient descent variants (SGD, mini-batch, batch)
+ - Advanced optimizers (Adam, RMSprop, Momentum)
+ - Hyperparameters and learning rate schedules
+ - Best practices and common pitfalls
+
+## Technical Implementation
+
+### Components Created
+
+1. **LessonPage Component** (`components/lesson-page.tsx`)
+ - Reusable component that loads markdown content
+ - Handles frontmatter parsing
+ - Supports navigation between lessons
+ - Similar to blog post structure
+
+2. **Page Routes** (`app/learn/...`)
+ - Each lesson has a simple page component
+ - Uses `LessonPage` with configuration
+ - Clean and maintainable
+
+### How It Works
+
+1. **Markdown files** are stored in `public/content/learn/[category]/[lesson]/`
+2. Each file has **frontmatter** with hero data (title, subtitle, tags)
+3. **Images** are placed alongside the markdown files
+4. **Page components** load the markdown using the `LessonPage` component
+5. **Images** are referenced with standard markdown image syntax (for example, `![alt text](image-name.png)`) and served from `/content/learn/...`
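The frontmatter/body split described above can be sketched in code. The following is a hypothetical helper (the actual `LessonPage` component may rely on a frontmatter library instead), shown only to illustrate how a lesson file divides into hero data and markdown content:

```typescript
// Hypothetical sketch: split a raw lesson file into its frontmatter block
// and markdown body. The real LessonPage implementation may differ.
function splitFrontmatter(raw: string): { frontmatter: string; body: string } {
  // Frontmatter is the block delimited by the leading `---` fences.
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { frontmatter: "", body: raw };
  return { frontmatter: match[1], body: match[2] };
}

// Example: a lesson file with hero frontmatter followed by markdown content.
const raw = [
  "---",
  "hero:",
  '  title: "Understanding Derivatives"',
  "---",
  "# Your content here...",
].join("\n");

const { frontmatter, body } = splitFrontmatter(raw);
console.log(frontmatter); // the hero block
console.log(body);        // the markdown body
```

The parsed frontmatter would then be handed to the hero section, while the body is rendered as markdown.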
+
+### Example Markdown Frontmatter
+
+```markdown
+---
+hero:
+  title: "Understanding Derivatives"
+  subtitle: "The Foundation of Neural Network Training"
+  tags:
+    - "Mathematics"
+    - "⏱️ 10 min read"
+---
+
+# Your content here...
+```
+
+## Adding New Content
+
+### To Add a New Lesson:
+
+1. **Create folder structure:**
+   ```bash
+   mkdir -p public/content/learn/[category]/[lesson-name]
+   ```
+
+2. **Create markdown file:**
+   ```bash
+   touch public/content/learn/[category]/[lesson-name]/[lesson-name]-content.md
+   ```
+
+3. **Add frontmatter and content** to the markdown file
+
+4. **Add images** to the same folder
+
+5. **Create page component:**
+   ```tsx
+   // app/learn/[category]/[lesson-name]/page.tsx
+   import { LessonPage } from "@/components/lesson-page";
+
+   export default function YourLessonPage() {
+     return (
+       <LessonPage
+         {/* pass this lesson's configuration here */}
+       />
+     );
+   }
+   ```