
📘 SYLLABUS — Building Intelligence

AI Systems Engineering: From Machine Learning to Generative Intelligence

🎓 Course Information

  • Program: Building Intelligence — AI Systems Engineering
  • Repository: DeepRatAI/EducativeMaterial
  • Level: Intermediate to Advanced
  • Estimated Duration: 6–9 months (10–15 hours per week)
  • Mode: Self-paced, hands-on learning
  • Language: English (with Spanish version available)
  • Last Update: November 2025


🎯 Overview

Building Intelligence is a complete journey through modern Artificial Intelligence — from classical Machine Learning to advanced Generative AI, LLMs, and Agentic Systems. The program emphasizes practical implementation (70%) and conceptual depth (30%), blending theory, mathematics, and real-world projects.

Who Is This For?

  • Developers aiming to specialize in ML/AI
  • Data scientists expanding into deep learning
  • Technical professionals curious about Generative AI
  • Computer science and engineering students
  • Researchers implementing AI systems

Prerequisites

Required Knowledge:

  • ✅ Intermediate Python (functions, classes, modules)
  • ✅ Basic math (algebra, calculus, probability)
  • ✅ Familiarity with NumPy, Pandas, and Jupyter Notebooks

Recommended Knowledge:

  • 📊 Descriptive and inferential statistics
  • 📈 Data visualization (Matplotlib, Seaborn)
  • 🐍 Object-oriented programming
  • 🔧 Basic Git and GitHub

📚 Program Structure

The program is divided into 5 Phases with 15 progressive Modules:

Phase 1: Foundations of Machine Learning and Neural Thinking

Fundamentals of classical ML and the first steps into neural reasoning with Keras.

Phase 2: Building Neural Intelligence with PyTorch

Mastering PyTorch and deep architectures for vision and language.

Phase 3: Generative Intelligence — From Data to Language

Generative architectures, Transformers, and language modeling.

Phase 4: Fine-Tuning and Adaptation Engineering

Modern fine-tuning and optimization techniques for LLMs.

Phase 5: Agentic AI and Cognitive Systems

RAG systems, LangChain, AI Agents, and full end-to-end capstone projects.


📖 Detailed Module Breakdown


Module 1: Teaching Machines to Think — The Python Approach

Duration: 4–5 weeks | Level: Intermediate

Learn the language of data and build your first predictive systems in Python. Cover supervised and unsupervised learning, model evaluation, and hands-on implementations with scikit-learn.
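For a taste of what this module covers, here is a minimal supervised-learning example in scikit-learn; the toy dataset (hours studied vs. passing an exam) is invented purely for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Toy data: hours studied -> passed the exam (invented for illustration)
X = [[0.5], [1.0], [1.5], [8.0], [9.0], [9.5]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)                       # learn the decision boundary

print(model.predict([[0.8], [9.2]]))  # predicted classes for new inputs
print(model.score(X, y))              # accuracy on the training data
```

The same fit/predict/score pattern carries over to every scikit-learn estimator covered in the module.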


Module 2: Neural Foundations — Learning Through Keras

Duration: 3–4 weeks | Level: Intermediate

Understand how neural networks learn. Explore layers, activations, losses, and optimizers. Build regression and classification models with Keras and experiment with CNNs and RNNs.
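Conceptually, each Dense layer in Keras is just a matrix multiply plus a bias, followed by an activation. A sketch in plain NumPy (this is not the Keras API; shapes and values are made up):

```python
import numpy as np

def dense(x, W, b, activation=None):
    """One fully connected layer: output = activation(x @ W + b)."""
    z = x @ W + b
    if activation == "relu":
        return np.maximum(0.0, z)  # ReLU zeroes out negative values
    return z                       # linear (no) activation

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # batch of 4 samples, 3 features each
W = rng.normal(size=(3, 2))        # weights: 3 inputs -> 2 units
b = np.zeros(2)

h = dense(x, W, b, activation="relu")
print(h.shape)  # (4, 2)
```

Keras hides this arithmetic behind `Dense(units, activation=...)`, but the module builds the intuition for what those layers compute.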


Module 3: Deep Vision and Sequential Intelligence — TensorFlow in Action

Duration: 4–5 weeks | Level: Advanced

Combine TensorFlow and Keras to build custom models, advanced CNNs, and Transformer architectures. Includes projects in medical imaging, time series forecasting, and image generation.


Module 4: PyTorch Fundamentals — Thinking in Tensors

Duration: 3–4 weeks | Level: Intermediate

Master PyTorch fundamentals: tensors, autograd, optimizers, and training loops. Implement your first neural networks from scratch and train them with gradient descent.
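The core PyTorch pattern taught here (forward pass, loss, backward pass, optimizer step) fits in a few lines. This sketch fits a line to synthetic data and is illustrative only:

```python
import torch

# Synthetic data: y = 2x + 1 plus a little noise (invented for illustration)
torch.manual_seed(0)
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 2 * x + 1 + 0.05 * torch.randn_like(x)

model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for step in range(200):
    optimizer.zero_grad()         # clear gradients from the previous step
    loss = loss_fn(model(x), y)   # forward pass + loss
    loss.backward()               # autograd computes dloss/dparameters
    optimizer.step()              # gradient descent update

print(model.weight.item(), model.bias.item())  # should approach 2 and 1
```

Every training loop in the rest of the program is a variation on these four lines inside the loop.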


Module 5: Building Deep Intelligence with PyTorch

Duration: 4–5 weeks | Level: Advanced

Design deeper networks with PyTorch, integrate regularization and normalization techniques, and build CNNs for complex vision tasks with transfer learning.


Module 6: Capstone Project I — Applied Deep Learning Systems

Duration: 3–4 weeks | Level: Advanced

Your first integrative project: build a geospatial image classification system using CNNs, Vision Transformers, and transfer learning pipelines.


Module 7: Foundations of Generative Intelligence and LLMs

Duration: 4–5 weeks | Level: Advanced

Explore the world of generative architectures (RNNs, VAEs, GANs, Diffusion Models, Transformers) and learn how to prepare and tokenize data for training large language models.


Module 8: Language Understanding and Representation Learning

Duration: 4–5 weeks | Level: Advanced

Build word embeddings (Word2Vec, GloVe, FastText), implement RNNs for text, and fine-tune BERT for real-world NLP tasks like sentiment analysis and entity recognition.


Module 9: Transformers and Generative Language Modeling

Duration: 5–6 weeks | Level: Advanced

Implement a complete Transformer from scratch, develop GPT-style and T5-style models, and explore advanced decoding techniques such as beam search, nucleus sampling, and temperature scaling.
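Two of the decoding techniques named above, temperature scaling and nucleus (top-p) sampling, can be sketched in plain NumPy (the logits below are invented; a real decoder would take them from a language model):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature < 1 sharpens the distribution; > 1 flattens it."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def nucleus_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative mass reaches top_p."""
    order = np.argsort(probs)[::-1]            # most probable first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1   # number of tokens to keep
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()           # renormalize the kept mass

logits = [3.0, 1.5, 1.0, -2.0]                 # invented next-token logits
probs = softmax(logits, temperature=0.7)
print(nucleus_filter(probs, top_p=0.9))        # low-probability tail zeroed out
```

Sampling from the filtered distribution (instead of taking the argmax) is what gives nucleus sampling its diversity without admitting the improbable tail.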


Module 10: Fine-Tuning Transformers — Engineering Adaptive Models

Duration: 4–5 weeks | Level: Expert

Learn LoRA, QLoRA, and other Parameter-Efficient Fine-Tuning methods. Optimize training memory, speed, and performance with the Hugging Face ecosystem.
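The core LoRA idea covered in this module, freezing the pretrained weight W and learning only a low-rank update ΔW = (α/r)·BA, reduces to a few lines of arithmetic. This sketch is in plain NumPy rather than the Hugging Face PEFT API, and the shapes are invented:

```python
import numpy as np

d, k, r = 64, 64, 4             # layer dimensions and LoRA rank (r << d, k)
alpha = 8                       # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))            # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01     # trainable, small random init
B = np.zeros((d, r))                   # trainable, zero init: no change at start

def lora_forward(x):
    """y = x W^T + (alpha/r) * x (BA)^T ; only A and B receive gradients."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, k))
print(np.allclose(lora_forward(x), x @ W.T))  # True: B starts at zero

# Trainable parameters per layer: r*(d+k) instead of d*k
print(r * (d + k), "vs", d * k)
```

With B initialized to zero the adapted model starts out identical to the base model, and only the small A and B matrices are updated during fine-tuning, which is where the memory savings come from.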


Module 11: Alignment and Optimization — RLHF, DPO & Beyond

Duration: 4–5 weeks | Level: Expert

Implement Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO). Align model behavior with human preferences and evaluate model ethics and safety.


Module 12: Retrieval-Augmented Generation — Building Knowledge-Aware Systems

Duration: 3–4 weeks | Level: Expert

Design RAG systems that combine LLMs with vector databases. Implement retrievers, rankers, and evaluators to build knowledge-driven generative systems.
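At its core, the retriever in a RAG system embeds the query and ranks documents by vector similarity. A toy cosine-similarity retriever in NumPy (the document names and embedding vectors are made up; a real system would use an embedding model and a vector database):

```python
import numpy as np

# Invented 3-dimensional "embeddings" for three documents
docs = {
    "intro_to_python.md": np.array([0.9, 0.1, 0.0]),
    "lora_finetuning.md": np.array([0.1, 0.9, 0.2]),
    "rag_pipelines.md":   np.array([0.0, 0.3, 0.9]),
}
query = np.array([0.1, 0.8, 0.3])   # e.g. "how do I fine-tune with LoRA?"

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

ranked = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
print(ranked[0])  # best-matching document, passed to the LLM as context
```

The module builds on this pattern with real embedding models, rankers, and evaluation of retrieval quality.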


Module 13: LangChain and Agentic Intelligence — Orchestrating AI Systems

Duration: 3–4 weeks | Level: Expert

Learn LangChain to orchestrate AI Agents, tools, and memory. Develop multi-agent workflows and cognitive architectures for research, automation, and conversational intelligence.


Module 14: Advanced RAG Engineering and Production Intelligence

Duration: 3–4 weeks | Level: Expert

Implement HyDE, Self-RAG, and Corrective RAG for enterprise-grade reliability. Optimize retrieval accuracy, latency, and scalability for production deployments.


Module 15: Capstone Project II — End-to-End Generative AI Application

Duration: 4–6 weeks | Level: Expert

Final project: build a full production-ready Generative AI system integrating LLMs, RAG, LangChain, and AI Agents. Deploy, document, and present your complete intelligent application.


📊 Teaching Methodology

The program follows a Project-Based Learning approach: every lesson blends theory with implementation. Each module includes:

  • 🧠 Conceptual Notebooks (theory)
  • 💻 Hands-on Exercises (practice)
  • ✏️ Challenges (self-evaluation)
  • ✅ Complete Solutions (reference and explanation)

📅 Suggested Schedule

  • Full-Time (6 months): 40h/week, Modules 1–15
  • Part-Time (9 months): 15h/week, Modules 1–15
  • Flexible (12 months): 10h/week, Modules 1–15

All notebooks are optimized for the Google Colab free tier (T4 GPU); no local setup is required.


🧠 Learning Outcomes

By the end of Building Intelligence, you will be able to:

  • Implement end-to-end ML and DL systems
  • Fine-tune and deploy LLMs efficiently
  • Design and evaluate RAG architectures
  • Build and orchestrate AI Agents with LangChain
  • Communicate and document your projects professionally

📄 License & Acknowledgments

This educational repository is released under the MIT License for open, non-commercial use. It includes adapted materials from IBM Skills Network, Hugging Face, PyTorch Foundation, and open-source communities.




🎯 Career Outcomes

Upon completion, you'll be prepared for roles such as:

  • 🤖 Machine Learning Engineer
  • 🧠 Deep Learning Researcher
  • 💬 NLP Engineer
  • 🎨 Generative AI Specialist
  • 🧩 AI Systems Engineer

You will have the skills to design, train, and deploy intelligent systems that integrate reasoning, adaptation, and creativity.


Last Updated: November 2025 · Version: 1.0 · Maintained by: DeepRatAI

"From models to minds — let's make intelligence open again." — Gonzalo Romero (DeepRat)