Welcome to the LangChain LLM tutorial! This guide is designed to help you learn LangChain step by step, at a slow and deliberate pace. Rushing often leads to mistakes, and we want to ensure you have a solid understanding at every stage.
LangChain is a powerful framework for building applications using large language models (LLMs). However, due to its complexity, it’s easy to become overwhelmed if you try to move too quickly. This tutorial is intentionally structured to:
- Emphasize gradual learning.
- Break down complex topics into manageable steps.
- Provide clear examples and exercises.
We will use LangChain only: no LangGraph, no LangSmith
(...at the time of writing this I don't even know what they do 😂)
- **Talk to the LLM** (`01.llm_template.ipynb`): Ask simple questions to an LLM (sketch below).
- **Vectors and embeddings** (`02.embeddings_vectors.ipynb`): How to create embeddings we can use for RAG, retrieval-augmented generation (sketch below).
- **Classification** (`03.classify.ipynb`): Classify a message using a Pydantic model and `ChatPromptTemplate` (sketch below).
- **Extraction** (`04.extraction.ipynb`): The same approach, applied to structured data extraction.
- **Tool calling** (`05.tool_calling.ipynb`): Tool calling, also known as function calling, enables AI models to interact with systems like APIs or databases by responding in a schema-specific format (sketch below).
- **Short-term memory** (`06.short_memory.ipynb`): Build a chatbot with short-term memory using `ChatOllama` (sketch below).
- **Long-term memory** (`07.long_memory.ipynb`): Build a chatbot that stores and retrieves past conversations in a database, so it can give personalized, context-aware responses.
- **Runnable** (`08.runnable.ipynb`): Demonstrate the Runnable concept with a pipeline of chained transformations: adding context, generating a response with a language model, and post-processing the output (sketch below).
- **Agent** (`09.agent.ipynb`): An agent dynamically selects tools and actions based on the input, offering flexibility, efficiency, and scalability, whereas a pipeline processes inputs linearly through predefined steps without decision-making.
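Here is a minimal sketch of the first idea, asking a question through a prompt template. This is not the notebook's exact code; the model name (`llama3.2`) and server URL (`http://ollama:11434`) simply come from my setup described below.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

# Connect to the local Ollama server
llm = ChatOllama(model="llama3.2", base_url="http://ollama:11434")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("human", "{question}"),
])

# The | operator chains the prompt into the model
chain = prompt | llm
answer = chain.invoke({"question": "What is LangChain in one sentence?"})
print(answer.content)
```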
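For embeddings, a sketch along the same lines, assuming the `mxbai-embed-large` model from my list below:

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="mxbai-embed-large", base_url="http://ollama:11434")

# Each text becomes a fixed-size vector; similar texts end up close together
store = InMemoryVectorStore.from_texts(
    ["Cats are small domestic felines.", "Paris is the capital of France."],
    embedding=embeddings,
)

# The heart of RAG: retrieve the texts most similar to a query
results = store.similarity_search("Tell me about pets", k=1)
print(results[0].page_content)
```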
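Classification with a Pydantic model, in the same hedged spirit (it relies on `with_structured_output`, which needs a model with tool/JSON support such as `llama3.2`):

```python
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

class Classification(BaseModel):
    sentiment: str = Field(description="positive, neutral or negative")
    language: str = Field(description="language of the text")

llm = ChatOllama(model="llama3.2", base_url="http://ollama:11434")
prompt = ChatPromptTemplate.from_template(
    "Extract the requested properties from this text:\n\n{text}"
)

# The model is forced to answer as a Classification object
chain = prompt | llm.with_structured_output(Classification)
result = chain.invoke({"text": "J'adore ce tutoriel !"})
print(result.sentiment, result.language)
```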
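Tool calling in a nutshell; `get_temperature` is a made-up dummy tool, not something from the notebooks:

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def get_temperature(city: str) -> str:
    """Return the current temperature in a city (dummy data)."""
    return f"It is 21°C in {city}."

llm = ChatOllama(model="llama3.2", base_url="http://ollama:11434")
llm_with_tools = llm.bind_tools([get_temperature])

# The model does not run the tool itself: it replies with a structured
# tool call that our own code is expected to execute
msg = llm_with_tools.invoke("What's the temperature in Lille?")
for call in msg.tool_calls:
    print(call["name"], call["args"])
```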
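Short-term memory boils down to replaying the conversation history to the model on every turn. A minimal sketch using `RunnableWithMessageHistory` (one possible approach; the notebook may do it differently):

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2", base_url="http://ollama:11434")

# One history object per session id, kept in plain memory
histories = {}

def get_history(session_id: str):
    if session_id not in histories:
        histories[session_id] = InMemoryChatMessageHistory()
    return histories[session_id]

chatbot = RunnableWithMessageHistory(llm, get_history)
config = {"configurable": {"session_id": "demo"}}

chatbot.invoke("My name is Sam.", config=config)
reply = chatbot.invoke("What is my name?", config=config)
print(reply.content)  # answered from the stored history
```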
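And the Runnable pipeline idea: three chained steps, where the post-processing step (uppercasing) is just a placeholder of my own:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_ollama import ChatOllama

# Step 1: add context around the raw input
add_context = RunnableLambda(lambda q: {"question": q, "audience": "beginners"})

# Step 2: generate a response with the language model
prompt = ChatPromptTemplate.from_template("Answer for {audience}: {question}")
llm = ChatOllama(model="llama3.2", base_url="http://ollama:11434")

# Step 3: post-process the output (here: just uppercase it)
shout = RunnableLambda(lambda text: text.upper())

pipeline = add_context | prompt | llm | StrOutputParser() | shout
print(pipeline.invoke("What is an embedding?"))
```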
To be fully independent and able to practice anywhere, I'm using two small Docker containers:
- One for Ollama with a couple of models (see the list below)
- One as my development container, with Python 3.12 and the LangChain libraries I need
The models I use (output of `ollama list`):

```
NAME                        ID              SIZE      MODIFIED
phi3:14b                    cf611a26b048    7.9 GB    2 days ago
mxbai-embed-large:latest    468836162de7    669 MB    5 days ago
mistral:latest              f974a74358d6    4.1 GB    9 days ago
llama3.2:latest             a80c4f17acd5    2.0 GB    11 days ago
```
Then, using VS Code, you can "Attach Visual Studio Code" to the dev container and run all the examples with the Jupyter extension
... and you're the king of the world
- We'll use Ollama to avoid the costs associated with hosted LLMs like GPT. No API key or credit card needed.
- Results may be a bit less accurate than GPT's, but we want to learn first.
- The installed version of LangChain should be 0.3+ (important).
- Ollama is easy to install using Docker (better to have a GPU, even a small one).
- I provide an example Dockerfile and docker-compose.yaml using Python 3.12 (I still have to test this...).
- From the container running Jupyter, you can then use an environment variable OLLAMA_SERVER (e.g. OLLAMA_SERVER='http://ollama:11434') to reach Ollama; see the sketch after this list.
- The required Python packages listed in `requirements.txt` are installed at build time by the Dockerfile. A simple `docker-compose up -d` should do the trick.
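Here is what reading that variable can look like from the Jupyter container; the `localhost` fallback is just my assumption for running outside Docker:

```python
import os
from langchain_ollama import ChatOllama

# Reach the Ollama container through OLLAMA_SERVER, with a local fallback
base_url = os.environ.get("OLLAMA_SERVER", "http://localhost:11434")
llm = ChatOllama(model="llama3.2", base_url=base_url)
print(llm.invoke("ping").content)
```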
- Take Your Time: Resist the urge to skip steps or rush through sections.
- Experiment: Modify the examples and test your ideas.
- Ask Questions: Engage with the LangChain community if you encounter issues.
This is a community-driven effort, and your feedback is valuable. If you find errors or have suggestions for improvement, please submit an issue or pull request.
Happy Learning!