Update community/autonomous_5g_slicing_lab with a new Grafana UI #320

Open · wants to merge 1 commit into base: main
@@ -45,10 +45,25 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "fffa0783-c270-4ed6-9b4b-1f5040341f4f",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Enter your API Key: ········\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"✅ API Key successfully saved to config.yaml\n"
]
}
],
"source": [
"import yaml\n",
"from getpass import getpass\n",
@@ -96,13 +111,15 @@
"source": [
"### 5 Agentic LLMs for 5G Section\n",
"\n",
"Once you have the **5G Lab GitHub** repository cloned and the API Key and Kinetica passwords configured, you can proceed to the **Agentic LLMs** section. This part of the lab demonstrates how to deploy an agentic workflow to monitor network performance and dynamically adjust bandwidth allocation.\n",
"Once you have the **5G Lab GitHub** repository cloned and the API Key and Kinetica passwords configured, you can proceed to install the lab by following these three sections.\n",
"\n",
"- **Part A – Setup of 5G Lab environment** \n",
" Located at: `./autonomous5g_slicing_lab/llm-slicing-5g-lab/DLI_Lab_Setup.ipynb` \n",
" Provides instructions to set up a 5G Network Software Stack in your environemnt.\n",
"\n",
"- **Part B – 5G Network Agent Workflow** \n",
"- **Part B – Setup Graphana environment** \n",
" Located at: `./autonomous5g_slicing_lab/agentic-llm/README_GRAPHANA.md` \n",
" Provides instructions to set up a Graphana environment for visualiztaion.\n",
"- **Part C – 5G Network Agent Workflow** \n",
" Located at: `./autonomous5g_slicing_lab/agentic-llm/agentic_pipeline_DLI.ipynb` \n",
" Explains the agentic pipeline in **LangGraph** for managing 5G network slicing and bandwidth allocation.\n"
]
9 changes: 8 additions & 1 deletion community/autonomous_5g_slicing_lab/README.md
@@ -26,7 +26,7 @@ To run the lab, ensure you have:
- Kinetica database access (credentials in `.env` file).
- **Dependencies**: Listed in `requirements.txt` in the respective directories.

## Lab Setup
## 5G Lab Setup

The lab setup configures a fully functional 5G network simulation environment. To set up the lab, first run `autonomous_5g_slicing_lab/Automatic_5G_Network_Lab_Setup.ipynb` to configure your environment, then run the Jupyter notebook located at `autonomous_5g_slicing_lab/llm-slicing-5g-lab/DLI_Lab_Setup.ipynb`. The notebook automates the following steps:

@@ -44,6 +44,12 @@ In summary, to start your lab, you need to follow these steps:
1. Open `autonomous_5g_slicing_lab/Automatic_5G_Network_Lab_Setup.ipynb` in the main directory and set up your environment keys.
2. Open `autonomous_5g_slicing_lab/llm-slicing-5g-lab/DLI_Lab_Setup.ipynb` and set up your 5G network environment.

## Running the Grafana Dashboard

This lab uses Grafana for visualization. Install the Grafana environment as follows:
1. Open `autonomous_5g_slicing_lab/agentic-llm/README_GRAPHANA.md` and follow the instructions.
2. Verify the Grafana environment and save the `GRAPHANA_DASHBOARD` variable in the `autonomous_5g_slicing_lab/agentic-llm/config.yaml` file (a quick verification sketch follows below).
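
The following is a minimal sketch, not part of the lab scripts, of how you might confirm the value was saved. It assumes the key is named `GRAPHANA_DASHBOARD`, that PyYAML is installed, and that you run it from the repository root.

```python
# Hypothetical check: confirm the dashboard identifier is present in config.yaml.
# Path is relative to the repository root; adjust if you run it from elsewhere.
import yaml

with open("autonomous_5g_slicing_lab/agentic-llm/config.yaml") as f:
    config = yaml.safe_load(f) or {}

dashboard_id = config.get("GRAPHANA_DASHBOARD")
print("GRAPHANA_DASHBOARD =", dashboard_id if dashboard_id else "NOT SET - update config.yaml")
```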

## Running the Agentic Workflow

Once the lab is set up, you can run the agentic workflow to monitor network performance and dynamically adjust bandwidth allocation. The workflow uses a LangGraph-based agent to analyze Kinetica database logs and optimize slice configurations.
@@ -84,6 +90,7 @@ After running the shutdown notebook, you can restart the lab by re-running `DLI_
- `.env`: Environment variables (e.g., Kinetica credentials).
- `logs/`: Directory for log files (created during execution).
- `autonomous_5g_slicing_lab/agentic-llm/`:
- `README_GRAPHANA.md`: Markdown file to install and set up Grafana.
- `agentic_pipeline-DLI.ipynb`: Notebook for running the LangGraph agent UI.
- `requirements.txt`: Python dependencies for the agentic workflow.

@@ -0,0 +1,90 @@
# 5G Network Agent: Grafana & InfluxDB Integration Guide

This guide explains how to set up and use the enhanced real-time metrics dashboard for the 5G Network Agent, using **Grafana** and **InfluxDB** for professional, interactive visualization. It is tailored for users starting from the original [NVIDIA/GenerativeAIExamples](https://github.com/NVIDIA/GenerativeAIExamples/tree/main) repository, and reflects all recent changes and improvements.

---

## 🚀 Overview: What's New

- **Grafana Dashboards**: Interactive, real-time time-series visualizations
- **InfluxDB**: High-performance time-series database for metrics storage
- **Automated Docker Setup**: One-command startup for Grafana & InfluxDB
- **Streamlit UI**: Embedded Grafana dashboard in the main app
- **Test & Utility Scripts**: Easy verification and troubleshooting

---

## 📁 File Structure (Grafana Integration)

```
agentic-llm/
├── chatbot_DLI.py # Main Streamlit app (Grafana embedded)
├── influxdb_utils.py # InfluxDB client utility (metrics API)
├── test_influxdb.py # Script to test InfluxDB connectivity
├── docker-compose.yml # Docker Compose for Grafana & InfluxDB
├── start_grafana_services.sh # Linux/Mac startup script
├── start_grafana_services.bat # Windows startup script
├── requirements_grafana.txt # Python dependencies
├── grafana/
│ ├── provisioning/
│ │ ├── datasources/
│ │ │ └── influxdb.yaml # InfluxDB datasource config
│ │ └── dashboards/
│ │ └── dashboard.yaml # Dashboard provisioning config
│ └── dashboards/
│ └── 5g-metrics-dashboard.json # Dashboard definition (edit here)
├── config.yaml # App configuration (log file paths, etc.)
└── README_GRAFANA.md # This guide
```

---

## 🛠️ Setup Instructions

### 1. Prerequisites
- **Docker Compose** (install if not already present):
```bash
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version
```

- **Python 3.8+**

### 2. Install Python Dependencies

```bash
pip install -r requirements_grafana.txt
```

### 3. Start Grafana & InfluxDB Services

**On Linux/Mac:**
```bash
chmod +x start_grafana_services.sh
./start_grafana_services.sh
```

**On Windows:**
```cmd
start_grafana_services.bat
```

This will:
- Stop any existing containers
- Start Grafana (http://localhost:9002) and InfluxDB (http://localhost:9001)
- Provision the dashboard and datasource automatically

### 4. Verify Services
- Grafana services run on the following ports; make sure these ports are exposed in your environment. A connectivity check for InfluxDB is sketched below.
- **Grafana**: [http://localhost:9002](http://localhost:9002) (press "Skip" to bypass username/password authentication)
- **InfluxDB**: [http://localhost:9001](http://localhost:9001)
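
As a quick sanity check (independent of the provided `test_influxdb.py` script), the snippet below is a minimal sketch of verifying InfluxDB connectivity with the `influxdb-client` Python package. The token and org values are placeholders; use the credentials configured for your InfluxDB instance.

```python
# Minimal connectivity check for the local InfluxDB instance (assumed at port 9001).
# INFLUX_TOKEN and INFLUX_ORG are hypothetical placeholders; replace with your values.
from influxdb_client import InfluxDBClient

INFLUX_URL = "http://localhost:9001"
INFLUX_TOKEN = "<your-influxdb-token>"
INFLUX_ORG = "<your-org>"

with InfluxDBClient(url=INFLUX_URL, token=INFLUX_TOKEN, org=INFLUX_ORG) as client:
    # ping() returns True when the server is reachable
    if client.ping():
        print("InfluxDB is reachable at", INFLUX_URL)
    else:
        print("Could not reach InfluxDB - check that the container is running")
```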

### 5. Get the Dashboard Variable from Grafana
- Go to **Grafana**: [http://localhost:9002](http://localhost:9002)
- Copy the letter combination for your created dashboard from the URL.
  E.g. for https://9002-3yqhu0mm9.brevlab.com/?orgId=1&from=now-6h&to=now&timezone=browser the letter combination is `3yqhu0mm9`; store it in `config.yaml` under `/autonomous_5g_slicing_lab/agentic-llm` (a scripted alternative is sketched after the screenshot below).
- See the picture below showing how to find this identifier.

![Finding the dashboard identifier in Grafana](/agentic-llm/images/graphana.png)
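
If you prefer to set the value from a script, here is a minimal sketch using PyYAML. It assumes the key is named `GRAPHANA_DASHBOARD` (as referenced in the main README) and that you run it from the `agentic-llm` directory; adjust the path and key to match your setup.

```python
# Sketch: store the Grafana dashboard identifier in config.yaml.
# The key name GRAPHANA_DASHBOARD follows the main README; adjust if yours differs.
import yaml

CONFIG_PATH = "config.yaml"   # in autonomous_5g_slicing_lab/agentic-llm
dashboard_id = "3yqhu0mm9"    # replace with the letter combination from your URL

with open(CONFIG_PATH) as f:
    config = yaml.safe_load(f) or {}

config["GRAPHANA_DASHBOARD"] = dashboard_id

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(config, f)

print(f"Saved GRAPHANA_DASHBOARD={dashboard_id} to {CONFIG_PATH}")
```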

@@ -0,0 +1,225 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Nvidia Logo](./images/nvidia.png) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5G Network Configuration Agent\n",
"\n",
"### Overview \n",
"This notebook outlines how the 5G network configuration works—how it detects SDU buffer full errors and reconfigures the network. We will use concepts demonstrated in [intro_agents.ipynb](intro_agents.ipynb) to build this agent using LangGraph and LangChain.\n",
"\n",
"### Table of Contents\n",
"1. Architecture Overview\n",
"2. File Descriptions\n",
"3. Define and run the Agent\n",
"4. Streamlit UI implementation\n",
"\n",
"### 1. Architecture Overview\n",
"\n",
"![Architecture diagram](./images/architecture_diagram.png) \n",
"\n",
"#### Key Components: \n",
"\n",
"**Agents**:\n",
"1. **Monitoring Agent**: \n",
" - Continuously reads gNodeB logs from `../llm-slicing-5g-lab/logs/gNodeB.log`. \n",
" - Analyzes each chunk for SDU buffer full errors. \n",
" - Routes to the Configuration Agent if an error is detected. \n",
"\n",
"2. **Configuration Agent**: \n",
" - Called when an error is detected in the gNodeB logs. \n",
" - Has two tools bound to it: `get_packet_loss` and `reconfigure_network`. \n",
" - First, retrieves the latest packet loss logs from the database using the `get_packet_loss` tool. \n",
" - Analyzes the logs and determines which UE needs more bandwidth. Based on this, it assigns higher bandwidth to the selected UE. \n",
" - Calls the `reconfigure_network` tool to use xAPP and reconfigure the network. \n",
" - Returns control to the Monitoring Agent to continue monitoring. \n",
"\n",
"**Tools**:\n",
"1. **`get_packet_loss`**: Queries the database and retrieves a DataFrame containing per-UE packet loss statistics. \n",
"2. **`reconfigure_network`**: Calls the xAPP with optimized slicing parameters to adjust network configurations. \n",
"\n",
"#### Example Error Logs \n",
"\n",
"```md\n",
"[RLC] /home/nvidia/llm-slicing-5g-lab/openairinterface5g/openair2/LAYER2/nr_rlc/nr_rlc_entity_am.c:1769:nr_rlc_entity_am_recv_sdu: warning: 195 SDU rejected, SDU buffer full\n",
" [NR_MAC] Frame.Slot 896.0\n",
" UE RNTI c1f9 CU-UE-ID 1 in-sync PH 0 dB PCMAX 0 dBm, average RSRP -44 (16 meas)\n",
" UE c1f9: UL-RI 1, TPMI 0\n",
" UE c1f9: dlsch_rounds 23415/1/0/0, dlsch_errors 0, pucch0_DTX 0, BLER 0.00000 MCS (0) 28\n",
" UE c1f9: ulsch_rounds 8560/0/0/0, ulsch_errors 0, ulsch_DTX 0, BLER 0.00000 MCS (0) 9\n",
" UE c1f9: MAC: TX 177738642 RX 612401 bytes\n",
" UE c1f9: LCID 1: TX 1022 RX 325 bytes\n",
"```\n",
" \n",
"### 2. Files to Refer \n",
"\n",
"- **[agents.py](./agents.py)** – Contains code for Monitoring and Configuration Agents. \n",
"- **[tools.py](./tools.py)** – Implements the tools used by the agents. \n",
"- **[langgraph_agent.py](./langgraph_agent.py)** – Defines the LangGraph agent workflow. \n",
"- **[chatbot_DLI.py](./chatbot_DLI.py)** – Implementation for the Streamlit UI. \n",
"\n",
"\n",
"#### Expected Output \n",
"\n",
"By the end of this notebook, you will have: \n",
"- A functional LangGraph workflow connected to the 5g slicing lab, that detects network issues and triggers reconfiguration. \n",
"- A pipeline capable of analyzing logs, querying packet loss data, and adjusting slicing parameters dynamically. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating a LangGraph Workflow "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have defined two agents—the **Monitoring Agent** and the **Configuration Agent**—as combinations of a model and the tool(s) they have access to. This is achieved using LangGraph's `create_react_agent()` function, which creates an agent that employs ReAct prompting.\n",
"\n",
"**States in Graph** \n",
" - A state represents the evolving context of execution, maintaining data across multiple steps. \n",
" - It stores intermediate results, tool outputs, and agent decisions. \n",
" - States enable reasoning over past interactions, ensuring continuity in the workflow. \n",
"\n",
"**Nodes and Edges in LangGraph** \n",
" - **Nodes** represent agents, tool calls, or decision steps in the workflow. \n",
" - **Edges** define the flow between nodes, determining execution order based on conditions. \n",
" - This structure allows dynamic decision-making and parallel execution where needed. \n",
"\n",
"Refer [this](https://langchain-ai.github.io/langgraph/concepts/low_level/) for more information.\n",
"\n",
"The workflow has been defined in [langgraph_agent.py](langgraph_agent.py), please refer it for implementation details. "
]
},
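{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Illustrative sketch only.** The next cell is a simplified, hypothetical example of wiring two ReAct agents into a LangGraph `StateGraph`; it is not the lab's implementation. The model, tools, and routing are placeholders, and the real workflow (including conditional routing on detected errors) lives in [langgraph_agent.py](langgraph_agent.py)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical, simplified sketch of the two-agent workflow.\n",
"# See langgraph_agent.py for the real implementation used in this lab.\n",
"from langchain_core.tools import tool\n",
"from langgraph.graph import StateGraph, MessagesState, START, END\n",
"from langgraph.prebuilt import create_react_agent\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA  # placeholder model provider\n",
"\n",
"@tool\n",
"def read_gnb_logs() -> str:\n",
"    \"\"\"Placeholder: return the latest chunk of gNodeB logs.\"\"\"\n",
"    return \"warning: 195 SDU rejected, SDU buffer full\"\n",
"\n",
"@tool\n",
"def get_packet_loss() -> str:\n",
"    \"\"\"Placeholder: return per-UE packet loss statistics.\"\"\"\n",
"    return \"UE1: 12% loss, UE2: 1% loss\"\n",
"\n",
"@tool\n",
"def reconfigure_network(ue_id: str) -> str:\n",
"    \"\"\"Placeholder: call the xApp to give more bandwidth to ue_id.\"\"\"\n",
"    return f\"Reconfigured slice for {ue_id}\"\n",
"\n",
"llm = ChatNVIDIA(model=\"meta/llama-3.1-70b-instruct\")  # placeholder model name\n",
"monitoring_agent = create_react_agent(llm, tools=[read_gnb_logs])\n",
"configuration_agent = create_react_agent(llm, tools=[get_packet_loss, reconfigure_network])\n",
"\n",
"workflow = StateGraph(MessagesState)\n",
"workflow.add_node(\"monitoring_agent\", monitoring_agent)\n",
"workflow.add_node(\"configuration_agent\", configuration_agent)\n",
"workflow.add_edge(START, \"monitoring_agent\")\n",
"workflow.add_edge(\"monitoring_agent\", \"configuration_agent\")  # the lab routes here conditionally\n",
"workflow.add_edge(\"configuration_agent\", END)\n",
"graph = workflow.compile()"
]
},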
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Running the Streamlit User Interface\n",
"\n",
"We provide a predefined Streamlit-based user interface for monitoring the system in real time. This interface allows users to interact with the monitoring software efficiently and gain insights into its operation.\n",
"\n",
"#### About Streamlit:\n",
"[Streamlit](https://streamlit.io/) is a lightweight Python framework for building interactive web applications with minimal effort. It enables users to create and deploy data-driven dashboards and tools using simple Python scripts.\n",
"\n",
"#### Features of the UI:\n",
"- Real-time Log Monitoring – View live logs generated by the agent.\n",
"- Packet Loss Visualization – Monitor real-time packet loss of UE1 and UE2 using dynamic charts.\n",
"- Agent Control – Start and stop the agent directly through the UI."
]
},
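{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Illustrative sketch only.** The next cell shows, with dummy data, the kind of Streamlit constructs the dashboard relies on (buttons for agent control, a live log view, and a packet-loss chart). It is a hypothetical toy example; the actual UI is implemented in [chatbot_DLI.py](./chatbot_DLI.py)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical toy dashboard; the real UI is implemented in chatbot_DLI.py.\n",
"# Save as toy_dashboard.py and run with: streamlit run toy_dashboard.py\n",
"import pandas as pd\n",
"import streamlit as st\n",
"\n",
"st.title(\"5G Network Agent - Toy Dashboard\")\n",
"\n",
"# Agent control buttons (placeholders: wire these to your agent process)\n",
"col1, col2 = st.columns(2)\n",
"start = col1.button(\"Start agent\")\n",
"stop = col2.button(\"Stop agent\")\n",
"\n",
"# Live log view (placeholder text: read the real agent.log instead)\n",
"st.subheader(\"Agent logs\")\n",
"st.text(\"[monitoring_agent] SDU buffer full detected, handing off to configuration_agent\")\n",
"\n",
"# Packet-loss chart with dummy data for UE1 and UE2\n",
"st.subheader(\"Packet loss\")\n",
"df = pd.DataFrame({\"UE1\": [12, 10, 3, 1], \"UE2\": [1, 2, 2, 1]})\n",
"st.line_chart(df)"
]
},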
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"application/javascript": [
"var url = 'http://'+window.location.host+'/dashboard';\n",
"element.innerHTML = '<a style=\"color:#76b900;\" target=\"_blank\" href='+url+'><h2>< Link To Streamlit Frontend ></h2></a>';\n"
],
"text/plain": [
"<IPython.core.display.Javascript object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"%%js\n",
"var url = 'http://'+window.location.host+'/dashboard';\n",
"element.innerHTML = '<a style=\"color:#76b900;\" target=\"_blank\" href='+url+'><h2>< Link To Streamlit Frontend ></h2></a>';"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false.\n",
"\u001b[0m\n",
"\u001b[0m\n",
"\u001b[34m\u001b[1m You can now view your Streamlit app in your browser.\u001b[0m\n",
"\u001b[0m\n",
"\u001b[34m Local URL: \u001b[0m\u001b[1mhttp://localhost:8501\u001b[0m\n",
"\u001b[34m Network URL: \u001b[0m\u001b[1mhttp://172.27.20.152:8501\u001b[0m\n",
"\u001b[34m External URL: \u001b[0m\u001b[1mhttp://204.52.27.230:8501\u001b[0m\n",
"\u001b[0m\n"
]
}
],
"source": [
"!~/.local/bin/streamlit run chatbot_DLI.py --server.enableCORS=false --server.enableXsrfProtection=false"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Running Langgraph Agent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The streamlit UI calls langgraph_agent.py in the background. The agent logs its outputs to agent.log, which are in turn displayed on the UI. You may run the script to see how the agent works. Log files are written to in the `/llm-slicing-5g-lab/logs` directory. Run the following commands in separate terminals to stream logs for agent, UE1 and UE2 respectively.\n",
"\n",
"```sh\n",
"tail -f /llm-slicing-5g-lab/logs/agent.log\n",
"tail -f /llm-slicing-5g-lab/logs/UE2_iperfc.log\n",
"tail -f /llm-slicing-5g-lab/logs/UE1_iperfc.log\n",
"```\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!python3 langgraph_agent.py"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}