Text-to-LLM Telegram Bot

A production-ready FastAPI backend that connects Telegram users to LLM providers (OpenAI, Anthropic Claude, xAI Grok). Built for CSC4330 Software Systems Design.

Architecture

User → Telegram Bot → FastAPI Server (/webhook) → LLM API → FastAPI Server → Telegram Bot → User
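The round trip above can be reduced to a single handler function. The sketch below is illustrative only, with hypothetical names (`handle_update`, stub transports) rather than the repo's actual code:

```python
# Illustrative sketch of the request flow (hypothetical names, not the
# repo's actual functions): an incoming update is parsed, the user's text
# goes to the LLM, and the reply is routed back to the originating chat.

def handle_update(update: dict, llm_call, send_message) -> None:
    """Process one Telegram update dict end to end."""
    message = update.get("message") or {}
    chat_id = message.get("chat", {}).get("id")
    text = message.get("text")
    if chat_id is None or not text:
        return  # ignore non-text updates (stickers, edits, etc.)
    reply = llm_call(text)        # FastAPI server -> LLM API
    send_message(chat_id, reply)  # FastAPI server -> Telegram -> user

# Wiring with stub transports to show the flow end to end:
sent = []
handle_update(
    {"message": {"chat": {"id": 42}, "text": "hi"}},
    llm_call=lambda t: f"echo: {t}",
    send_message=lambda cid, txt: sent.append((cid, txt)),
)
```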

Features

  • Multi-Provider Support: Easily switch between OpenAI, Anthropic, and xAI
  • Webhook-Based: Efficient real-time message processing
  • Production Ready: Includes logging, error handling, and health checks
  • Type-Safe: Full Pydantic models for data validation
  • Easy Deployment: Simple environment variable configuration

Project Structure

/project-root
│
├── app/
│   ├── __init__.py
│   ├── main.py                  # FastAPI server with /webhook endpoint
│   ├── config.py                # Environment variable configuration
│   ├── telegram_controller.py   # Telegram API interactions
│   ├── llm_service.py           # LLM provider integrations
│   ├── models/
│   │   ├── __init__.py
│   │   └── message.py           # Pydantic models
│   └── utils/
│       ├── __init__.py
│       └── logger.py            # Logging utilities
│
├── requirements.txt
├── README.md
└── .env                         # Create this from .env.example

Setup

1. Prerequisites

  • Python 3.8+
  • Telegram Bot Token (from @BotFather)
  • LLM API Key (OpenAI, Anthropic, or xAI)

2. Installation

Clone the repository:

git clone https://github.com/vekoLSU/Text-to-LLM-CSC4330.git
cd Text-to-LLM-CSC4330

Install dependencies:

pip install -r requirements.txt

3. Configuration

Create a .env file in the root directory with the following variables:

# Telegram Bot Configuration
TELEGRAM_BOT_TOKEN=your_telegram_bot_token_here

# LLM Configuration
LLM_PROVIDER=openai          # Options: openai, anthropic, xai
LLM_API_KEY=your_llm_api_key_here
LLM_MODEL=gpt-4o-mini        # See model options below

# Server Configuration (Optional)
HOST=0.0.0.0
PORT=8000
MAX_TOKENS=1000
DEBUG=False
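For reference, config.py presumably reads these variables at startup. Here is a minimal sketch using only the standard library (the variable names match the list above; the `load_config` function itself is hypothetical):

```python
# Hypothetical config loader mirroring the .env variables above.
# TELEGRAM_BOT_TOKEN and LLM_API_KEY are treated as required; the rest
# fall back to the documented defaults.
import os

def load_config() -> dict:
    return {
        "telegram_bot_token": os.environ["TELEGRAM_BOT_TOKEN"],  # required
        "llm_provider": os.getenv("LLM_PROVIDER", "openai"),
        "llm_api_key": os.environ["LLM_API_KEY"],                # required
        "llm_model": os.getenv("LLM_MODEL", "gpt-4o-mini"),
        "host": os.getenv("HOST", "0.0.0.0"),
        "port": int(os.getenv("PORT", "8000")),
        "max_tokens": int(os.getenv("MAX_TOKENS", "1000")),
        "debug": os.getenv("DEBUG", "False").lower() == "true",
    }

# Example with placeholder credentials:
os.environ.setdefault("TELEGRAM_BOT_TOKEN", "test-token")
os.environ.setdefault("LLM_API_KEY", "test-key")
cfg = load_config()
```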

Supported Models

OpenAI:

  • gpt-4o-mini
  • gpt-4o
  • gpt-4-turbo
  • gpt-3.5-turbo

Anthropic:

  • claude-3-5-sonnet-20241022
  • claude-3-opus-20240229
  • claude-3-haiku-20240307

xAI:

  • grok-beta

4. Getting API Keys

Telegram Bot Token:

  1. Message @BotFather on Telegram
  2. Send /newbot and follow instructions
  3. Copy the token provided

OpenAI API Key:

  • Create a key in the OpenAI dashboard at https://platform.openai.com

Anthropic API Key:

  • Create a key in the Anthropic console at https://console.anthropic.com

xAI API Key:

  • Create a key in the xAI console at https://console.x.ai

Running the Server

Local Development

Start the server with auto-reload:

uvicorn app.main:app --reload

Or run uvicorn as a Python module:

python -m uvicorn app.main:app --reload

The server will start at http://localhost:8000

Production

Run without reload:

uvicorn app.main:app --host 0.0.0.0 --port 8000

Setting Up Telegram Webhook

Your server must be publicly accessible via HTTPS. Once deployed:

Method 1: Browser

Navigate to:

https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook?url=https://yourdomain.com/webhook

Replace:

  • <YOUR_BOT_TOKEN> with your actual bot token
  • yourdomain.com with your server's domain

Method 2: API Endpoint

Send a POST request to your server:

curl -X POST "http://localhost:8000/set-webhook" \
  -H "Content-Type: application/json" \
  -d '{"webhook_url": "https://yourdomain.com/webhook"}'

Method 3: Using curl directly

curl -X POST "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://yourdomain.com/webhook"}'
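Either curl call can also be made from Python. The sketch below only builds the request object so it can be inspected without a network round trip (`build_set_webhook_request` is a hypothetical helper, not part of the repo):

```python
# Build (but do not send) the Telegram setWebhook request.
from urllib.parse import quote
from urllib.request import Request

def build_set_webhook_request(bot_token: str, webhook_url: str) -> Request:
    api = f"https://api.telegram.org/bot{bot_token}/setWebhook"
    # Form-encode the webhook URL so ':' and '/' survive transport.
    data = f"url={quote(webhook_url, safe='')}".encode()
    return Request(api, data=data, method="POST")

req = build_set_webhook_request("123:ABC", "https://example.com/webhook")
# Sending it would be: urllib.request.urlopen(req)
```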

Verify Webhook Status

https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getWebhookInfo

Testing

1. Health Check

curl http://localhost:8000/health

Expected response:

{"status": "healthy"}

2. Test Webhook Locally

Send a test webhook payload:

curl -X POST "http://localhost:8000/webhook" \
  -H "Content-Type: application/json" \
  -d '{
    "update_id": 123456789,
    "message": {
      "message_id": 1,
      "from": {
        "id": 123456,
        "is_bot": false,
        "first_name": "Test"
      },
      "chat": {
        "id": 123456,
        "type": "private"
      },
      "date": 1234567890,
      "text": "Hello, bot!"
    }
  }'
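The payload above follows Telegram's Update schema. As a rough illustration of the validation app/models/message.py performs, here is a stdlib-only sketch; dataclasses stand in for the repo's Pydantic models, and only a subset of fields is shown:

```python
# Hypothetical, simplified stand-in for the repo's Pydantic models.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Chat:
    id: int
    type: str

@dataclass
class Message:
    message_id: int
    chat: Chat
    date: int
    text: Optional[str] = None  # absent for stickers, photos, etc.

def parse_update(payload: dict) -> Optional[Message]:
    msg = payload.get("message")
    if msg is None:
        return None  # e.g. edited_message or channel_post updates
    return Message(
        message_id=msg["message_id"],
        chat=Chat(id=msg["chat"]["id"], type=msg["chat"]["type"]),
        date=msg["date"],
        text=msg.get("text"),
    )

update = {"update_id": 123456789,
          "message": {"message_id": 1,
                      "chat": {"id": 123456, "type": "private"},
                      "date": 1234567890,
                      "text": "Hello, bot!"}}
m = parse_update(update)
```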

3. Test with Telegram

  1. Find your bot on Telegram using its username
  2. Send it a message
  3. Check server logs to see the request processing

Switching LLM Providers

To switch between providers, simply update your .env file:

For OpenAI:

LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o-mini

For Anthropic:

LLM_PROVIDER=anthropic
LLM_API_KEY=sk-ant-...
LLM_MODEL=claude-3-5-sonnet-20241022

For xAI:

LLM_PROVIDER=xai
LLM_API_KEY=xai-...
LLM_MODEL=grok-beta

Then restart the server:

uvicorn app.main:app --reload
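Under the hood, llm_service.py presumably selects a client based on LLM_PROVIDER. A hypothetical sketch of that dispatch, with stub callables standing in for real API clients:

```python
# Hypothetical provider dispatch; the real llm_service.py may differ.
# Each value would normally wrap the provider's SDK call.
def make_llm_client(provider: str):
    providers = {
        "openai": lambda prompt: f"[openai] {prompt}",
        "anthropic": lambda prompt: f"[anthropic] {prompt}",
        "xai": lambda prompt: f"[xai] {prompt}",
    }
    try:
        return providers[provider]
    except KeyError:
        raise ValueError(f"Unsupported LLM_PROVIDER: {provider!r}")

client = make_llm_client("anthropic")
```

Failing fast on an unknown provider means a typo in .env surfaces at startup rather than on the first user message.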

API Endpoints

GET /

Health check and service info

GET /health

Simple health check endpoint

POST /webhook

Main webhook endpoint for Telegram updates

POST /set-webhook

Helper endpoint to programmatically set Telegram webhook

Request:

{
  "webhook_url": "https://yourdomain.com/webhook"
}

Logging

Logs are written to the console (stdout/stderr) in the following format:

2024-01-15 10:30:45 - text-to-llm - INFO - Message received from chat 123456

View logs in real-time:

uvicorn app.main:app --reload 2>&1 | tee bot.log
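A sketch of how app/utils/logger.py could produce the format shown above using only the standard logging module (assumed, not verified against the repo):

```python
# Hypothetical logger factory matching the documented log format:
# 2024-01-15 10:30:45 - text-to-llm - INFO - <message>
import logging

def get_logger(name: str = "text-to-llm") -> logging.Logger:
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on re-import
        handler = logging.StreamHandler()  # stderr by default
        handler.setFormatter(logging.Formatter(
            "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
            datefmt="%Y-%m-%d %H:%M:%S",
        ))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

log = get_logger()
log.info("Message received from chat 123456")
```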
