A production-ready FastAPI backend that connects Telegram users to LLM providers (OpenAI, Anthropic Claude, xAI Grok). Built for CSC4330 Software Systems Design.
User → Telegram Bot → FastAPI Server (/webhook) → LLM API → FastAPI Server → Telegram Bot → User
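The round trip above can be sketched as a pure handler function. This is an illustrative sketch, not the project's actual code: `handle_update` and `ask_llm` are hypothetical names, and the real server would POST the returned payload back to the Telegram Bot API.

```python
from typing import Callable, Optional


def handle_update(update: dict, ask_llm: Callable[[str], str]) -> Optional[dict]:
    """Turn one Telegram update into a sendMessage payload.

    `ask_llm` stands in for the real LLM provider call: it maps the
    user's text to a response string.
    """
    message = update.get("message")
    if not message or "text" not in message:
        return None  # ignore non-text updates (stickers, channel posts, ...)
    reply = ask_llm(message["text"])
    # This shape matches the chat_id/text parameters of Telegram's sendMessage
    return {"chat_id": message["chat"]["id"], "text": reply}
```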
- Multi-Provider Support: Easily switch between OpenAI, Anthropic, and xAI
- Webhook-Based: Efficient real-time message processing
- Production Ready: Includes logging, error handling, and health checks
- Type-Safe: Full Pydantic models for data validation
- Easy Deployment: Simple environment variable configuration
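The type-safe validation could look like this minimal Pydantic sketch; the actual models in `app/models/message.py` may differ in names and fields.

```python
from typing import Optional

from pydantic import BaseModel


class Chat(BaseModel):
    id: int
    type: str


class Message(BaseModel):
    message_id: int
    chat: Chat
    text: Optional[str] = None  # absent for photos, stickers, etc.


class TelegramUpdate(BaseModel):
    update_id: int
    message: Optional[Message] = None


# Parsing a raw webhook payload validates the types as a side effect
update = TelegramUpdate(**{
    "update_id": 123456789,
    "message": {
        "message_id": 1,
        "chat": {"id": 123456, "type": "private"},
        "text": "Hello, bot!",
    },
})
```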
```
/project-root
│
├── app/
│   ├── __init__.py
│   ├── main.py                  # FastAPI server with /webhook endpoint
│   ├── config.py                # Environment variable configuration
│   ├── telegram_controller.py   # Telegram API interactions
│   ├── llm_service.py           # LLM provider integrations
│   ├── models/
│   │   ├── __init__.py
│   │   └── message.py           # Pydantic models
│   └── utils/
│       ├── __init__.py
│       └── logger.py            # Logging utilities
│
├── requirements.txt
├── README.md
└── .env                         # Create this from .env.example
```
- Python 3.8+
- Telegram Bot Token (from @BotFather)
- LLM API Key (OpenAI, Anthropic, or xAI)
Clone the repository:

```bash
git clone https://github.com/vekoLSU/Text-to-LLM-CSC4330.git
cd Text-to-LLM-CSC4330
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Create a `.env` file in the root directory with the following variables:
```env
# Telegram Bot Configuration
TELEGRAM_BOT_TOKEN=your_telegram_bot_token_here

# LLM Configuration
LLM_PROVIDER=openai    # Options: openai, anthropic, xai
LLM_API_KEY=your_llm_api_key_here
LLM_MODEL=gpt-4o-mini  # See model options below

# Server Configuration (Optional)
HOST=0.0.0.0
PORT=8000
MAX_TOKENS=1000
DEBUG=False
```

OpenAI:
- `gpt-4o-mini`
- `gpt-4o`
- `gpt-4-turbo`
- `gpt-3.5-turbo`
Anthropic:
- `claude-3-5-sonnet-20241022`
- `claude-3-opus-20240229`
- `claude-3-haiku-20240307`
xAI:
- `grok-beta`
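A stdlib-only sketch of how `app/config.py` might read the variables above. This is an assumption for illustration: the real module may use `pydantic-settings` or `python-dotenv`, and `Settings` is a hypothetical name.

```python
import os


class Settings:
    """Reads the environment variables described above.

    Defaults mirror the optional server settings; the token and API key
    are required and raise KeyError if missing.
    """

    def __init__(self, env=os.environ):
        self.telegram_bot_token = env["TELEGRAM_BOT_TOKEN"]   # required
        self.llm_provider = env.get("LLM_PROVIDER", "openai")
        self.llm_api_key = env["LLM_API_KEY"]                 # required
        self.llm_model = env.get("LLM_MODEL", "gpt-4o-mini")
        self.host = env.get("HOST", "0.0.0.0")
        self.port = int(env.get("PORT", "8000"))
        self.max_tokens = int(env.get("MAX_TOKENS", "1000"))
        self.debug = env.get("DEBUG", "False").lower() == "true"
```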
Telegram Bot Token:
- Message @BotFather on Telegram
- Send `/newbot` and follow the instructions
- Copy the token provided
OpenAI API Key:
- Get it from: https://platform.openai.com/api-keys
Anthropic API Key:
- Get it from: https://console.anthropic.com/settings/keys
xAI API Key:
- Get it from: https://console.x.ai/
Start the server with auto-reload:

```bash
uvicorn app.main:app --reload
```

Or run directly:

```bash
python -m uvicorn app.main:app --reload
```

The server will start at http://localhost:8000.

Run without reload:

```bash
uvicorn app.main:app --host 0.0.0.0 --port 8000
```

Your server must be publicly accessible via HTTPS. Once deployed:
Navigate to:

```
https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook?url=https://yourdomain.com/webhook
```

Replace:
- `<YOUR_BOT_TOKEN>` with your actual bot token
- `yourdomain.com` with your server's domain
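The same URL can be built programmatically; a small sketch (`build_set_webhook_url` is an illustrative helper, not part of the project):

```python
from urllib.parse import quote


def build_set_webhook_url(bot_token: str, webhook_url: str) -> str:
    """Build the setWebhook URL shown above, percent-encoding the target URL."""
    return (
        f"https://api.telegram.org/bot{bot_token}/setWebhook"
        f"?url={quote(webhook_url, safe='')}"
    )
```

Percent-encoding matters here because the webhook URL is itself a query-string value.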
Send a POST request to your server:

```bash
curl -X POST "http://localhost:8000/set-webhook" \
  -H "Content-Type: application/json" \
  -d '{"webhook_url": "https://yourdomain.com/webhook"}'
```

Alternatively, call the Telegram API directly:

```bash
curl -X POST "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://yourdomain.com/webhook"}'
```

Verify the webhook with:

```
https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getWebhookInfo
```

Check that the server is healthy:

```bash
curl http://localhost:8000/health
```

Expected response:

```json
{"status": "healthy"}
```

Send a test webhook payload:
```bash
curl -X POST "http://localhost:8000/webhook" \
  -H "Content-Type: application/json" \
  -d '{
    "update_id": 123456789,
    "message": {
      "message_id": 1,
      "from": {
        "id": 123456,
        "is_bot": false,
        "first_name": "Test"
      },
      "chat": {
        "id": 123456,
        "type": "private"
      },
      "date": 1234567890,
      "text": "Hello, bot!"
    }
  }'
```

To test with your live bot:
- Find your bot on Telegram using its username
- Send it a message
- Check server logs to see the request processing
To switch between providers, simply update your `.env` file.

For OpenAI:

```env
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o-mini
```

For Anthropic:

```env
LLM_PROVIDER=anthropic
LLM_API_KEY=sk-ant-...
LLM_MODEL=claude-3-5-sonnet-20241022
```

For xAI:

```env
LLM_PROVIDER=xai
LLM_API_KEY=xai-...
LLM_MODEL=grok-beta
```

Then restart the server:
```bash
uvicorn app.main:app --reload
```

Endpoints:
- `GET /`: Health check and service info
- `GET /health`: Simple health check endpoint
- `POST /webhook`: Main webhook endpoint for Telegram updates
- `POST /set-webhook`: Helper endpoint to programmatically set the Telegram webhook
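The provider switch could translate into concrete HTTP calls roughly like the sketch below. This is an assumption for illustration: the project's `llm_service.py` may use the official SDKs instead, and `build_llm_request` is a hypothetical helper, though the endpoints and headers follow each vendor's public REST API.

```python
def build_llm_request(provider: str, api_key: str, model: str,
                      prompt: str, max_tokens: int = 1000):
    """Return (url, headers, json_body) for the configured provider."""
    chat_body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    if provider == "openai":
        return ("https://api.openai.com/v1/chat/completions",
                {"Authorization": f"Bearer {api_key}"}, chat_body)
    if provider == "anthropic":
        return ("https://api.anthropic.com/v1/messages",
                {"x-api-key": api_key, "anthropic-version": "2023-06-01"},
                chat_body)
    if provider == "xai":
        # xAI exposes an OpenAI-compatible chat completions endpoint
        return ("https://api.x.ai/v1/chat/completions",
                {"Authorization": f"Bearer {api_key}"}, chat_body)
    raise ValueError(f"Unknown LLM_PROVIDER: {provider}")
```

Keeping the dispatch in one place is what makes switching providers a pure configuration change.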
Request body for `POST /set-webhook`:

```json
{
  "webhook_url": "https://yourdomain.com/webhook"
}
```

Logs are output to stdout with the following format:
```
2024-01-15 10:30:45 - text-to-llm - INFO - Message received from chat 123456
```
View logs in real time:

```bash
uvicorn app.main:app --reload | tee bot.log
```
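A sketch of what `app/utils/logger.py` could look like to produce that format, assuming the stdlib `logging` module (`get_logger` is an illustrative name):

```python
import logging


def get_logger(name: str = "text-to-llm") -> logging.Logger:
    """Configure a stdout logger matching the format shown above."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on re-import
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
            datefmt="%Y-%m-%d %H:%M:%S",
        ))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```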