175 changes: 175 additions & 0 deletions community-contributions/shabsi4u/LLM_tutor/README.md
@@ -0,0 +1,175 @@
# LLM Tutor

An intelligent tutoring system that uses Large Language Models to provide structured, educational responses to user questions. The system employs a two-stage approach: first structuring the user's question with a meta-prompt, then generating a comprehensive educational response using a system prompt and few-shot examples.

## Features

- **Two-Stage Processing**: Questions are first structured using a meta-prompt, then processed with educational context
- **Multiple Model Support**: Works with both GPT-4o-mini and Llama 3.2 models
- **Few-Shot Learning**: Uses example conversations to improve response quality
- **Streaming Support**: Real-time response generation for better user experience
- **Robust Error Handling**: Comprehensive error handling for file operations and API calls
- **Configurable Prompts**: Easy-to-modify system prompts and meta-prompts
- **Interactive Command Line**: Clean, user-friendly command-line interface

## Installation

1. Clone the repository:
```bash
git clone <repository-url>
cd LLM_tutor
```

2. Install dependencies:
```bash
pip install -r requirements.txt
```

Or using uv (recommended):
```bash
uv add openai python-dotenv python-frontmatter
```

3. Set up environment variables:
Create a `.env` file in the project root with your OpenAI API key:
```
OPENAI_API_KEY=your_api_key_here
```
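
For reference, the application reads this key with `python-dotenv` at startup, roughly as in the sketch below (a simplified excerpt of `get_client` in `llm_tutor.py`):

```python
from dotenv import load_dotenv
import os
from openai import OpenAI

load_dotenv(override=True)             # load OPENAI_API_KEY from .env
api_key = os.getenv("OPENAI_API_KEY")  # None if the variable is missing
client = OpenAI(api_key=api_key)       # client used for GPT-4o-mini requests
```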

## Project Structure

```
LLM_tutor/
├── llm_tutor.py # Main command-line application
├── prompts/
│ ├── system_prompt.txt # System prompt for educational responses
│ ├── meta_prompt.txt # Meta-prompt for question structuring
│ └── few_shots/ # Example conversations for few-shot learning
│ └── *.md # Markdown files with frontmatter
├── requirements.txt # Python dependencies
├── pyproject.toml # Project configuration
└── README.md # This file
```

## Usage

### Command Line Interface

Run the interactive tutor from the command line:

```bash
python llm_tutor.py
```

The program will:
1. Ask you to choose between GPT-4o-mini and Llama 3.2
2. Prompt you to enter your question
3. Structure your question using the meta-prompt
4. Generate an educational response with streaming output
5. Allow for follow-up questions in the same session

### Programmatic Usage

```python
from llm_tutor import LLMTutor, get_client

# Initialize the tutor
client = get_client("openai") # or "ollama"
tutor = LLMTutor('gpt-4o-mini', client)

# Get a structured question
question = "What is machine learning?"
structured_question = tutor.get_structured_question(question)

# Get the educational response (streamed to stdout and returned as a string)
response = tutor.get_response(structured_question, stream=True)
```
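
With `stream=True`, `get_response` prints each chunk as it arrives and returns the accumulated text once the stream finishes; with `stream=False`, it prints the complete reply once and returns it.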

## Configuration

### Prompts

- **System Prompt** (`prompts/system_prompt.txt`): Defines the educational persona and response style
- **Meta Prompt** (`prompts/meta_prompt.txt`): Used to structure and clarify user questions
- **Few-Shot Examples** (`prompts/few_shots/*.md`): Example conversations with frontmatter containing user questions

### Few-Shot Examples Format

Create markdown files in `prompts/few_shots/` with the following structure:

```markdown
---
user: "What is the difference between supervised and unsupervised learning?"
---

Supervised learning uses labeled training data to learn a mapping from inputs to outputs, while unsupervised learning finds patterns in data without explicit labels. For example, supervised learning might learn to classify emails as spam or not spam using examples of each type, while unsupervised learning might group customers by purchasing behavior without knowing the "correct" groups in advance.
```
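
Each example file is parsed with the `python-frontmatter` library and converted into a user/assistant message pair, roughly like this simplified sketch of `get_few_shots` from `llm_tutor.py`:

```python
import frontmatter
from pathlib import Path

few_shot_messages = []
for path in Path("prompts/few_shots/").glob("*.md"):
    post = frontmatter.load(path)  # YAML frontmatter plus markdown body
    few_shot_messages.append({"role": "user", "content": post["user"]})
    few_shot_messages.append({"role": "assistant", "content": post.content.strip()})
```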

## API Requirements

- **OpenAI API Key**: Required for GPT-4o-mini model access
- **Ollama**: Required for local Llama 3.2 model access
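
For the local option, Ollama should be running on its default port (`http://localhost:11434`) and the `llama3.2` model should already be available, e.g. via `ollama pull llama3.2`.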

## Quick Start

1. **Install dependencies**:
```bash
pip install -r requirements.txt
# or with uv: uv add openai python-dotenv python-frontmatter
```

2. **Set up your API key**:
Create a `.env` file with your OpenAI API key:
```
OPENAI_API_KEY=your_api_key_here
```

3. **Run the tutor**:
```bash
python llm_tutor.py
```

4. **Follow the prompts**:
- Choose your model (1 for OpenAI, 2 for Ollama)
- Enter your question
- Ask follow-up questions as needed

## Error Handling

The application includes comprehensive error handling for:
- Missing or invalid API keys
- File not found errors
- Permission errors
- Unicode decoding errors
- API request failures
- Keyboard interrupts

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Troubleshooting

### Common Issues

1. **API Key Error**: Ensure your OpenAI API key is correctly set in the `.env` file
2. **File Not Found**: Check that all prompt files exist in the correct directories
3. **Permission Errors**: Ensure the application has read access to all files
4. **Model Not Available**: Verify that the selected model is available in your environment

### Getting Help

If you encounter issues:
1. Check the error messages for specific guidance
2. Verify your API keys and model availability
3. Ensure all required files are present
4. Check the project structure matches the expected layout
186 changes: 186 additions & 0 deletions community-contributions/shabsi4u/LLM_tutor/llm_tutor.py
@@ -0,0 +1,186 @@
from dotenv import load_dotenv
import os
from openai import OpenAI
from pathlib import Path
import frontmatter

# Constants
MODEL_GPT = 'gpt-4o-mini'
MODEL_LLAMA = 'llama3.2'
SYSTEM_PROMPT_FILE = "prompts/system_prompt.txt"
META_PROMPT_FILE = "prompts/meta_prompt.txt"
FEW_SHOTS_PATH = "prompts/few_shots/"


def get_client(model_type="openai"):
"""
Get OpenAI client configured for either OpenAI API or local Ollama

Args:
model_type (str): Either "openai" or "ollama"

Returns:
OpenAI: Configured client instance
"""
if model_type == "ollama":
# Local Ollama configuration
client = OpenAI(
base_url="http://localhost:11434/v1", # Local Ollama API
api_key="ollama" # Dummy key, required by SDK
)
print("✅ Using local Ollama client")
return client

else: # Default to OpenAI
load_dotenv(override=True)
api_key = os.getenv("OPENAI_API_KEY")
        if not (api_key and api_key.startswith('sk-proj-') and len(api_key) > 10):
            print("⚠️ There might be a problem with your API key")
            print("Make sure you have set OPENAI_API_KEY in your .env file or environment variables")
client = OpenAI(api_key=api_key)
print("✅ Using OpenAI client")
return client

def read_file(file_path):
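    """Read a text file and return its contents, or an empty string if it cannot be read."""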
try:
with open(file_path, 'r', encoding='utf-8') as f:
return f.read()
except FileNotFoundError:
print(f"Warning: File not found: {file_path}")
return ""
except PermissionError:
print(f"Error: Permission denied reading {file_path}")
return ""
except IsADirectoryError:
print(f"Error: {file_path} is a directory, not a file")
return ""
except UnicodeDecodeError as e:
print(f"Error: Unable to decode {file_path}: {e}")
return ""
except Exception as e:
print(f"Unexpected error reading {file_path}: {e}")
return ""

def get_system_prompt():
return read_file(SYSTEM_PROMPT_FILE)

def get_meta_prompt():
return read_file(META_PROMPT_FILE)

def get_few_shots():
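    """Load few-shot examples from prompts/few_shots/*.md and return them as chat messages."""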
messages = []
few_shots_dir = Path(FEW_SHOTS_PATH)

if not few_shots_dir.exists():
print(f"Warning: Few shots directory {FEW_SHOTS_PATH} does not exist")
return messages

for path in few_shots_dir.glob("*.md"):
try:
post = frontmatter.load(path)

# Validate required fields exist
if "user" not in post:
print(f"Warning: Missing 'user' field in {path}")
continue

if not post.content.strip():
print(f"Warning: Empty content in {path}")
continue

messages.append({"role": "user", "content": post["user"]})
messages.append({"role": "assistant", "content": post.content.strip()})

except Exception as e:
print(f"Error loading {path}: {e}")
continue

return messages

def build_messages(system_prompt, user_prompt, few_shots=None):
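    """Assemble the message list: system prompt, optional few-shot examples, then the user prompt."""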
messages = [{"role": "system", "content": system_prompt}]
if few_shots:
messages.extend(few_shots)
messages.append({"role": "user", "content": user_prompt})
return messages

def get_response(model, client, messages, stream=False):
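    """Send the messages to the chat completions API and return the (optionally streamed) response."""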
response = client.chat.completions.create(
model=model,
messages=messages,
stream=stream)
return response


class LLMTutor:
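    """Two-stage tutor: structures the question with the meta-prompt, then answers it using the system prompt and few-shot examples."""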
def __init__(self, model, client):
self.model = model
self.client = client

def get_structured_question(self, question):
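        """Rewrite the raw question into a structured form using the meta-prompt."""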
meta_prompt = get_meta_prompt()
messages = build_messages(meta_prompt, question)
response = get_response(self.model, self.client, messages)
return response.choices[0].message.content

def get_response(self, structured_question, stream=False):
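        """Generate the educational response, optionally streaming it to stdout, and return the full text."""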
system_prompt = get_system_prompt()
few_shots = get_few_shots()
messages = build_messages(system_prompt, structured_question, few_shots)
response = get_response(self.model, self.client, messages, stream)

if stream:
# Handle streaming response
content = ""
for chunk in response:
if chunk.choices[0].delta.content is not None:
content += chunk.choices[0].delta.content
print(chunk.choices[0].delta.content, end="", flush=True)
print() # Add newline after streaming
return content
else:
# Handle non-streaming response
print(response.choices[0].message.content)
return response.choices[0].message.content

def main():
try:
print("Enter the model you want to use: [1] GPT-4o-mini, [2] Llama 3.2 (local)")
model_choice = input()
if model_choice == "1":
model = MODEL_GPT
client = get_client("openai")
elif model_choice == "2":
model = MODEL_LLAMA
client = get_client("ollama")
else:
print("Invalid model choice. Enter 1 for GPT-4o-mini or 2 for Llama 3.2 (local)")
return

question = input("Enter the question you want to ask: ")

llm_tutor = LLMTutor(model, client)
recur = True
while recur:
print("\nStructuring your question...")
structured_question = llm_tutor.get_structured_question(question)
print("\nGenerating response...")
response = llm_tutor.get_response(structured_question, stream=True)

question = input("Do you wish me to answer any of the follow up questions? (y/n): ")
if question == "y":
question = input("Enter the follow up question you want to ask: ")
else:
recur = False

except KeyboardInterrupt:
print("\n\nOperation cancelled by user.")
except Exception as e:
print(f"\nError: {e}")

if __name__ == "__main__":
main()
