.env.example
@@ -0,0 +1,8 @@
# Azure OpenAI Configuration
AZURE_OPENAI_API_KEY=your_azure_openai_api_key_here
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT=gpt-4o-mini

# Pushover Notifications (for visitor engagement alerts)
PUSHOVER_USER=your_pushover_user_key
PUSHOVER_TOKEN=your_pushover_app_token
.gitignore
@@ -0,0 +1,33 @@
# Environment variables (contains API keys)
.env
.env.local
.env.production

# Python
__pycache__/
*.py[cod]
*$py.class
*.egg-info/
dist/
build/

# Virtual environments
venv/
.venv/
env/
ENV/

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Logs
*.log

uv.lock
README.md
@@ -0,0 +1,68 @@
# Alter-Ego Chatbot

A professional chatbot that represents you on your website. It answers questions about your background, experience, and skills using Azure OpenAI and Gradio.

## What It Does

- Loads your professional info from a PDF resume/LinkedIn profile and text summary
- Responds to visitor questions about you using Azure OpenAI's GPT-4o-mini
- Captures interested visitor emails and logs unanswered questions
- Sends notifications via Pushover when users engage

## Quick Start

### Requirements
- Python 3.12+
- Azure OpenAI API key and deployment name
- Pushover API credentials (for notifications)

### Setup

1. **Clone and install dependencies:**
```bash
pip install -e .
```

2. **Create a `.env` file from the example:**
```bash
cp .env.example .env
```
Then edit `.env` with your actual values:
```
AZURE_OPENAI_API_KEY=your_key
AZURE_OPENAI_ENDPOINT=your_endpoint
AZURE_OPENAI_DEPLOYMENT=gpt-4o-mini
PUSHOVER_USER=your_pushover_user
PUSHOVER_TOKEN=your_pushover_token
```

3. **Add your data:**
- Place your resume/LinkedIn PDF as `static/profile.pdf`
- Create `static/summary.txt` with a brief professional summary

Comment on lines +39 to +42

Copilot AI Jan 17, 2026


The README instructs users to place static files (profile.pdf and summary.txt) but doesn't mention that these are required for the application to function properly. Consider adding a note about creating the static directory if it doesn't exist, or including placeholder files to make the setup clearer.
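
To make the suggestion concrete, here is a rough sketch of a one-off helper (hypothetical, not included in this PR) that creates the `static/` directory and placeholder files so the setup step above always has somewhere to put the real data:

```python
# setup_static.py -- hypothetical helper, not part of this PR
from pathlib import Path

STATIC_DIR = Path("static")


def ensure_placeholders():
    """Create static/ and placeholder files if they are missing."""
    STATIC_DIR.mkdir(exist_ok=True)

    summary = STATIC_DIR / "summary.txt"
    if not summary.exists():
        summary.write_text("Replace this with your professional summary.", encoding="utf-8")

    profile = STATIC_DIR / "profile.pdf"
    if not profile.exists():
        # Empty marker file; replace it with your exported LinkedIn/resume PDF.
        profile.touch()


if __name__ == "__main__":
    ensure_placeholders()
```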

### Run It

```bash
python main.py
```

Opens a chat interface at `http://localhost:7860`

## How It Works

- **agent.py**: Main chat loop using Azure OpenAI
- **prompt.py**: Loads your profile data and builds the system prompt
- **tools.py**: Handles user email capture and logging unknown questions
- **main.py**: Launches the Gradio interface
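
`tools.py` itself does not appear in this diff. Based on what `agent.py` imports (`handle_tool_calls`, `TOOLS`) and the tool names the system prompt references (`record_user_details`, `record_unknown_question`), a minimal sketch of what it plausibly contains is below; the Pushover call uses the public Pushover message API, but the actual implementation in the PR may differ in its details.

```python
# Hypothetical sketch of tools.py; the file in this PR may differ.
import json
import os
import requests


def push(message):
    """Send a Pushover notification using PUSHOVER_TOKEN / PUSHOVER_USER from .env."""
    requests.post(
        "https://api.pushover.net/1/messages.json",
        data={
            "token": os.getenv("PUSHOVER_TOKEN"),
            "user": os.getenv("PUSHOVER_USER"),
            "message": message,
        },
    )


def record_user_details(email, name="Name not given", notes="not provided"):
    push(f"New lead: {name} ({email}) - {notes}")
    return {"recorded": "ok"}


def record_unknown_question(question):
    push(f"Unanswered question: {question}")
    return {"recorded": "ok"}


# Tool schemas advertised to the model.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "record_user_details",
            "description": "Record a visitor's email address and context for follow-up.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email": {"type": "string"},
                    "name": {"type": "string"},
                    "notes": {"type": "string"},
                },
                "required": ["email"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "record_unknown_question",
            "description": "Log a question the assistant could not answer.",
            "parameters": {
                "type": "object",
                "properties": {"question": {"type": "string"}},
                "required": ["question"],
            },
        },
    },
]


def handle_tool_calls(tool_calls):
    """Dispatch each tool call and return tool-role messages for the next API request."""
    results = []
    for call in tool_calls:
        args = json.loads(call.function.arguments)
        fn = {
            "record_user_details": record_user_details,
            "record_unknown_question": record_unknown_question,
        }.get(call.function.name)
        output = fn(**args) if fn else {"error": f"unknown tool {call.function.name}"}
        results.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(output),
        })
    return results
```

The key contract is that `handle_tool_calls` returns `role: "tool"` messages carrying each `tool_call_id`, which is what the loop in `agent.py` appends to the conversation before calling the API again.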

## Customization

Edit `main.py` to change:
- Your name in `ConversationAgent(name="Your Name")`
- Chat title and description
- Example questions
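
As a concrete illustration, an edited `main.py` might end up looking like the sketch below (the name, description, and example questions are placeholders, not values from this PR):

```python
# Hypothetical customized main.py -- placeholder values throughout
from agent import ConversationAgent
from gradio.chat_interface import ChatInterface


def main():
    agent = ConversationAgent(name="Jane Doe")  # your name here

    ChatInterface(
        fn=agent.chat,
        title=f"Chat with {agent.name}",
        description="Ask me about my work in data engineering.",
        examples=[
            "What's your background?",
            "Which cloud platforms have you used?",
        ],
    ).launch()


if __name__ == "__main__":
    main()
```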

## Notes

- Make sure `static/profile.pdf` and `static/summary.txt` exist or the agent will use placeholder text
- The chatbot stays in character as you and prioritizes answering from your provided context
Comment on lines +67 to +68

Copilot AI Jan 17, 2026


The static directory is referenced in the code but not included in the repository. Users will need to manually create this directory before placing their profile.pdf and summary.txt files. Consider including the static directory with .gitkeep or placeholder files to ensure the directory structure is clear.

agent.py
@@ -0,0 +1,46 @@
from openai import AzureOpenAI
from dotenv import load_dotenv
import os
from tools import handle_tool_calls, TOOLS
from prompt import build_system_prompt

load_dotenv()


class ConversationAgent:
    def __init__(self, name="Harsh Patel"):
        """Initialize the agent with Azure OpenAI client and system prompt."""
        self.client = AzureOpenAI(
            azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT")
Comment on lines +13 to +14

Copilot AI Jan 17, 2026


Missing error handling for when Azure OpenAI credentials are not properly configured. If AZURE_OPENAI_DEPLOYMENT is None or the required environment variables (AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT) are missing, the AzureOpenAI client initialization could fail or behave unexpectedly.

Suggested change
-        self.client = AzureOpenAI(
-            azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT")
+        azure_deployment = os.getenv("AZURE_OPENAI_DEPLOYMENT")
+        azure_api_key = os.getenv("AZURE_OPENAI_API_KEY")
+        azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+        missing_vars = [
+            var_name
+            for var_name, value in [
+                ("AZURE_OPENAI_DEPLOYMENT", azure_deployment),
+                ("AZURE_OPENAI_API_KEY", azure_api_key),
+                ("AZURE_OPENAI_ENDPOINT", azure_endpoint),
+            ]
+            if not value
+        ]
+        if missing_vars:
+            raise RuntimeError(
+                f"Missing required Azure OpenAI configuration: {', '.join(missing_vars)}"
+            )
+        self.client = AzureOpenAI(
+            azure_deployment=azure_deployment,
+            api_key=azure_api_key,
+            azure_endpoint=azure_endpoint,

        )
        self.name = name
        self.system_prompt = build_system_prompt(name)

    def chat(self, message, history):
        messages = (
            [{"role": "system", "content": self.system_prompt}]
            + history
            + [{"role": "user", "content": message}]
        )

        done = False
        while not done:
            response = self.client.chat.completions.create(
                model="gpt-4o-mini",

Copilot AI Jan 17, 2026


The hardcoded model name "gpt-4o-mini" should be configurable or match the deployment name. Azure OpenAI uses deployment names, not model names directly. This parameter might be unnecessary or could cause confusion if the deployment uses a different model.

Suggested change
model="gpt-4o-mini",
model=os.getenv("AZURE_OPENAI_DEPLOYMENT", "gpt-4o-mini"),

                messages=messages,
                tools=TOOLS
            )

            finish_reason = response.choices[0].finish_reason

            if finish_reason == "tool_calls":
                message_with_tool_calls = response.choices[0].message
                tool_calls = message_with_tool_calls.tool_calls
                tool_results = handle_tool_calls(tool_calls)

                messages.append(message_with_tool_calls)
                messages.extend(tool_results)
            else:
                done = True
Comment on lines +26 to +44

Copilot AI Jan 17, 2026


The while loop that handles tool calls has no maximum iteration limit. If the LLM continuously returns tool_calls without reaching a stop condition, this could result in an infinite loop, excessive API calls, and high costs. Consider adding a max_iterations counter to prevent runaway execution.
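
The comment does not include a suggestion, but a bounded version of the loop inside `chat` might look like the following sketch (the iteration limit and fallback message are illustrative, not part of this PR):

```python
        # Sketch only: cap the tool-call loop to avoid runaway API usage.
        MAX_TOOL_ITERATIONS = 5  # illustrative limit, not from this PR
        for _ in range(MAX_TOOL_ITERATIONS):
            response = self.client.chat.completions.create(
                model="gpt-4o-mini",
                messages=messages,
                tools=TOOLS,
            )
            if response.choices[0].finish_reason != "tool_calls":
                return response.choices[0].message.content
            message_with_tool_calls = response.choices[0].message
            messages.append(message_with_tool_calls)
            messages.extend(handle_tool_calls(message_with_tool_calls.tool_calls))
        # Give up gracefully if the model keeps requesting tools.
        return "Sorry, I couldn't complete that request. Please try rephrasing."
```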


        return response.choices[0].message.content
Comment on lines +26 to +46

Copilot AI Jan 17, 2026


The chat method lacks error handling for API failures. If the Azure OpenAI API call fails due to network issues, rate limits, or API errors, the exception will propagate to Gradio causing a poor user experience. Consider wrapping the API call in a try-except block to return a graceful error message to the user.

Suggested change
-        done = False
-        while not done:
-            response = self.client.chat.completions.create(
-                model="gpt-4o-mini",
-                messages=messages,
-                tools=TOOLS
-            )
-            finish_reason = response.choices[0].finish_reason
-            if finish_reason == "tool_calls":
-                message_with_tool_calls = response.choices[0].message
-                tool_calls = message_with_tool_calls.tool_calls
-                tool_results = handle_tool_calls(tool_calls)
-                messages.append(message_with_tool_calls)
-                messages.extend(tool_results)
-            else:
-                done = True
-        return response.choices[0].message.content
+        try:
+            done = False
+            while not done:
+                response = self.client.chat.completions.create(
+                    model="gpt-4o-mini",
+                    messages=messages,
+                    tools=TOOLS
+                )
+                finish_reason = response.choices[0].finish_reason
+                if finish_reason == "tool_calls":
+                    message_with_tool_calls = response.choices[0].message
+                    tool_calls = message_with_tool_calls.tool_calls
+                    tool_results = handle_tool_calls(tool_calls)
+                    messages.append(message_with_tool_calls)
+                    messages.extend(tool_results)
+                else:
+                    done = True
+            return response.choices[0].message.content
+        except Exception as e:
+            # Optional: log the exception for debugging purposes
+            print(f"Error during Azure OpenAI chat completion: {e}")
+            return "Sorry, I'm having trouble reaching the AI service right now. Please try again later."

main.py
@@ -0,0 +1,24 @@
from agent import ConversationAgent


def main():
    """Initialize and launch the chat interface."""
    from gradio.chat_interface import ChatInterface

    # TODO: Change this to your actual name
    agent = ConversationAgent(name="Harsh Patel")
Comment on lines +2 to +9

Copilot AI Jan 17, 2026


The name "Harsh Patel" is hardcoded as the default value in multiple places (main.py line 9 and agent.py line 11). This creates a maintenance burden if someone wants to change it. Consider centralizing this configuration, perhaps in a config file or as an environment variable, to avoid needing to update it in multiple locations.

Suggested change
-def main():
-    """Initialize and launch the chat interface."""
-    from gradio.chat_interface import ChatInterface
-    # TODO: Change this to your actual name
-    agent = ConversationAgent(name="Harsh Patel")
+import os
+def main():
+    """Initialize and launch the chat interface."""
+    from gradio.chat_interface import ChatInterface
+    # TODO: Change this to your actual name
+    default_name = os.getenv("CHATBOT_DEFAULT_NAME", "Harsh Patel")
+    agent = ConversationAgent(name=default_name)


    ChatInterface(
        fn=agent.chat,
        title=f"Chat with {agent.name}",
        description="Ask me anything about my professional background, experience, and skills.",
        examples=[
            "What's your background?",
            "Tell me about your technical skills",
            "What kind of projects have you worked on?",
        ],
    ).launch()


if __name__ == "__main__":
    main()
prompt.py
@@ -0,0 +1,86 @@
from pypdf import PdfReader
import os


def load_linkedin_profile(pdf_path="static/profile.pdf"):
    """Load and extract text from LinkedIn profile PDF."""
    if os.path.exists(pdf_path):
        reader = PdfReader(pdf_path)
        content = ""
        for page in reader.pages:
            text = page.extract_text()
            if text:
                content += text
        return content
Comment on lines +2 to +14

Copilot AI Jan 17, 2026


The load_linkedin_profile function could fail silently if the PDF is corrupted or unreadable. Consider adding error handling around the PDF reading operations to catch PdfReader exceptions and provide a more informative error message.

Suggested change
-import os
-def load_linkedin_profile(pdf_path="static/profile.pdf"):
-    """Load and extract text from LinkedIn profile PDF."""
-    if os.path.exists(pdf_path):
-        reader = PdfReader(pdf_path)
-        content = ""
-        for page in reader.pages:
-            text = page.extract_text()
-            if text:
-                content += text
-        return content
+from pypdf.errors import PdfReadError
+import os
+def load_linkedin_profile(pdf_path="static/profile.pdf"):
+    """Load and extract text from LinkedIn profile PDF."""
+    if os.path.exists(pdf_path):
+        try:
+            reader = PdfReader(pdf_path)
+            content = ""
+            for page in reader.pages:
+                text = page.extract_text()
+                if text:
+                    content += text
+            # If the PDF was readable but contained no extractable text
+            return content or "Profile PDF could not be read or was empty."
+        except PdfReadError as e:
+            return f"Error reading profile PDF: {e}"
+        except Exception:
+            # Fallback for any other unexpected error during PDF processing
+            return "An unexpected error occurred while reading the profile PDF."

return "Profile PDF not found."


def load_summary(summary_path="static/summary.txt"):
    """Load the professional summary from text file."""
    if os.path.exists(summary_path):
        with open(summary_path, "r", encoding="utf-8") as f:
            return f.read()
    return "Summary text not found."


def build_system_prompt(name="Harsh Patel"):
    summary = load_summary()
    linkedin_profile = load_linkedin_profile()

    prompt = f"""You are {name}'s AI representative on their professional website.

## Your Role and Responsibilities:

You represent {name} for all interactions on this website. Your primary goals are:

1. **Information Provider**: Answer questions about {name}'s:
- Professional background and experience
- Technical skills and expertise
- Education and achievements
- Career trajectory and current focus
- Notable projects and accomplishments

2. **Engagement Facilitator**:
- Maintain a professional yet personable tone
- Engage visitors as potential clients, collaborators, or employers
- Show genuine interest in the visitor's needs and questions
- Keep conversations focused and productive

3. **Lead Capture**:
- When appropriate, guide interested visitors toward direct contact
- Politely request contact information (especially email addresses)
- Use the record_user_details tool to capture visitor information
- Record context about why they're interested for follow-up

4. **Continuous Improvement**:
- Use record_unknown_question tool for ANY question you cannot confidently answer
- This includes questions about personal details, preferences, or anything not in your knowledge base
- Even trivial questions should be logged to improve future responses

## Communication Guidelines:

- Be conversational but professional
- Provide specific, relevant details from the available information
- If uncertain, acknowledge it gracefully and log the question
- Proactively suggest next steps (e.g., "Would you like to connect via email?")
- Avoid being overly salesy; focus on authentic value and connection

## Available Context:

### Professional Summary:
{summary}

### LinkedIn Profile:
{linkedin_profile}

## Important Notes:

- Always stay in character as {name}
- Use the provided context to give accurate, detailed responses
- When you don't know something, always log it with record_unknown_question
- Prioritize building genuine connections with visitors
- Your responses should reflect {name}'s professional voice and expertise

Now, engage with the visitor and represent {name} to the best of your ability."""

    return prompt
pyproject.toml
@@ -0,0 +1,13 @@
[project]
name = "alter-ego-gradio-chatbot-usingazureopenai"
version = "0.1.0"
description = "A professional chatbot that represents you on your website. It answers questions about your background, experience, and skills using Azure OpenAI and Gradio."
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"gradio>=6.3.0",
"openai>=2.15.0",
"pypdf>=6.6.0",
"python-dotenv>=1.2.1",
"requests>=2.32.5",
]
@@ -0,0 +1,22 @@
# Example summary.txt

Replace this file with your own professional summary. This should be a comprehensive overview of:

- Your professional background and experience
- Technical skills and expertise areas
- Notable projects and achievements
- Education and certifications
- Personal interests and values
- Career objectives and what you're looking for

Keep it conversational but professional - this forms the foundation of how your AI chatbot will represent you.

Example structure:
- Start with a brief intro (role, location, years of experience)
- Highlight key technical skills and tools
- Mention domain expertise and notable projects
- Include education/background
- Add personal touches (hobbies, interests, values)
- End with professional philosophy or what you're seeking

See the current summary.txt for a complete example.