Community Contribution for Week 1: add Alter-Ego chatbot using Azure OpenAI and Gradio #553
base: main
Conversation
Pull request overview
This pull request adds a community contribution for an Alter-Ego chatbot that represents users on their websites using Azure OpenAI's GPT-4o-mini and Gradio. The chatbot loads professional information from PDF resumes and text summaries, answers visitor questions, captures interested visitor emails, logs unanswered questions, and sends engagement notifications via Pushover.
Changes:
- Added a complete chatbot application with Azure OpenAI integration and Gradio interface
- Implemented tool functions for capturing user details and logging unknown questions with Pushover notifications
- Included configuration files, documentation, and example files for easy customization
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| tools.py | Implements Pushover notifications and tool functions for email capture and question logging |
| agent.py | Main conversation agent class with Azure OpenAI chat completion loop and tool call handling |
| prompt.py | Loads professional data from PDF and text files to build the system prompt |
| main.py | Entry point that initializes the agent and launches the Gradio chat interface |
| pyproject.toml | Project metadata and dependencies (Python 3.12+, gradio, openai, pypdf, python-dotenv, requests) |
| README.md | Complete setup instructions, usage guide, and customization notes |
| .gitignore | Excludes sensitive files, Python artifacts, and IDE/OS specific files |
| .env.example | Template for Azure OpenAI and Pushover API credentials |
| static/summary.txt.example | Example template for users to create their professional summary |
```python
def record_user_details(email, name="Name not provided", notes="not provided"):
    """
    Record user contact details when they express interest.

    Args:
        email (str): User's email address
        name (str): User's name (optional)
        notes (str): Additional context about the conversation

    Returns:
        dict: Status confirmation
    """
    push(f"Recording interest from {name} with email {email} and notes {notes}")
    return {"recorded": "ok", "message": "Thank you! Your information has been recorded."}
```
Copilot AI · Jan 17, 2026
The email parameter should be validated to ensure it's a properly formatted email address before recording. Without validation, malformed or malicious input could be stored and sent via push notifications.
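A minimal validation sketch along those lines; the `is_valid_email` helper and its `EMAIL_RE` pattern are illustrative assumptions, not code from this PR:

```python
import re

# Deliberately permissive pattern: "something@something.tld", no whitespace.
# Full RFC 5322 validation is far more complex; this only rejects obvious junk.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(email):
    """Return True if the string looks like a plausible email address."""
    return bool(EMAIL_RE.match(email))
```

`record_user_details` could then reject malformed input early, e.g. returning `{"recorded": "error", "message": "Please provide a valid email address."}` instead of pushing the notification.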
```python
def push(message):
    """Send a push notification via Pushover API."""
    print(f"Push: {message}")
    payload = {"user": pushover_user, "token": pushover_token, "message": message}
    try:
        requests.post(pushover_url, data=payload)
    except Exception as e:
        print(f"Error sending push notification: {e}")
```
Missing error handling for when Pushover credentials are not configured. If PUSHOVER_USER or PUSHOVER_TOKEN are None or empty, the function will still attempt to send a request with invalid credentials, which could fail silently or cause unexpected behavior.
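One way to add that guard is a sketch like the following. It uses the standard-library `urllib` instead of `requests` so the snippet is dependency-free, and the early-return behavior is an assumption about the desired design, not the PR's code:

```python
import os
import urllib.parse
import urllib.request

PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def push(message, user=None, token=None):
    """Send a Pushover notification; skip cleanly when credentials are missing."""
    user = user or os.getenv("PUSHOVER_USER")
    token = token or os.getenv("PUSHOVER_TOKEN")
    print(f"Push: {message}")
    if not user or not token:
        # Avoid a doomed HTTP request when credentials are absent
        print("Pushover credentials not configured; skipping notification.")
        return False
    data = urllib.parse.urlencode(
        {"user": user, "token": token, "message": message}
    ).encode()
    try:
        urllib.request.urlopen(urllib.request.Request(PUSHOVER_URL, data=data), timeout=10)
        return True
    except OSError as e:
        print(f"Error sending push notification: {e}")
        return False
```

Returning a boolean also lets callers distinguish "not configured" from "sent".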
```python
results = []
for tool_call in tool_calls:
    tool_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)
```
The JSON parsing of tool call arguments lacks error handling. If the LLM returns malformed JSON in tool_call.function.arguments, json.loads will raise a JSONDecodeError that is not caught, potentially crashing the application.
Suggested change:

```python
raw_arguments = tool_call.function.arguments
try:
    arguments = json.loads(raw_arguments)
except json.JSONDecodeError as e:
    print(
        f"Error decoding JSON arguments for tool '{tool_name}': {e}. "
        f"Raw arguments: {raw_arguments}",
        flush=True,
    )
    raise ValueError(
        f"Invalid JSON in arguments for tool '{tool_name}'"
    ) from e
```
```python
done = False
while not done:
    response = self.client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        tools=TOOLS
    )

    finish_reason = response.choices[0].finish_reason

    if finish_reason == "tool_calls":
        message_with_tool_calls = response.choices[0].message
        tool_calls = message_with_tool_calls.tool_calls
        tool_results = handle_tool_calls(tool_calls)

        messages.append(message_with_tool_calls)
        messages.extend(tool_results)
    else:
        done = True

return response.choices[0].message.content
```
The chat method lacks error handling for API failures. If the Azure OpenAI API call fails due to network issues, rate limits, or API errors, the exception will propagate to Gradio causing a poor user experience. Consider wrapping the API call in a try-except block to return a graceful error message to the user.
Suggested change:

```python
try:
    done = False
    while not done:
        response = self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS
        )
        finish_reason = response.choices[0].finish_reason
        if finish_reason == "tool_calls":
            message_with_tool_calls = response.choices[0].message
            tool_calls = message_with_tool_calls.tool_calls
            tool_results = handle_tool_calls(tool_calls)
            messages.append(message_with_tool_calls)
            messages.extend(tool_results)
        else:
            done = True
    return response.choices[0].message.content
except Exception as e:
    # Optional: log the exception for debugging purposes
    print(f"Error during Azure OpenAI chat completion: {e}")
    return "Sorry, I'm having trouble reaching the AI service right now. Please try again later."
```
- Make sure `static/profile.pdf` and `static/summary.txt` exist or the agent will use placeholder text
- The chatbot stays in character as you and prioritizes answering from your provided context
The static directory is referenced in the code but not included in the repository. Users will need to manually create this directory before placing their profile.pdf and summary.txt files. Consider including the static directory with .gitkeep or placeholder files to ensure the directory structure is clear.
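The usual convention for keeping an otherwise-empty directory under version control is an empty `.gitkeep` marker file (a community convention, not a Git feature):

```shell
# Create the static directory and an empty marker so Git tracks it
mkdir -p static
touch static/.gitkeep
```

Committing `static/.gitkeep` makes the expected directory layout visible to anyone cloning the repository.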
```python
done = False
while not done:
    response = self.client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        tools=TOOLS
    )

    finish_reason = response.choices[0].finish_reason

    if finish_reason == "tool_calls":
        message_with_tool_calls = response.choices[0].message
        tool_calls = message_with_tool_calls.tool_calls
        tool_results = handle_tool_calls(tool_calls)

        messages.append(message_with_tool_calls)
        messages.extend(tool_results)
    else:
        done = True
```
The while loop that handles tool calls has no maximum iteration limit. If the LLM continuously returns tool_calls without reaching a stop condition, this could result in an infinite loop, excessive API calls, and high costs. Consider adding a max_iterations counter to prevent runaway execution.
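A sketch of such a cap, with the completion round abstracted as a callable so the guard logic stands alone; `run_with_limit`, `step`, and the cap value are illustrative assumptions, not the PR's code:

```python
MAX_TOOL_ITERATIONS = 5  # illustrative budget; tune for your workload

def run_with_limit(step, max_iterations=MAX_TOOL_ITERATIONS):
    """Run `step()` until it signals completion or the iteration cap is hit.

    `step` stands in for one chat-completion round; it returns True when the
    model produced a final answer (i.e. finish_reason != "tool_calls").
    """
    for i in range(max_iterations):
        if step():
            return i + 1  # number of rounds used, within budget
    raise RuntimeError(f"Tool-call loop exceeded {max_iterations} iterations")
```

In the agent's `chat` method, the `while not done:` loop would become `for _ in range(MAX_TOOL_ITERATIONS):` with the same body, capping cost even if the model never stops requesting tools.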
```python
done = False
while not done:
    response = self.client.chat.completions.create(
        model="gpt-4o-mini",
```
The hardcoded model name "gpt-4o-mini" should be configurable or match the deployment name. Azure OpenAI uses deployment names, not model names directly. This parameter might be unnecessary or could cause confusion if the deployment uses a different model.
Suggested change:

```python
model=os.getenv("AZURE_OPENAI_DEPLOYMENT", "gpt-4o-mini"),
```
3. **Add your data:**
   - Place your resume/LinkedIn PDF as `static/profile.pdf`
   - Create `static/summary.txt` with a brief professional summary
The README instructs users to place static files (profile.pdf and summary.txt) but doesn't mention that these are required for the application to function properly. Consider adding a note about creating the static directory if it doesn't exist, or including placeholder files to make the setup clearer.
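A small startup check could surface this at launch instead of silently falling back to placeholder text. The `missing_data_files` helper is a hypothetical addition (the file names match the README; everything else is ours):

```python
from pathlib import Path

# File names from the README; the helper itself is illustrative
REQUIRED_FILES = ("static/profile.pdf", "static/summary.txt")

def missing_data_files(base=".", required=REQUIRED_FILES):
    """Return the required data files that are absent, as a list of strings."""
    return [name for name in required if not (Path(base) / name).exists()]

missing = missing_data_files()
if missing:
    print(f"Warning: missing {', '.join(missing)}; placeholder text will be used.")
```

Calling this from `main()` before constructing the agent would make the setup requirement explicit.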
```python
import os

def load_linkedin_profile(pdf_path="static/profile.pdf"):
    """Load and extract text from LinkedIn profile PDF."""
    if os.path.exists(pdf_path):
        reader = PdfReader(pdf_path)
        content = ""
        for page in reader.pages:
            text = page.extract_text()
            if text:
                content += text
        return content
```
The load_linkedin_profile function could fail silently if the PDF is corrupted or unreadable. Consider adding error handling around the PDF reading operations to catch PdfReader exceptions and provide a more informative error message.
Suggested change:

```python
from pypdf.errors import PdfReadError
import os

def load_linkedin_profile(pdf_path="static/profile.pdf"):
    """Load and extract text from LinkedIn profile PDF."""
    if os.path.exists(pdf_path):
        try:
            reader = PdfReader(pdf_path)
            content = ""
            for page in reader.pages:
                text = page.extract_text()
                if text:
                    content += text
            # If the PDF was readable but contained no extractable text
            return content or "Profile PDF could not be read or was empty."
        except PdfReadError as e:
            return f"Error reading profile PDF: {e}"
        except Exception:
            # Fallback for any other unexpected error during PDF processing
            return "An unexpected error occurred while reading the profile PDF."
```
```python
def main():
    """Initialize and launch the chat interface."""
    from gradio.chat_interface import ChatInterface

    # TODO: Change this to your actual name
    agent = ConversationAgent(name="Harsh Patel")
```
The name "Harsh Patel" is hardcoded as the default value in multiple places (main.py line 9 and agent.py line 11). This creates a maintenance burden if someone wants to change it. Consider centralizing this configuration, perhaps in a config file or as an environment variable, to avoid needing to update it in multiple locations.
Suggested change:

```python
import os

def main():
    """Initialize and launch the chat interface."""
    from gradio.chat_interface import ChatInterface

    # TODO: Change this to your actual name
    default_name = os.getenv("CHATBOT_DEFAULT_NAME", "Harsh Patel")
    agent = ConversationAgent(name=default_name)
```
Alter-Ego Chatbot - Azure OpenAI & Gradio
Overview
A professional chatbot that represents you on your website using Azure OpenAI's GPT-4o-mini and Gradio interface.
Features
Tech Stack
UI
What's Included
- `pyproject.toml`
- `.env.example`
- `.gitignore` (no secrets or bloat)

Community Contribution
Located in: `1_foundations/community_contributions/alter-ego-gradio-chatbot-usingAzureOpenai`

Total size: 6.38 KiB - lightweight and clean! ✨