9 changes: 9 additions & 0 deletions README.md
@@ -29,4 +29,13 @@ curl -X POST "http://localhost:8000/evaluate_conversation" \
-d '{
"conversation": "Steve: Alright, so we all agree on the idea—an AI-powered restaurant reservation agent. It checks availability, books a table, even suggests options based on user preferences.\nAshwini: Yeah, and it should be able to call or message restaurants that don’t have an online system.\nJose: Love it. So for the agent framework, LlamaIndex seems like a no-brainer. It’s one of the sponsors, and it’s solid for retrieval-augmented generation.\nAshwini: Agreed. Plus, we get points for using sponsor tools. Let’s lock that in.\n(They nod, moving on to the next decision.)\nSteve: Now, the LLM. Gemini’s a sponsor, so we’d benefit from using it. But OpenAI o1 is... well, OpenAI o1. It’s top-tier.\nJose: Yeah, but we’re optimizing for both performance and hackathon perks. Gemini’s not bad—it gives decent results, and it keeps us in good standing with the event judges.\nAshwini: True, but we’re not required to use sponsor tools. If OpenAI’s going to get us better responses, wouldn’t that be worth it?\nSteve: Maybe. But is the improvement enough to justify skipping a sponsor?\n(They pause, unsure. Then, Jose pivots to another key issue.)\nJose: Okay, what about connecting to the web? We need real-time data—restaurant availability, location details, even user reviews.\nAshwini: AgentQL is a sponsor, so it’d be cheaper for us. But I have no clue how tricky it is to implement.\nSteve: Yeah, I haven’t seen many people using it yet. On the other hand, Perplexity API is reliable. More expensive, but we know it works well.\nJose: So do we go for ease-of-use and proven reliability with Perplexity? Or do we save money and earn sponsor points with AgentQL?\nAshwini: That’s the problem. I don’t know how long AgentQL would take to set up. If it’s a pain, we could waste a lot of time.\nSteve: And if we pick OpenAI o1 for the LLM, plus Perplexity for web search, that’s two major non-sponsor choices. Could hurt us.\nJose: But if the quality is better, maybe that’s worth the risk?\nAshwini: We don’t have enough info. We need to check how hard AgentQL is to implement fast.\nSteve: Yeah, and we need to decide if Gemini is “good enough” or if OpenAI o1 is worth breaking from the sponsor incentives.\n(They look at each other, still unsure. The clock is ticking, and they need to make a call soon.)\nJose: Okay, let’s do some quick tests, maybe ask around. We need to settle this now."
}'
```

Get "UI Insights":
```
curl -X POST "http://localhost:8000/ui_insights" \
-H "Content-Type: application/json" \
-d '{
"conversation": "Steve: Alright, so we all agree on the idea—an AI-powered restaurant reservation agent. It checks availability, books a table, even suggests options based on user preferences.\nAshwini: Yeah, and it should be able to call or message restaurants that don’t have an online system.\nJose: Love it. So for the agent framework, LlamaIndex seems like a no-brainer. It’s one of the sponsors, and it’s solid for retrieval-augmented generation.\nAshwini: Agreed. Plus, we get points for using sponsor tools. Let’s lock that in.\n(They nod, moving on to the next decision.)\nSteve: Now, the LLM. Gemini’s a sponsor, so we’d benefit from using it. But OpenAI o1 is... well, OpenAI o1. It’s top-tier.\nJose: Yeah, but we’re optimizing for both performance and hackathon perks. Gemini’s not bad—it gives decent results, and it keeps us in good standing with the event judges.\nAshwini: True, but we’re not required to use sponsor tools. If OpenAI’s going to get us better responses, wouldn’t that be worth it?\nSteve: Maybe. But is the improvement enough to justify skipping a sponsor?\n(They pause, unsure. Then, Jose pivots to another key issue.)\nJose: Okay, what about connecting to the web? We need real-time data—restaurant availability, location details, even user reviews.\nAshwini: AgentQL is a sponsor, so it’d be cheaper for us. But I have no clue how tricky it is to implement.\nSteve: Yeah, I haven’t seen many people using it yet. On the other hand, Perplexity API is reliable. More expensive, but we know it works well.\nJose: So do we go for ease-of-use and proven reliability with Perplexity? Or do we save money and earn sponsor points with AgentQL?\nAshwini: That’s the problem. I don’t know how long AgentQL would take to set up. If it’s a pain, we could waste a lot of time.\nSteve: And if we pick OpenAI o1 for the LLM, plus Perplexity for web search, that’s two major non-sponsor choices. Could hurt us.\nJose: But if the quality is better, maybe that’s worth the risk?\nAshwini: We don’t have enough info. We need to check how hard AgentQL is to implement fast.\nSteve: Yeah, and we need to decide if Gemini is “good enough” or if OpenAI o1 is worth breaking from the sponsor incentives.\n(They look at each other, still unsure. The clock is ticking, and they need to make a call soon.)\nJose: Okay, let’s do some quick tests, maybe ask around. We need to settle this now."
}'
```
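The same request can be made from Python. A minimal sketch using only the standard library; the endpoint URL and payload shape mirror the curl examples above, and the conversation text is abbreviated here for readability:

```python
import json

# Build the same JSON body the curl examples above send.
conversation = (
    "Steve: Alright, so we all agree on the idea—an AI-powered restaurant "
    "reservation agent.\n"
    "Ashwini: Yeah, and it should be able to call or message restaurants "
    "that don't have an online system."
)
payload = json.dumps({"conversation": conversation})

# To POST against a running server, uncomment:
# from urllib.request import Request, urlopen
# req = Request(
#     "http://localhost:8000/ui_insights",
#     data=payload.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# print(urlopen(req).read().decode("utf-8"))
```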
Binary file modified src/conflict_resolution_bot/__pycache__/app.cpython-312.pyc
Binary file not shown.
Binary file modified src/conflict_resolution_bot/__pycache__/evaluate.cpython-312.pyc
Binary file not shown.
Binary file not shown.
Binary file modified src/conflict_resolution_bot/__pycache__/main.cpython-312.pyc
Binary file not shown.
Binary file modified src/conflict_resolution_bot/__pycache__/objective.cpython-312.pyc
Binary file not shown.
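The modified `.pyc` files above suggest compiled bytecode is being tracked in the repository. A common convention (an assumption about this project's setup, not something the PR does) is to exclude it:

```
# .gitignore additions (sketch)
__pycache__/
*.pyc
```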
96 changes: 96 additions & 0 deletions src/conflict_resolution_bot/main.py
@@ -4,6 +4,8 @@
from typing import List
from pydantic import BaseModel
import weave
import json
from litellm import completion

# Import the existing app
from app import app
@@ -24,6 +26,10 @@ class CombinedOutput(BaseModel):
Evaluate: list
knowledge: list

class UIInsightsOutput(BaseModel):
    Objective: list
    insights: str

weave.init('hackathon-example')

#
@@ -69,6 +75,96 @@ def combined_insights(input_data: CombinedInput):
"knowledge": knowledge_data
}

#
# UI Insights Endpoint
#
@app.post("/ui_insights", response_model=UIInsightsOutput)
def ui_insights(input_data: CombinedInput):
    """
    This endpoint:
    1) Calls 'generate_objective_logic'
    2) Calls 'evaluate_conversation_logic'
    3) Feeds both outputs into 'knowledge_logic'
    4) Makes an additional SambaNova call to summarise the results into a single
       JSON object whose 'insights' value is a string detailing the most
       prioritised ideas.
    5) Returns a JSON object with the keys "Objective" and "insights".
    """
    # 'input_data.conversation' arrives in the POST request body.

# 1) Objective
objective_data = generate_objective_logic(
ObjectiveInput(conversation=input_data.conversation)
)

# 2) Evaluate
evaluate_data = evaluate_conversation_logic(
EvaluateInput(conversation=input_data.conversation)
)

# 3) Build knowledge questions from the objective & evaluate outputs
questions_list: List[str] = []
for obj_item in objective_data:
if "Objective" in obj_item:
questions_list.append(obj_item["Objective"])
for eval_item in evaluate_data:
if "Information_request" in eval_item:
questions_list.append(eval_item["Information_request"])

# Get knowledge data based on collected questions
knowledge_data = knowledge_logic(KnowledgeInput(questions=questions_list))

# 4) Summarise the combined data for frontend insights.
    system_prompt = """
    You are an assistant tasked with summarising details from a discussion.

    Identify and prioritise the key ideas and insights for each choice raised in the discussion.

    Output a valid JSON object with a single key "insights" whose value is a string summarising the possible choices and their respective insights.

    The output:

    - MUST be valid JSON conforming to the schema below:
      {
          "insights": "some string"
      }
    - MUST NOT include additional commentary or formatting.
    """

# Aggregate all extracted data into a string for summarisation.
aggregated_content = json.dumps({
"Objective": objective_data,
"evaluations": evaluate_data,
"knowledge": knowledge_data
})

insights_response = completion(
model="sambanova/Meta-Llama-3.1-8B-Instruct",
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": aggregated_content}
],
response_format={"type": "json_object"}
)

try:
insights_content = insights_response["choices"][0]["message"]["content"]
insights_json = json.loads(insights_content)
insights = insights_json.get("insights", "")
except Exception as e:
raise ValueError("Failed to parse insights summary output as valid JSON object.") from e

# 5) Return final JSON for the frontend
return {
"Objective": objective_data,
"insights": insights
}
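Models do not always honour a requested JSON shape exactly, so the parsing step above can be made more defensive. A sketch with a hypothetical `extract_insights` helper (not part of this PR) that tolerates both a single object and a list of objects:

```python
import json

def extract_insights(raw_content: str) -> str:
    """Parse the model's JSON output, tolerating either a single object
    with an 'insights' key or a list of {'Objective', 'insights'} objects."""
    data = json.loads(raw_content)
    if isinstance(data, dict):
        return data.get("insights", "")
    if isinstance(data, list):
        # Concatenate the insights of each object in the list.
        return " ".join(
            item.get("insights", "") for item in data if isinstance(item, dict)
        )
    raise ValueError("Unexpected JSON shape in insights response")
```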



# If you run this file directly:
if __name__ == "__main__":
2 changes: 1 addition & 1 deletion src/conflict_resolution_bot/objective.py
@@ -63,7 +63,7 @@ def generate_objective_logic(input_data: ObjectiveInput) -> List[dict]:
messages=messages,
response_format={"type": "json_object"}
)

    # Verify the API key is configured without printing the secret.
    assert os.environ.get("SAMBANOVA_API_KEY"), "SAMBANOVA_API_KEY is not set"
try:
content = response["choices"][0]["message"]["content"]
objectives = json.loads(content)