Updating the tutorial to use the V5 API #3

Open · wants to merge 3 commits into base: main
8 changes: 4 additions & 4 deletions README.md
@@ -1,6 +1,6 @@
# Humanloop API Quickstart - Python example app

This is an example app that shows you how to use the Humanloop API in a GPT-3 app. It uses the [Flask](https://flask.palletsprojects.com/en/2.2.x/) web framework and [Humanloop](https://humanloop.com) for data logging and model improvement. Check out the tutorial or follow the instructions below to get set up.
This is an example app that shows you how to use the Humanloop API in a GPT-4 app. It uses the [Flask](https://flask.palletsprojects.com/en/2.3.x/) web framework and [Humanloop](https://humanloop.com) for data logging and model improvement. Check out the tutorial or follow the instructions below to get set up.

## Setup

@@ -30,15 +30,15 @@ This is an example app that shows you how to use the Humanloop API in a GPT-3 ap
6. Make a copy of the example environment variables file

```bash
$ cp .env.example .env
$ cp .example.env .env
```

7. Add your [OpenAI API key](https://beta.openai.com/account/api-keys) and [Humanloop API key](https://app.humanloop.com/llama/settings) to the newly created `.env` file
7. Add your [OpenAI API key](https://platform.openai.com/api-keys) and [Humanloop API key](https://app.humanloop.com/account/api-keys) to the newly created `.env` file

8. Run the app

```bash
$ flask --debug run
```

You should now be able to access the app at [http://localhost:5000](http://localhost:5000)! For the full context behind this example app, check out the [tutorial](https://beta.openai.com/docs/quickstart).
You should now be able to access the app at [http://localhost:5000](http://localhost:5000)! For the full context behind this example app, check out the [tutorial](https://platform.openai.com/docs/quickstart).
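Step 7 above asks you to populate the newly created `.env` file. A minimal sketch of what it might contain — the key names match the `os.getenv` calls in `app.py`, but the values shown are placeholders, not real keys:

```shell
# .env — keep this file out of version control
OPENAI_API_KEY=sk-your-openai-key-here
HUMANLOOP_API_KEY=hl-your-humanloop-key-here
```

`load_dotenv()` reads this file at startup so both keys are available via `os.getenv`.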
98 changes: 55 additions & 43 deletions app.py
@@ -2,99 +2,111 @@

from humanloop import Humanloop
from flask import Flask, redirect, render_template, request, url_for
from dotenv import load_dotenv


load_dotenv()
app = Flask(__name__)

HUMANLOOP_API_KEY = os.getenv("HUMANLOOP_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

humanloop = Humanloop(
api_key=HUMANLOOP_API_KEY,
)
hl = Humanloop(api_key=HUMANLOOP_API_KEY)


@app.route("/", methods=["GET"])
def index():
return render_template(
"index.html",
result=request.args.get("result"),
data_id=request.args.get("data_id"),
feedback=request.args.get("feedback"),
copied=request.args.get("copied", False),
log_id=request.args.get("log_id"),
evaluation=request.args.get("evaluation"),
)


@app.route("/get-question", methods=["POST"])
def get_question():
# Make the request to GPT-3
# Make the request to GPT-4
expert = request.form["Expert"]
topic = request.form["Topic"]

# hl.complete automatically logs the data to your project.
complete_response = humanloop.complete_deployed(
project="learn-anything",
# hl.prompts.call automatically logs the data to your prompt.
call_response = hl.prompts.call(
path="learn-anything",
inputs={"expert": expert, "topic": topic},
    # If you haven't previously created a prompt, you can uncomment the prompt template below
# prompt={
# "template": [
# {
# "role": "system",
# "content": "You are {{expert}}. Write a joke about {{topic}}.",
# }
# ],
# "model": "gpt-4",
# },
provider_api_keys={"openai": OPENAI_API_KEY},
)
data_id = complete_response.body["data"][0]["id"]
result = complete_response.body["data"][0]["output"]

print("data_id from completion: ", data_id)
return redirect(url_for("index", result=result, data_id=data_id))
# The log_id is the ID of the log on Humanloop that was created by the call
log_id = call_response.id
log_response = call_response.logs[0]

return redirect(url_for("index", result=log_response.output, log_id=log_id))
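The new `hl.prompts.call` flow above returns a response whose `id` is the Humanloop log ID and whose `logs[0].output` carries the generated text. A minimal sketch of that extraction, using a `SimpleNamespace` stand-in for the SDK response object (the attribute names mirror the diff; this is not the real `humanloop` client, and `extract_result` is a hypothetical helper):

```python
from types import SimpleNamespace


def extract_result(call_response):
    """Pull the generated text and log ID out of a prompts.call-style response."""
    log_id = call_response.id              # ID of the log created on Humanloop
    output = call_response.logs[0].output  # text of the first generation
    return output, log_id


# Stub mimicking the response shape the route handler relies on.
stub = SimpleNamespace(
    id="log_123",
    logs=[SimpleNamespace(output="Why did the neuron cross the road?")],
)

result, log_id = extract_result(stub)
print(result)  # → Why did the neuron cross the road?
print(log_id)  # → log_123
```

The route then forwards `result` and `log_id` to the index page as query parameters so the feedback buttons can reference the same log.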


@app.route("/actions/thumbs-up", methods=["POST"])
def thumbs_up():
data_id = request.args.get("data_id")
log_id = request.args.get("log_id")

# We fetch the log from Humanloop to find which prompt it's associated with
log = hl.logs.get(id=log_id)
prompt_id = log.prompt.id

# We fetch the prompt to find which evaluator it's associated with
prompt = hl.prompts.get(id=prompt_id)

# Send rating feedback to Humanloop
humanloop.feedback(type="rating", value="good", data_id=data_id)
print(f"Recorded 👍 feedback to datapoint: {data_id}")
    # Note: prompt.evaluators will be empty (and this lookup will fail) if you haven't assigned a monitoring evaluator to your prompt
evaluator_id = prompt.evaluators[0].version_reference.file.id

# Send rating evaluation to Humanloop using the evaluator_id and log_id
hl.evaluators.log(parent_id=log_id, id=evaluator_id, judgment="2")
print(f"Recorded 👍 evaluation to log: {log_id}")

return redirect(
url_for(
"index",
result=request.args.get("result"),
data_id=data_id,
feedback="👍",
log_id=log_id,
evaluation="👍",
copied=request.args.get("copied", False),
)
)
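The two feedback routes encode 👍 as judgment `"2"` and 👎 as judgment `"1"` when calling `hl.evaluators.log`. That mapping can be factored into a small helper — a sketch only; the judgment strings come straight from the diff, but the helper name is ours, not part of the app:

```python
def rating_to_judgment(rating: str) -> str:
    """Map a UI rating to the judgment string passed to hl.evaluators.log.

    Per the calls in app.py: "2" means thumbs-up, "1" means thumbs-down.
    """
    judgments = {"👍": "2", "👎": "1"}
    if rating not in judgments:
        raise ValueError(f"unknown rating: {rating!r}")
    return judgments[rating]


print(rating_to_judgment("👍"))  # → 2
print(rating_to_judgment("👎"))  # → 1
```

Centralizing the mapping would let both route handlers share one code path instead of hard-coding the judgment string in each.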


@app.route("/actions/thumbs-down", methods=["POST"])
def thumbs_down():
data_id = request.args.get("data_id")

# Send rating feedback to Humanloop
humanloop.feedback(type="rating", value="bad", data_id=data_id)
print(f"Recorded 👎 feedback to datapoint: {data_id}")
log_id = request.args.get("log_id")

return redirect(
url_for(
"index",
result=request.args.get("result"),
data_id=data_id,
feedback="👎",
copied=request.args.get("copied", False),
)
)
# We fetch the log from Humanloop to find which prompt it's associated with
log = hl.logs.get(id=log_id)
prompt_id = log.prompt.id

# We fetch the prompt to find which evaluator it's associated with
prompt = hl.prompts.get(id=prompt_id)

@app.route("/actions/copy", methods=["POST"])
def feedback():
data_id = request.args.get("data_id")
    # Note: prompt.evaluators will be empty (and this lookup will fail) if you haven't assigned a monitoring evaluator to your prompt
evaluator_id = prompt.evaluators[0].version_reference.file.id

# Send implicit feedback to Humanloop
humanloop.feedback(type="action", value="copy", data_id=data_id)
print(f"Recorded implicit feedback that user copied to datapoint: {data_id}")
# Send rating evaluation to Humanloop using the evaluator_id and log_id
hl.evaluators.log(parent_id=log_id, id=evaluator_id, judgment="1")
print(f"Recorded 👎 evaluation to log: {log_id}")

return redirect(
url_for(
"index",
result=request.args.get("result"),
data_id=data_id,
feedback=request.args.get("feedback"),
copied=True,
log_id=log_id,
evaluation="👎",
copied=request.args.get("copied", False),
)
)
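Both handlers pass state back to the index page as query parameters via `url_for`, and `request.args` reads them back on the next request. The round trip is plain URL encoding, which the standard library can illustrate — a sketch of the mechanism, not Flask's own implementation:

```python
from urllib.parse import urlencode, parse_qs

# State the thumbs-down handler forwards back to the index route.
params = {"result": "a joke", "log_id": "log_123", "evaluation": "👎"}
query = urlencode(params)  # percent-encodes values, including the emoji

# On the next request, Flask's request.args performs the reverse parse.
decoded = {key: values[0] for key, values in parse_qs(query).items()}
print(decoded["evaluation"])  # → 👎
```

This is also why non-ASCII values like the emoji survive the redirect: both sides agree on UTF-8 percent-encoding.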
6 changes: 3 additions & 3 deletions requirements.txt
@@ -1,4 +1,4 @@
Flask==2.3.3
humanloop==0.5.6
openai==0.27.8
python-dotenv==1.0.0
humanloop==0.8.0b6
openai==1.42.0
python-dotenv==1.0.1
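`python-dotenv`, pinned above, is what makes the `load_dotenv()` call in `app.py` work. Its core behavior — reading `KEY=VALUE` lines into the process environment — can be sketched in a few lines of stdlib Python. This is illustrative only; the real library also handles quoting, variable interpolation, and more:

```python
import os


def load_env_lines(lines):
    """Naive dotenv loader: set KEY=VALUE pairs into os.environ.

    Existing environment variables win (setdefault), matching
    python-dotenv's default override=False behavior.
    """
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())


load_env_lines(["# a comment", "", "EXAMPLE_DEMO_KEY=abc123"])
print(os.environ["EXAMPLE_DEMO_KEY"])  # → abc123
```

In the app itself, `load_dotenv()` runs before the `os.getenv` calls, so both API keys are in place when the `Humanloop` client is constructed.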
41 changes: 16 additions & 25 deletions templates/index.html
@@ -6,56 +6,47 @@
href="{{ url_for('static', filename='learning.png') }}"
/>
<link rel="stylesheet" href="{{ url_for('static', filename='main.css') }}" />
<script>
function copyToClipboard() {
const element = document.getElementById("result");
navigator.clipboard.writeText(element.textContent);
}
</script>
</head>

<body class="column">
<a href="/" class="column">
<img src="{{ url_for('static', filename='learning.png') }}" width=100 />
<img src="{{ url_for('static', filename='learning.png') }}" width="100" />
<h3>Learn anything from anyone</h3>
</a>
<form class="input-form" action="/get-question" method="post">
<input type="text" name="Expert" placeholder="Enter the name of an expert" required />
<input
type="text"
name="Expert"
placeholder="Enter the name of an expert"
required
/>
<input type="text" name="Topic" placeholder="Enter a topic" required />
<input type="submit" class="hover" value="Teach me!" />
</form>
{% if result %}
<div class="result-wrapper">
<div class="result" style="width: 500px" id="result">{{ result }}</div>
</div>
<div class="evaluation">
<form
class="submit-form"
action="/actions/copy?data_id={{ data_id }}&result={{ result }}&feedback={{ feedback }}"
action="/actions/thumbs-up?log_id={{ log_id }}&result={{ result }}"
method="post"
>
<button
type="submit"
class="copy hover {{ 'submitted' if copied == 'True' }}"
title="Copy to clipboard"
onclick="copyToClipboard()"
class="hover {{ 'submitted' if evaluation == '👍' }}"
>
📋
</button>
</form>
</div>
<div class="feedback">
<form
action="/actions/thumbs-up?data_id={{ data_id }}&result={{ result }}&copied={{ copied }}"
method="post"
>
<button type="submit" class="hover {{ 'submitted' if feedback == '👍' }}">
👍
</button>
</form>
<form
action="/actions/thumbs-down?data_id={{ data_id }}&result={{ result }}&copied={{ copied }}"
action="/actions/thumbs-down?log_id={{ log_id }}&result={{ result }}"
method="post"
>
<button type="submit" class="hover {{ 'submitted' if feedback == '👎' }}">
<button
type="submit"
class="hover {{ 'submitted' if evaluation == '👎' }}"
>
👎
</button>
</form>
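The template's `{{ 'submitted' if evaluation == '👎' }}` expressions toggle a CSS class on whichever button matches the recorded evaluation; Jinja renders the false branch as an empty string. A plain-Python sketch of that conditional class construction (a stand-in for the template engine, with a hypothetical helper name, not actual Jinja rendering):

```python
def button_class(evaluation, target):
    """Mimic class="hover {{ 'submitted' if evaluation == target }}"."""
    suffix = "submitted" if evaluation == target else ""
    return ("hover " + suffix).strip()


print(button_class("👎", "👎"))  # → hover submitted
print(button_class("👍", "👎"))  # → hover
```

The `submitted` class is what visually highlights the button the user already clicked after the redirect.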