feat(rag): secret vault injection #64
base: main
@@ -170,3 +170,5 @@ grafana/runtime-data/*
prometheus/data/*
!prometheus/data/.gitkeep
*.swp
Review comment: Does this differ significantly in behaviour from the original DeepSeek 14B docker compose that we had before? If the behaviour is better (which I assume, given your improvements), would it make sense to merge both into a single DeepSeek 14B file? That would avoid having to maintain multiple source files after this.

Reply: I committed the new one so that we can discuss the differences and see whether we want to integrate the new points. wdyt?

Reply: I don't really know. I have a discussion with ecosystem and product to decide which models we include by default. I also wonder whether the worker model preserves privacy, and whether that is something we are willing to live with.
@@ -0,0 +1,54 @@
services:
  deepseek_14b_gpu:
    build:
      context: .
      dockerfile: docker/vllm.Dockerfile
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
              driver: nvidia
    ipc: host
    depends_on:
      etcd:
        condition: service_healthy
      watt_tool_gpu:
        condition: service_healthy
    command:
      - --model
      - deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
      - --max-model-len
      - "10000"
      - --device
      - cuda
      - --gpu-memory-utilization
      - "0.45"
    env_file:
      - .env
    environment:
      SVC_HOST: "deepseek_14b_gpu"
      SVC_PORT: "8000"
      ETCD_HOST: "etcd"
      ETCD_PORT: "2379"
      TOOL_SUPPORT: true
      MODEL_ROLE: "reasoning"
    networks:
      - backend_net
    volumes:
      - type: volume
        source: hugging_face_models
        target: /root/.cache/huggingface
        volume: {}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      retries: 3
      start_period: 60s
      timeout: 10s

volumes:
  hugging_face_models:

networks:
  backend_net:
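The SVC_HOST/SVC_PORT, ETCD_HOST/ETCD_PORT, and MODEL_ROLE variables suggest that each vLLM container announces itself in etcd so that clients can discover it. The registration code itself is not part of this diff; the following is only a minimal sketch of what such a step could look like, assuming the etcd3 Python client and a hypothetical /services/ key prefix.

```python
import json
import os

import etcd3  # assumption: the etcd3 Python client; the real registration code is not in this diff


def register_service() -> None:
    """Publish this model service under a hypothetical /services/ key prefix in etcd."""
    client = etcd3.client(
        host=os.environ.get("ETCD_HOST", "etcd"),
        port=int(os.environ.get("ETCD_PORT", "2379")),
    )
    payload = {
        "host": os.environ["SVC_HOST"],
        "port": int(os.environ["SVC_PORT"]),
        "role": os.environ.get("MODEL_ROLE", "worker"),
        "tool_support": os.environ.get("TOOL_SUPPORT", "false"),
    }
    # The key layout is an assumption made for illustration only.
    client.put(f"/services/{os.environ['SVC_HOST']}", json.dumps(payload))


if __name__ == "__main__":
    register_service()
```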
Review comment: How do we use this?

Reply: From the client, I needed a way to distinguish between "this is a reasoning model" and "this is a worker model". Rather than the client needing knowledge of the model itself, as a developer I simply want a "tag" that informs me, so that I can automatically select the kind of model I want. wdyt?

Reply: That sounds perfect, I agree. As I comment in the review, I think we should extend it to all models. Potentially, even considering it an
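To make the intent of the MODEL_ROLE tag concrete, a client could filter the registered services by role instead of hard-coding model names. This is only an illustrative sketch, assuming the same etcd3 client and hypothetical /services/ key layout as in the registration sketch above; the actual discovery API is not shown in this PR.

```python
import json
import os
from typing import Optional

import etcd3  # assumption: same client and /services/ key layout as in the registration sketch


def pick_model(role: str) -> Optional[dict]:
    """Return the first registered service whose role tag matches, or None."""
    client = etcd3.client(
        host=os.environ.get("ETCD_HOST", "etcd"),
        port=int(os.environ.get("ETCD_PORT", "2379")),
    )
    for value, _metadata in client.get_prefix("/services/"):
        service = json.loads(value)
        if service.get("role") == role:
            return service
    return None


reasoning = pick_model("reasoning")  # e.g. the DeepSeek 14B or Llama 3.2 service
worker = pick_model("worker")        # e.g. the watt-tool-8B service
```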
@@ -0,0 +1,58 @@
services:
  llama_32_tool_gpu:
    build:
      context: .
      dockerfile: docker/vllm.Dockerfile
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
              driver: nvidia
    ipc: host
    depends_on:
      etcd:
        condition: service_healthy
      watt_tool_gpu:
        condition: service_healthy
    command:
      - --model
      - meta-llama/Llama-3.2-1B-Instruct
      - --max-model-len
      - "10000"
      - --device
      - cuda
      - --gpu-memory-utilization
      - "0.45"
      - --enable-auto-tool-choice
      - --tool-call-parser
      - llama3_json
      - --chat-template
      - /tmp/tool_chat_template.jinja
    env_file:
      - .env
    environment:
      SVC_HOST: "llama_32_tool_gpu"
      SVC_PORT: "8000"
      ETCD_HOST: "etcd"
      ETCD_PORT: "2379"
      TOOL_SUPPORT: true
      MODEL_ROLE: "reasoning"
    networks:
      - backend_net
    volumes:
      - type: volume
        source: hugging_face_models
        target: /root/.cache/huggingface
        volume: {}
      - type: bind
        source: $PWD/docker/compose/tool_chat_template_llama3.2_json.jinja
        target: /tmp/tool_chat_template.jinja
        bind:
          create_host_path: true

volumes:
  hugging_face_models:

networks:
  backend_net:
@@ -0,0 +1,62 @@
services:
  watt_tool_gpu:
    build:
      context: .
      dockerfile: docker/vllm.Dockerfile
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
              driver: nvidia
    ipc: host
    depends_on:
      etcd:
        condition: service_healthy
    command:
      - --model
      - watt-ai/watt-tool-8B
      - --max-model-len
      - "10000"
      - --device
      - cuda
      - --gpu-memory-utilization
      - "0.45"
      - --enable-auto-tool-choice
      - --tool-call-parser
      - llama3_json
      - --chat-template
      - /tmp/tool_chat_template.jinja
    env_file:
      - .env
    environment:
      SVC_HOST: "watt_tool_gpu"
      SVC_PORT: "8000"
      ETCD_HOST: "etcd"
      ETCD_PORT: "2379"
      TOOL_SUPPORT: true
      MODEL_ROLE: "worker"
    networks:
      - backend_net
    volumes:
      - type: volume
        source: hugging_face_models
        target: /root/.cache/huggingface
        volume: {}
      - type: bind
        source: $PWD/docker/compose/tool_chat_template_llama3.1_json.jinja
        target: /tmp/tool_chat_template.jinja
        bind:
          create_host_path: true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      retries: 3
      start_period: 60s
      timeout: 10s

volumes:
  hugging_face_models:

networks:
  backend_net:
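Both tool-enabled services start vLLM with --enable-auto-tool-choice and the llama3_json parser, so they expose an OpenAI-compatible /v1/chat/completions endpoint that can emit structured tool calls. The snippet below is only a usage sketch, assuming the official openai Python client and that port 8000 of the watt_tool_gpu container is reachable from the caller (the service only joins backend_net); the get_secret tool is a hypothetical example, not something defined in this PR.

```python
from openai import OpenAI  # assumption: the openai>=1.x client is available on the caller

# Base URL assumes the caller runs on backend_net and can reach the container directly;
# adjust the host or add a port mapping for other setups.
client = OpenAI(base_url="http://watt_tool_gpu:8000/v1", api_key="not-needed")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_secret",  # hypothetical tool for illustration, not part of this PR
            "description": "Fetch a named secret from the vault.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="watt-ai/watt-tool-8B",
    messages=[{"role": "user", "content": "Fetch the secret called db_password."}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```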
@@ -0,0 +1 @@
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|><think>\n'}}{% endif %}
@@ -0,0 +1 @@
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
@@ -0,0 +1,120 @@
{{- bos_token }}
{%- if custom_tools is defined %}
{%- set tools = custom_tools %}
{%- endif %}
{%- if not tools_in_user_message is defined %}
{#- Llama 3.1 doesn't pass all tests if the tools are in the system prompt #}
{%- set tools_in_user_message = true %}
{%- endif %}
{%- if not date_string is defined %}
{%- if strftime_now is defined %}
{%- set date_string = strftime_now("%d %b %Y") %}
{%- else %}
{%- set date_string = "26 Jul 2024" %}
{%- endif %}
{%- endif %}
{%- if not tools is defined %}
{%- set tools = none %}
{%- endif %}

{#- This block extracts the system message, so we can slot it into the right place. #}
{%- if messages[0]['role'] == 'system' %}
{%- if messages[0]['content'] is string %}
{%- set system_message = messages[0]['content']|trim %}
{%- else %}
{%- set system_message = messages[0]['content'][0]['text']|trim %}
{%- endif %}
{%- set messages = messages[1:] %}
{%- else %}
{%- if tools is not none %}
{%- set system_message = "You are a helpful assistant with tool calling capabilities. Only reply with a tool call if the function exists in the library provided by the user. If it doesn't exist, just reply directly in natural language. When you receive a tool call response, use the output to format an answer to the original user question." %}
{%- else %}
{%- set system_message = "" %}
{%- endif %}
{%- endif %}

{#- System message #}
{{- "<|start_header_id|>system<|end_header_id|>\n\n" }}
{%- if tools is not none %}
{{- "Environment: ipython\n" }}
{%- endif %}
{{- "Cutting Knowledge Date: December 2023\n" }}
{{- "Today Date: " + date_string + "\n\n" }}
{%- if tools is not none and not tools_in_user_message %}
{{- "You have access to the following functions. To call a function, please respond with JSON for a function call. " }}
{{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. ' }}
{{- "Do not use variables.\n\n" }}
{%- for t in tools %}
{{- t | tojson(indent=4) }}
{{- "\n\n" }}
{%- endfor %}
{%- endif %}
{{- system_message }}
{{- "<|eot_id|>" }}

{#- Custom tools are passed in a user message with some extra guidance #}
{%- if tools_in_user_message and not tools is none %}
{#- Extract the first user message so we can plug it in here #}
{%- if messages | length != 0 %}
{%- if messages[0]['content'] is string %}
{%- set first_user_message = messages[0]['content']|trim %}
{%- else %}
{%- set first_user_message = messages[0]['content'] | selectattr('type', 'equalto', 'text') | map(attribute='text') | map('trim') | join('\n') %}
{%- endif %}
{%- set messages = messages[1:] %}
{%- else %}
{{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
{%- endif %}
{{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}
{{- "Given the following functions, please respond with a JSON for a function call " }}
{{- "with its proper arguments that best answers the given prompt.\n\n" }}
{{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. ' }}
{{- "Do not use variables.\n\n" }}
{%- for t in tools %}
{{- t | tojson(indent=4) }}
{{- "\n\n" }}
{%- endfor %}
{{- first_user_message + "<|eot_id|>"}}
{%- endif %}

{%- for message in messages %}
{%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
{{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' }}
{%- if message['content'] is string %}
{{- message['content'] | trim}}
{%- else %}
{%- for content in message['content'] %}
{%- if content['type'] == 'text' %}
{{- content['text'] | trim }}
{%- endif %}
{%- endfor %}
{%- endif %}
{{- '<|eot_id|>' }}
{%- elif 'tool_calls' in message %}
{%- if not message.tool_calls|length == 1 %}
{{- raise_exception("This model only supports single tool-calls at once!") }}
{%- endif %}
{%- set tool_call = message.tool_calls[0].function %}
{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
{{- '{"name": "' + tool_call.name + '", ' }}
{{- '"parameters": ' }}
{{- tool_call.arguments | tojson }}
{{- "}" }}
{{- "<|eot_id|>" }}
{%- elif message.role == "tool" or message.role == "ipython" %}
{{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}
{%- if message.content is string %}
{{- { "output": message.content } | tojson }}
{%- else %}
{%- for content in message['content'] %}
{%- if content['type'] == 'text' %}
{{- { "output": content['text'] } | tojson }}
{%- endif %}
{%- endfor %}
{%- endif %}
{{- "<|eot_id|>" }}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
{%- endif %}
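As a quick sanity check of how this template turns a tool call into the {"name": ..., "parameters": ...} JSON that the llama3_json parser expects, it can be rendered offline with a tokenizer. This is only a sketch, assuming the transformers apply_chat_template API, that this is the file bound at docker/compose/tool_chat_template_llama3.1_json.jinja, and the same hypothetical get_secret tool as above.

```python
from transformers import AutoTokenizer  # assumption: a transformers version with chat-template tool support

# The template path comes from the bind mount above; the tokenizer model is one of the
# models served here and is only used to render the prompt offline.
with open("docker/compose/tool_chat_template_llama3.1_json.jinja") as fh:
    template = fh.read()

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_secret",  # hypothetical function, mirroring the earlier sketch
            "description": "Fetch a named secret from the vault.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        },
    }
]
messages = [{"role": "user", "content": "Fetch the secret called db_password."}]

# Render the prompt exactly as vLLM would before generation; the model should answer
# with a single {"name": ..., "parameters": ...} JSON object that llama3_json can parse.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    chat_template=template,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```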