feat: make model metrics endpoints configurable #1000
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: nayihz. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
I have some doubts about adding additional fields to InferencePool. cc @kfswain @ahg-g @robscott @danehans @elevran

/hold for others to comment.
Force-pushed from ffad486 to f57478d (compare)
@nirrozenbaum, thanks for your advice. I think you are right.
@nayihz I would start with command-line args with default values (the existing ones).
Force-pushed from f57478d to 9fa17f7 (compare)
@nayihz @nirrozenbaum this introduces a fixed endpoint for all model servers in the pool. Compare Prometheus-style per-pod annotations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
```
@elevran we already have a fixed endpoint, so this PR is not introducing it :). The intention was to make that endpoint configurable.
/unhold
Force-pushed from 9fa17f7 to 7466a28 (compare)
cmd/epp/runner/runner.go (Outdated)

```diff
@@ -110,6 +110,9 @@ var (
 		"vllm:lora_requests_info",
 		"Prometheus metric for the LoRA info metrics (must be in vLLM label format).")
 
+	modelServerMetricsPort = flag.Int("modelServerMetricsPort", 0, "Port to scrape metrics from pods")
```
metrics port default value is 0?
The default metrics port should be equal to pool.Spec.TargetPortNumber if the user did not specify a port number.
Can you please explain (in the flag description) that it defaults to the TargetPortNumber on the pool?

Btw, perhaps those two parameters are candidates to be added to the InferencePool API, but we should leave this for later.
I wouldn't add them to the InferencePool API because it implies that all pods in the pool necessarily expose metrics on the same path + port. While that's what the reference implementation does today, the current API doesn't restrict it. A different implementation may have each pod specify its own metrics path and port via Prometheus-format annotations on the pod, and the EPP could store that info when reconciling the pod and use it for scraping. Leaving this outside of the CRD keeps things more flexible.
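For illustration, a minimal sketch of that annotation-driven alternative, assuming the EPP reads Prometheus-style annotations at pod reconciliation time. The helper name and the fallback behavior are hypothetical, not part of this PR:

```go
package scrape

import (
	corev1 "k8s.io/api/core/v1"
)

// scrapeTargetFromPod derives a pod's metrics port and path from
// Prometheus-style annotations, falling back to pool-wide defaults
// when a pod doesn't set them. Helper name and fallbacks are
// illustrative only.
func scrapeTargetFromPod(pod *corev1.Pod, defaultPort, defaultPath string) (port, path string) {
	port, path = defaultPort, defaultPath
	ann := pod.GetAnnotations()
	if v, ok := ann["prometheus.io/port"]; ok {
		port = v
	}
	if v, ok := ann["prometheus.io/path"]; ok {
		path = v
	}
	return port, path
}
```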
I mean, having it as a flag is just as restrictive as having it on the pool; in fact, if we ever make the EPP multi-pool, it is more restrictive. But I agree that this is certainly not very important to get into now.
> implies that all pods in the pool are necessarily exposing metrics on the same path + port.

Similar assumptions are often made. We make the assumption that the serving port is the same across the pool also.
> Can you please explain (in the flag description) that it defaults to the TargetPortNumber on the pool?

Added some explanations to the usage string of the -modelServerMetricsPort flag.
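A minimal sketch of the defaulting behavior discussed above, assuming the flag keeps its zero default and the pool's TargetPortNumber is used as the fallback. The resolve helper and the example value 8000 are hypothetical:

```go
package main

import (
	"flag"
	"fmt"
)

// The -modelServerMetricsPort flag name matches the PR; the defaulting
// helper below and the hard-coded pool port are illustrative only.
var modelServerMetricsPort = flag.Int("modelServerMetricsPort", 0,
	"Port to scrape metrics from pods; if 0, defaults to the pool's TargetPortNumber")

// resolveMetricsPort falls back to the InferencePool's TargetPortNumber
// when the flag is left at its zero value.
func resolveMetricsPort(flagPort int, targetPortNumber int32) int32 {
	if flagPort == 0 {
		return targetPortNumber
	}
	return int32(flagPort)
}

func main() {
	flag.Parse()
	// Assume pool.Spec.TargetPortNumber resolved to 8000 for illustration.
	port := resolveMetricsPort(*modelServerMetricsPort, 8000)
	fmt.Println("scraping metrics on port", port)
}
```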
pkg/epp/backend/metrics/metrics.go (Outdated)

```diff
-	// TODO(https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/16): Consume this from InferencePool config.
-	url := "http://" + pod.Address + ":" + strconv.Itoa(int(port)) + "/metrics"
 
+func (p *PodMetricsClientImpl) FetchMetrics(ctx context.Context, pod *backend.Pod, existing *MetricsState, url string) (*MetricsState, error) {
```
Seems to me that `GetMetricEndpoint` and `FetchMetrics` calls always come together (one must calculate the URL before calling `FetchMetrics`). Does it make sense to make `GetMetricEndpoint` private, remove the url arg from `FetchMetrics`, and get the URL inside `FetchMetrics` (for example in the first line of the function)? Then `GetMetricEndpoint` can also be removed from the interface.
Done
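For illustration, a sketch of the suggested shape with simplified stand-in types: the real structs and Prometheus parsing live in the repo's backend and metrics packages; only the private `getMetricEndpoint` and the dropped url argument reflect the suggestion itself.

```go
package metrics

import (
	"context"
	"net/http"
	"strconv"
)

// Stand-in types; the real Pod, MetricsState, and client live in the
// repo's backend and metrics packages.
type Pod struct{ Address string }
type MetricsState struct{}

type PodMetricsClientImpl struct {
	MetricsPort int32  // resolved from the flag or the pool's TargetPortNumber
	MetricsPath string // e.g. "/metrics"
}

// getMetricEndpoint is private, per the suggestion: callers never see
// the URL once FetchMetrics computes it itself.
func (p *PodMetricsClientImpl) getMetricEndpoint(pod *Pod) string {
	return "http://" + pod.Address + ":" + strconv.Itoa(int(p.MetricsPort)) + p.MetricsPath
}

// FetchMetrics builds the URL on its first line, so the url argument
// (and GetMetricEndpoint in the interface) can be dropped.
func (p *PodMetricsClientImpl) FetchMetrics(ctx context.Context, pod *Pod, existing *MetricsState) (*MetricsState, error) {
	url := p.getMetricEndpoint(pod)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	// ... parse the Prometheus exposition format into a fresh
	// MetricsState, as the real implementation does.
	return existing, nil
}
```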
Force-pushed from 528d53f to 937f686 (compare)
Force-pushed from 937f686 to d86effa (compare)
Fixes #16