feat: make model metrics endpoints configurable #1000


Open · wants to merge 1 commit into base: main

Conversation

nayihz
Contributor

@nayihz nayihz commented Jun 17, 2025

fix: #16

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 17, 2025
@k8s-ci-robot k8s-ci-robot requested a review from Jeffwan June 17, 2025 10:10
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: nayihz
Once this PR has been reviewed and has the lgtm label, please assign danehans for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Jun 17, 2025

netlify bot commented Jun 17, 2025

Deploy Preview for gateway-api-inference-extension ready!

Name Link
🔨 Latest commit d86effa
🔍 Latest deploy log https://app.netlify.com/projects/gateway-api-inference-extension/deploys/6854b9cdb5b6cb000878ff17
😎 Deploy Preview https://deploy-preview-1000--gateway-api-inference-extension.netlify.app

@nirrozenbaum
Contributor

nirrozenbaum commented Jun 17, 2025

I have some doubts about adding additional fields to InferencePool.
there are alternative ways to achieve the same goal, like using command line args.
changes to CRDs should be discussed and get broad agreement using proposals.

cc @kfswain @ahg-g @robscott @danehans @elevran

/hold for others to comment.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 17, 2025
@nayihz nayihz force-pushed the feat_config_metric branch from ffad486 to f57478d Compare June 17, 2025 10:34
@nayihz
Contributor Author

nayihz commented Jun 17, 2025

@nirrozenbaum, thanks for your advice. I think you are right.
Which do you prefer: command-line args or an environment variable?

@nirrozenbaum
Contributor

> @nirrozenbaum, thanks for your advice. I think you are right. Which do you prefer: command-line args or an environment variable?

@nayihz I would start with command-line args with default values (the existing ones).
We can always iterate if needed.

@nayihz nayihz force-pushed the feat_config_metric branch from f57478d to 9fa17f7 Compare June 17, 2025 12:40
@elevran
Contributor

elevran commented Jun 17, 2025

@nayihz @nirrozenbaum this introduces a fixed endpoint for all model servers in the pool.
Would it make sense to use the Prometheus-formatted annotations as the source of truth when present, and fall back to the configuration when they are missing?
For example:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
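A minimal sketch of the annotation-first resolution elevran describes, with the configured values as the fallback. The helper name `metricsEndpoint` and its signature are hypothetical, not part of this PR:

```go
package main

import (
	"fmt"
	"strconv"
)

// metricsEndpoint (hypothetical) resolves the scrape URL for a pod:
// the standard Prometheus annotations win when present; otherwise the
// configured defaults (e.g. from flags or the pool's target port) apply.
func metricsEndpoint(annotations map[string]string, podIP string, defaultPort int32, defaultPath string) string {
	path := defaultPath
	port := int(defaultPort)
	if p, ok := annotations["prometheus.io/path"]; ok {
		path = p
	}
	if p, ok := annotations["prometheus.io/port"]; ok {
		if n, err := strconv.Atoi(p); err == nil {
			port = n
		}
	}
	return "http://" + podIP + ":" + strconv.Itoa(port) + path
}

func main() {
	ann := map[string]string{
		"prometheus.io/scrape": "true",
		"prometheus.io/path":   "/metrics",
		"prometheus.io/port":   "8080",
	}
	fmt.Println(metricsEndpoint(ann, "10.0.0.5", 8000, "/metrics")) // annotations win
	fmt.Println(metricsEndpoint(nil, "10.0.0.5", 8000, "/metrics")) // falls back to defaults
}
```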

@nirrozenbaum
Contributor

nirrozenbaum commented Jun 17, 2025

> @nayihz @nirrozenbaum this introduces a fixed endpoint for all model servers in the pool. Would it make sense to use the Prometheus-formatted annotations as the source of truth when present and fallback to the configuration when missing? For example:
>
> apiVersion: v1
> kind: Pod
> metadata:
>   name: my-app
>   annotations:
>     prometheus.io/scrape: "true"
>     prometheus.io/path: "/metrics"
>     prometheus.io/port: "8080"

@elevran we already have a fixed endpoint, so this PR is not introducing it :). The intention here is to make that endpoint configurable.
But your suggestion makes sense as an improvement.

@nirrozenbaum
Contributor

/unhold

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 17, 2025
@nayihz nayihz force-pushed the feat_config_metric branch from 9fa17f7 to 7466a28 Compare June 18, 2025 01:34
@@ -110,6 +110,9 @@ var (
"vllm:lora_requests_info",
"Prometheus metric for the LoRA info metrics (must be in vLLM label format).")

modelServerMetricsPort = flag.Int("modelServerMetricsPort", 0, "Port to scrape metrics from pods")
Contributor

metrics port default value is 0?

Contributor Author

The default metrics port should be equal to pool.Spec.TargetPortNumber if the user did not specify a port number.
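The defaulting rule described here can be sketched as follows. This is a hypothetical illustration (the function name `resolveMetricsPort` is not from the PR): a flag value of 0 means "unset", in which case the pool's TargetPortNumber is used for scraping.

```go
package main

import "fmt"

// resolveMetricsPort (hypothetical) applies the defaulting rule discussed in
// this thread: 0 means the flag was not set, so fall back to the pool's
// TargetPortNumber; any explicit value wins.
func resolveMetricsPort(flagPort int, targetPortNumber int32) int32 {
	if flagPort != 0 {
		return int32(flagPort)
	}
	return targetPortNumber
}

func main() {
	fmt.Println(resolveMetricsPort(0, 8000))    // flag unset: falls back to the pool's target port
	fmt.Println(resolveMetricsPort(9090, 8000)) // explicit flag value wins
}
```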

Contributor

Can you please document (e.g. in the flag description) that it defaults to the target port number on the pool?

btw, perhaps those two parameters are candidates to be added to the InferencePool API, but we should leave this for later.

Contributor

I wouldn't add them to the InferencePool API because it implies that all pods in the pool necessarily expose metrics on the same path + port.
While that's what the reference implementation does today, the current API doesn't restrict that.
A different implementation may have each pod specify its metrics path and port via Prometheus-format annotations on the pod, and the EPP may store this info when reconciling the pod and use it for scraping.

Leaving this outside of the CRD keeps things more flexible.

Contributor

I mean, having it as a flag is just as restrictive as having it on the pool; in fact, if we ever make the EPP multi-pool, it is more restrictive. But I agree that this is certainly not something we need to get into now.

Collaborator

> implies that all pods in the pool are necessarily exposing metrics on the same path + port.

Similar assumptions are often made. We make the assumption that the serving port is the same across the pool also.

Contributor Author

> Can you please explain that it defaults to the target pool number on pool?

Added an explanation to the usage string of the -modelServerMetricsPort flag.

// TODO(https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/16): Consume this from InferencePool config.
url := "http://" + pod.Address + ":" + strconv.Itoa(int(port)) + "/metrics"

func (p *PodMetricsClientImpl) FetchMetrics(ctx context.Context, pod *backend.Pod, existing *MetricsState, url string) (*MetricsState, error) {
Contributor

It seems to me that GetMetricEndpoint and FetchMetrics calls always come together (one must compute the URL before calling FetchMetrics).
Does it make sense to make GetMetricEndpoint private, remove the url arg from FetchMetrics, and compute the URL inside FetchMetrics (for example, in its first line)?
(GetMetricEndpoint could then also be removed from the interface.)

Contributor Author

Done

@nayihz nayihz force-pushed the feat_config_metric branch 2 times, most recently from 528d53f to 937f686 Compare June 19, 2025 13:33
@nayihz nayihz force-pushed the feat_config_metric branch from 937f686 to d86effa Compare June 20, 2025 01:30
Successfully merging this pull request may close these issues.

Expose baseline algorithm parameters as configurable
7 participants