Targets in the same group are assigned to the same OTel #3617

Open
mike9421 opened this issue Jan 16, 2025 · 3 comments
Labels: enhancement (New feature or request), needs triage

Comments

mike9421 commented Jan 16, 2025

Component(s)

target allocator

Is your feature request related to a problem? Please describe.

The otel-allocator assigns the multiple container endpoints of a single pod to different OTel Collector instances. After the relabel operation, those collectors all end up collecting the same metrics. Since I use cumulativeToDelta, the final metric data ends up duplicated.

Describe the solution you'd like

For example, the same pod detects the following targets:

  • 10.22.14.119 --> assigned to OTel-0

  • 10.22.14.119:8000 --> assigned to OTel-1

  • 10.22.14.119:8080 --> assigned to OTel-2

After the relabel operation the __address__ of all of these targets becomes 10.22.14.119:8000, so every collector above scrapes 10.22.14.119:8000, which leads to incorrect metric data.
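A minimal Go sketch of the collapse described above, assuming a hypothetical relabel rule that rewrites every discovered __address__ for the pod to port 8000 (the rule, the collector names, and the assignment map are illustrative, not the operator's actual configuration):

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical relabel rule of the kind described in the issue: rewrite every
// discovered __address__ for the pod to a fixed scrape port (8000). The real
// rule would live in the scrape config's relabel_configs; this is a stand-in.
var addressRule = regexp.MustCompile(`^([^:]+)(:\d+)?$`)

func relabelAddress(address string) string {
	return addressRule.ReplaceAllString(address, "${1}:8000")
}

func main() {
	// Targets discovered for one pod, each assigned to a different collector
	// by the target allocator (collector names are illustrative).
	assignments := map[string]string{
		"10.22.14.119":      "otel-0",
		"10.22.14.119:8000": "otel-1",
		"10.22.14.119:8080": "otel-2",
	}

	// After relabeling, all three collectors scrape the same endpoint,
	// so the same series is collected three times.
	for address, collector := range assignments {
		fmt.Printf("%s scrapes %s (discovered as %s)\n",
			collector, relabelAddress(address), address)
	}
}
```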

Describe alternatives you've considered

After the otel-allocator performs the relabel operation, it should unify and deduplicate targets that end up with the same __address__.
The labels for the deduplicated target would be chosen as follows:

  • Use the labels of the target whose original __address__ exactly matches the __address__ after relabeling.

  • If no such target exists, use the labels of the first target by default, and let the __meta_-prefixed labels of the last target overwrite them. This keeps the behavior consistent with the OTel Prometheus receiver; for details, see prometheus and prometheus receiver.

The following are examples.

[image attachment]
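As an additional illustration, here is a minimal Go sketch of the proposed selection rules, assuming a simplified Target type and a hypothetical dedupe helper (neither exists in the operator; this only illustrates the rules listed above):

```go
package main

import (
	"fmt"
	"strings"
)

// Target is a simplified, hypothetical view of an allocator target: the labels
// produced by service discovery plus the __address__ produced by relabeling.
type Target struct {
	Labels           map[string]string // includes the original __address__
	RelabeledAddress string
}

// dedupe keeps one label set per relabeled __address__, following the rules
// proposed above: prefer the target whose original __address__ already equals
// the relabeled one; otherwise start from the first target's labels and let
// later targets' __meta_*-prefixed labels overwrite them.
func dedupe(targets []Target) map[string]map[string]string {
	type entry struct {
		labels map[string]string
		exact  bool // original __address__ already equals the relabeled one
	}
	byAddr := make(map[string]*entry)
	for _, t := range targets {
		addr := t.RelabeledAddress
		isExact := t.Labels["__address__"] == addr
		e, seen := byAddr[addr]
		switch {
		case !seen || (isExact && !e.exact):
			// First target for this address, or the first exact match:
			// its labels become the base for this address.
			byAddr[addr] = &entry{labels: cloneLabels(t.Labels), exact: isExact}
		case !e.exact:
			// No exact match chosen yet: keep the first target's labels but
			// let the latest target's __meta_* labels overwrite them.
			for k, v := range t.Labels {
				if strings.HasPrefix(k, "__meta_") {
					e.labels[k] = v
				}
			}
		}
	}
	out := make(map[string]map[string]string, len(byAddr))
	for addr, e := range byAddr {
		out[addr] = e.labels
	}
	return out
}

func cloneLabels(in map[string]string) map[string]string {
	out := make(map[string]string, len(in))
	for k, v := range in {
		out[k] = v
	}
	return out
}

func main() {
	targets := []Target{
		{Labels: map[string]string{"__address__": "10.22.14.119", "__meta_kubernetes_pod_container_name": "app"}, RelabeledAddress: "10.22.14.119:8000"},
		{Labels: map[string]string{"__address__": "10.22.14.119:8000", "__meta_kubernetes_pod_container_port_number": "8000"}, RelabeledAddress: "10.22.14.119:8000"},
		{Labels: map[string]string{"__address__": "10.22.14.119:8080", "__meta_kubernetes_pod_container_port_number": "8080"}, RelabeledAddress: "10.22.14.119:8000"},
	}
	for addr, labels := range dedupe(targets) {
		fmt.Println(addr, labels)
	}
}
```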

Additional context

No response

mike9421 added the enhancement (New feature or request) and needs triage labels on Jan 16, 2025
swiatekm (Contributor) commented:

If I understand correctly, you'd like the target allocator to only keep one target with the same __address__? I don't think we can do that, as it's completely valid to scrape the same target multiple times with Prometheus, attaching different labels each time. As far as I know, neither Prometheus nor prometheusreceiver do any target merging based on __address__, and the links you provided do not show this.

mike9421 (Author) commented:

> If I understand correctly, you'd like the target allocator to only keep one target with the same __address__? I don't think we can do that, as it's completely valid to scrape the same target multiple times with Prometheus, attaching different labels each time. As far as I know, neither Prometheus nor prometheusreceiver do any target merging based on __address__, and the links you provided do not show this.

Yes, I want the target allocator to retain only one target with the same __address__. And according to the target sync code, I think it can be assumed that Prometheus deduplicates the scraped targets. A single OTel instance can correctly deduplicate the targets scraped via kubernetes_sd_configs, following Prometheus' logic. However, when the otel-allocator distributes targets to multiple OTel instances, targets with the same address may be assigned to different instances, so OTel cannot perform the deduplication.
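To illustrate why the split happens, here is a minimal sketch of per-target assignment, assuming a simplified hash-modulo strategy (the real allocator supports several allocation strategies; the assign helper and collector names are hypothetical). Because assignment is done per discovered target, before relabeling, endpoints of the same pod can land on different collectors:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// assign is a deliberately simplified stand-in for the allocator's per-target
// distribution: hash the discovered target string and pick a collector by
// modulo. The real allocator is more sophisticated, but the key point holds:
// each discovered target is assigned independently.
func assign(target string, collectors []string) string {
	h := fnv.New32a()
	h.Write([]byte(target))
	return collectors[h.Sum32()%uint32(len(collectors))]
}

func main() {
	collectors := []string{"otel-0", "otel-1", "otel-2"}
	for _, target := range []string{
		"10.22.14.119",
		"10.22.14.119:8000",
		"10.22.14.119:8080",
	} {
		// Three endpoints of the same pod may each get a different collector.
		fmt.Printf("%s -> %s\n", target, assign(target, collectors))
	}
}
```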

swiatekm (Contributor) commented:

> And according to the target sync code, I think it can be assumed that Prometheus deduplicates the scraped targets.

The hash computed on that line depends on all the labels, so it will be different for targets with the same address but different labels. The Target Allocator does something very similar.

To my understanding, Prometheus does not work the way you describe here. Have you tested this? I'm willing to be proven wrong and change the target allocator to behave consistently with Prometheus, if it currently does not.
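For illustration, a minimal sketch of hashing a full label set, assuming a simplified FNV-based stand-in for Prometheus' own label hashing (the hashLabels helper is hypothetical). It shows that two targets sharing an __address__ but differing in any other label hash differently and are therefore both kept:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// hashLabels mimics the idea behind the target hash: it is computed over the
// full, sorted label set, not just __address__. Prometheus uses its own
// labels.Hash; this FNV version is only an illustration.
func hashLabels(labels map[string]string) uint64 {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0xff}) // separator so "a"+"bc" != "ab"+"c"
		h.Write([]byte(labels[k]))
		h.Write([]byte{0xff})
	}
	return h.Sum64()
}

func main() {
	a := map[string]string{"__address__": "10.22.14.119:8000", "container": "app"}
	b := map[string]string{"__address__": "10.22.14.119:8000", "container": "sidecar"}
	// Prints false: the label sets hash differently even though __address__ matches.
	fmt.Println(hashLabels(a) == hashLabels(b))
}
```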
