[pod logs] Use the K8s deployment/cronjob/job name in the `service_name` logic #1540
Comments
There may be a hack looking at
We could do this if the controller is a DaemonSet or StatefulSet. If the controller is a ReplicaSet or a Job, we can't really assume whether the pod comes from a bare ReplicaSet or a Deployment, or from a bare Job or a CronJob. It's too ambiguous... Perhaps a better, long-term idea would be to use something like this:
We want to align the `service_name` logic of the K8s Monitoring Helm Chart Pod Logs feature with the logic used by the OpenTelemetry Operator. For this, we need to retrieve the deployment/cronjob/job of the pods through the "controlled-by" attribute. While the OpenTelemetry Collector K8s Processor is capable of retrieving this metadata, it's not clear whether Alloy's `discovery.relabel` can also do it.

OpenTelemetry Operator logic for `service.name` (first value found):
1. `pod.annotation[resource.opentelemetry.io/service.name]`
2. if (`config[useLabelsForResourceAttributes]`) `pod.label[app.kubernetes.io/name]`
3. `k8s.deployment.name`
4. `k8s.replicaset.name`
5. `k8s.statefulset.name`
6. `k8s.daemonset.name`
7. `k8s.cronjob.name`
8. `k8s.job.name`
9. `k8s.pod.name`
10. `k8s.container.name`
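As a rough sketch of what part of this fallback chain could look like in Alloy, assuming targets come from a `discovery.kubernetes` pods component (the `__meta_kubernetes_pod_controller_kind` and `__meta_kubernetes_pod_controller_name` labels are exposed by Prometheus-style Kubernetes service discovery; component names here are hypothetical):

```alloy
discovery.relabel "pod_logs" {
  targets = discovery.kubernetes.pods.targets

  // Unambiguous controllers: use the controller name directly.
  rule {
    source_labels = ["__meta_kubernetes_pod_controller_kind", "__meta_kubernetes_pod_controller_name"]
    separator     = ";"
    regex         = "(DaemonSet|StatefulSet);(.+)"
    target_label  = "service_name"
    replacement   = "$2"
  }

  // ReplicaSet names carry a pod-template hash suffix; stripping it
  // approximates the Deployment name, but a bare ReplicaSet is
  // indistinguishable from one owned by a Deployment (the ambiguity
  // raised above).
  rule {
    source_labels = ["__meta_kubernetes_pod_controller_kind", "__meta_kubernetes_pod_controller_name"]
    separator     = ";"
    regex         = "ReplicaSet;(.+)-[a-z0-9]+"
    target_label  = "service_name"
    replacement   = "$1"
  }
}
```

This only covers the controller-name tiers of the Operator's precedence list; the annotation and label tiers would need additional rules on the corresponding `__meta_kubernetes_pod_annotation_*` / `__meta_kubernetes_pod_label_*` labels.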
See also: `service_name` preference of `container.name` vs `k8s.pod.name` #1533