Memory leak of spans with grpc server stats handler #6446
Comments
cc @dashpole as code owner. |
Updated the title and description. Looks like you are using the stats handler, rather than the interceptor. |
Is there a workaround we can apply to disable the stats handler, or to avoid tracing through it? |
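(As a minimal sketch of one possible workaround, assuming the interceptor-based otelgrpc API mentioned above is still available in the version in use: instrument the server with the interceptors instead of the stats handler. The names below reflect the otelgrpc package as commonly used and may differ between versions, so treat this as illustrative rather than definitive.)

package example

import (
	"google.golang.org/grpc"

	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

// newTracedServer builds a gRPC server instrumented via the interceptor-based
// otelgrpc API rather than the stats handler (otelgrpc.NewServerHandler),
// which is the component implicated in this issue.
func newTracedServer() *grpc.Server {
	return grpc.NewServer(
		grpc.UnaryInterceptor(otelgrpc.UnaryServerInterceptor()),
		grpc.StreamInterceptor(otelgrpc.StreamServerInterceptor()),
	)
}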
I noticed something similar when I started using this with Sentry. (We were previously using plain OTLP over gRPC to the collector - Jaeger.) It began once I started using Sentry and added:

import (
....
sentryotel "github.com/getsentry/sentry-go/otel"
....
)
....
tp := oteltrace.NewTracerProvider(
	oteltrace.WithSpanProcessor(sentryotel.NewSentrySpanProcessor()),
)
// Set the global tracer provider and propagator.
otel.SetTracerProvider(tp)
otel.SetTextMapPropagator(sentryotel.NewSentryPropagator())
....

I also noticed memory going up even without any calls (we have a health check call, which I filtered out); that is where the leak seems to start. I'm not sure where the fix belongs - maybe Sentry is missing something? Also reported in: getsentry/sentry-go#959. The memory profile (objects): |
I did some more investigation and it happens on filtered calls: a span is started but never ended (using the Sentry processor). |
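(To illustrate why a never-ended span accumulates: any span processor that keeps per-span state from OnStart and only releases it in OnEnd will hold such spans indefinitely. The following is a minimal sketch of that pattern against the OpenTelemetry Go SDK's SpanProcessor interface - it is purely illustrative and is not Sentry's actual processor.)

package example

import (
	"context"
	"sync"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	oteltrace "go.opentelemetry.io/otel/trace"
)

// trackingProcessor records every started span and only drops the record when
// the span ends. A span that is started but never ended stays in the map
// forever, which is the leak pattern described in this thread.
type trackingProcessor struct {
	mu    sync.Mutex
	spans map[oteltrace.SpanID]sdktrace.ReadOnlySpan
}

var _ sdktrace.SpanProcessor = (*trackingProcessor)(nil)

func newTrackingProcessor() *trackingProcessor {
	return &trackingProcessor{spans: make(map[oteltrace.SpanID]sdktrace.ReadOnlySpan)}
}

// OnStart stores the span; the entry is only removed in OnEnd.
func (p *trackingProcessor) OnStart(_ context.Context, s sdktrace.ReadWriteSpan) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.spans[s.SpanContext().SpanID()] = s
}

// OnEnd releases the entry. If span.End() is never called, OnEnd never runs
// and the span stays referenced.
func (p *trackingProcessor) OnEnd(s sdktrace.ReadOnlySpan) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.spans, s.SpanContext().SpanID())
}

func (p *trackingProcessor) Shutdown(context.Context) error   { return nil }
func (p *trackingProcessor) ForceFlush(context.Context) error { return nil }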
I made these changes: https://github.com/vianamjr/opentelemetry-go-contrib/pull/1/files |
Description
See open-telemetry/opentelemetry-collector#11875 for original filing.
The gRPC server stats handler set by the OTLP receiver leaks spans over a long period of time.
From a review of the code, it appears that spans are not always ended by stats_handler.go.
The span.End() call is made upon receiving the *stats.End event; see opentelemetry-go-contrib/instrumentation/google.golang.org/grpc/otelgrpc/stats_handler.go, line 205 (commit eb2064c).
In light of the leak, and given the pprof dumps provided in the original report, it seems that this event is not always received.
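(For illustration only - this is not the otelgrpc implementation - a stats handler of this shape starts a span in TagRPC and ends it only when HandleRPC sees a *stats.End event, so any RPC for which that event is never delivered leaks a never-ended span:)

package example

import (
	"context"

	"go.opentelemetry.io/otel"
	oteltrace "go.opentelemetry.io/otel/trace"
	"google.golang.org/grpc/stats"
)

// serverHandler is a minimal stats.Handler that mirrors the start/end pattern
// described above.
type serverHandler struct {
	tracer oteltrace.Tracer
}

var _ stats.Handler = (*serverHandler)(nil)

func newServerHandler() *serverHandler {
	return &serverHandler{tracer: otel.Tracer("example/grpc")}
}

// TagRPC starts the span and stores it in the returned context so that
// HandleRPC can find it later.
func (h *serverHandler) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context {
	ctx, _ = h.tracer.Start(ctx, info.FullMethodName)
	return ctx
}

// HandleRPC ends the span only on *stats.End. If that event is never received
// for an RPC, its span is never ended.
func (h *serverHandler) HandleRPC(ctx context.Context, s stats.RPCStats) {
	if _, ok := s.(*stats.End); ok {
		oteltrace.SpanFromContext(ctx).End()
	}
}

func (h *serverHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
	return ctx
}

func (h *serverHandler) HandleConn(context.Context, stats.ConnStats) {}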