[Draft - Work in progress] Standard Trace Query API
This change suggests a standard API definition for querying traces stored in any tracing backend (generally speaking, any observability backend), both open source (Jaeger, Tempo, Zipkin) and proprietary (Datadog, AppDynamics, etc.).
Motivation
The observability space is maturing: more and more telemetry producers are aligning with the OpenTelemetry standards for producing telemetry signals.
On the consumer side, platforms that use traces to enrich and correlate existing events (e.g., Kiali [1]) need to implement N backend-specific APIs, one for each observability backend integrated into their platform, in order to consume and process those telemetry signals.
Sooner rather than later, as adoption of the OpenTelemetry client-side pipeline (instrument, collect, export) grows, the need for a standard way to consume these telemetry signals from backends will grow at the same pace.
The OTel Collector natively supports exporting telemetry signals to multiple exporters simultaneously, and hence to multiple observability backends.
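As a minimal sketch of that fan-out capability, the Collector configuration below sends the same trace pipeline to two backends at once. The endpoints and hostnames are illustrative placeholders, and the `zipkin` exporter assumes a Collector distribution that includes it (e.g., contrib):

```yaml
# Illustrative Collector config: one trace pipeline, two backends.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp/jaeger:                 # OTLP export to a Jaeger backend (hostname is a placeholder)
    endpoint: jaeger:4317
  zipkin:                      # Zipkin exporter, available in the contrib distribution
    endpoint: http://zipkin:9411/api/v2/spans

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger, zipkin]
```

This is exactly the situation that motivates a standard query API: the same traces end up in several backends, each of which today exposes its own read API.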
This idea of having a standard trace query API has also been discussed at length by @lucasponce and others in #193.
Comments, ideas, feedback, etc. are all very welcome and highly appreciated!