Data Streams Monitoring instruments Kafka _clients_ (consumers/producers). If you can instrument your client infrastructure, you can use Data Streams Monitoring.

|| Java | Python | .NET | Node.js | Go | Ruby |
| - | ---- | ------ | ---- | ------- | -- | ---- |
| Apache Kafka <br/>(self-hosted, Amazon MSK, Confluent Cloud, or any other hosting platform) | {{< X >}} | {{< X >}} | {{< X >}} | {{< X >}} | {{< X >}} | {{< X >}} |
| Amazon Kinesis | {{< X >}} | {{< X >}} | {{< X >}} | {{< X >}} |||
| Amazon SNS | {{< X >}} | {{< X >}} | {{< X >}} | {{< X >}} |||
| Amazon SQS | {{< X >}} | {{< X >}} | {{< X >}} | {{< X >}} |||
| Azure Service Bus ||| {{< X >}} ||||
| Google Pub/Sub | {{< X >}} ||| {{< X >}} |||
| IBM MQ ||| {{< X >}} ||||
| RabbitMQ | {{< X >}} | {{< X >}} | {{< X >}} | {{< X >}} |||

Data Streams Monitoring requires minimum Datadog tracer versions. See each setup page for details.
## Manual instrumentation

Data Streams Monitoring (DSM) tracks how data flows through queues and services. If your message system is **not automatically supported** (for example, your queue technology is not instrumented in your language, or the library you use isn't automatically instrumented), you must **manually record checkpoints** so DSM can connect producers and consumers:

- **Produce checkpoint**: records when a message is published and injects DSM context into the message.
- **Consume checkpoint**: records when a message is received, extracts the DSM context if it exists, and prepares future produce checkpoints to carry that context forward.

**The DSM context must travel _with the message_.** If your system supports headers, store it there. Otherwise, embed it directly in the payload.
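For example, here is a minimal sketch of carrying DSM context inside the payload when the transport has no headers (Python shown; the `_datadog` field name and `stream_client` call are illustrative assumptions, not part of the DSM API):

{{< code-block lang="python" >}}
import json
from ddtrace.data_streams import set_produce_checkpoint

def send_without_headers(record, stream_client):
    # No native headers (for example, Kinesis): carry DSM context
    # inside the payload under a dedicated field.
    dsm_context = {}
    set_produce_checkpoint("kinesis", "orders-stream", dsm_context.setdefault)

    # "_datadog" is an illustrative field name; stream_client.put is a
    # hypothetical client call.
    payload = {"data": record, "_datadog": dsm_context}
    stream_client.put(json.dumps(payload))
{{< /code-block >}}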

### Manual instrumentation installation

Ensure you're using the [Datadog Agent v7.34.0 or later][1].
For the produce checkpoint:

- **queueType**: message system (for example `kafka`, `rabbitmq`, `sqs`, `sns`, `kinesis`, `servicebus`). Recognized strings surface system-specific DSM metrics; other strings are allowed.
- **name**: queue, topic, or subscription name.
- **carrier**: an implementation of `DataStreamsContextCarrier`. This is where DSM context is **stored** with the message (typically a headers map, but could be payload fields if no headers exist).

For the consume checkpoint:

- **carrier**: an implementation of `DataStreamsContextCarrier`. This is where DSM context is **retrieved** from the message.

**Note**: This checkpoint does two things: it links the current message to the data stream, and it prepares this consumer to automatically pass the context to any messages it produces next.

---

## Examples in context (single block)

{{< code-block lang="java" >}}
import datadog.trace.api.experimental.*;
import java.util.*;

// ==========================
// producer-service.java
// ==========================
public class ProducerService {
  private final Channel channel; // your MQ/Kafka/etc. client

  public ProducerService(Channel channel) {
    this.channel = channel;
  }

  public void publishOrder(Order order) {
    Headers headers = new Headers(); // your header structure
    Carrier headersAdapter = new Carrier(headers);

    // Mark DSM produce checkpoint right before sending the message.
    // "kafka" and "orders" are example queueType and name values.
    DataStreamsCheckpointer.get().setProduceCheckpoint("kafka", "orders", headersAdapter);

    channel.send(order, headers); // your client's send call
  }
}
{{< /code-block >}}
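A minimal consumer-side sketch, together with the `Carrier` adapter used above. It assumes `Headers` is a `Map<String, Object>`; the class names, `onMessage` wiring, and `process` call are illustrative:

{{< code-block lang="java" >}}
import datadog.trace.api.experimental.*;
import java.util.*;

// ==========================
// consumer-service.java
// ==========================
public class ConsumerService {
  public void onMessage(Order order, Headers headers) {
    Carrier headersAdapter = new Carrier(headers);

    // Mark DSM consume checkpoint right after receiving the message.
    DataStreamsCheckpointer.get().setConsumeCheckpoint("kafka", "orders", headersAdapter);

    process(order); // your business logic
  }
}

// Adapts your header structure to the carrier interface DSM reads and writes.
class Carrier implements DataStreamsContextCarrier {
  private final Map<String, Object> headers;

  public Carrier(Map<String, Object> headers) {
    this.headers = headers;
  }

  public Set<Map.Entry<String, Object>> entries() {
    return this.headers.entrySet();
  }

  public void set(String key, String value) {
    this.headers.put(key, value);
  }
}
{{< /code-block >}}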
For the produce checkpoint:

- **queueType**: message system (for example `rabbitmq`, `kafka`, `sqs`, `sns`, `kinesis`, `servicebus`). Recognized strings surface system-specific DSM metrics; other strings are allowed.
- **name**: queue, topic, or subscription name.
- **carrier**: writeable key/value container used to **store** DSM context with the message (a headers object if supported; otherwise add fields to the payload).

For the consume checkpoint:

- **queueType**: same value used by the producer (recognized strings preferred, other strings allowed).
- **name**: same queue, topic, or subscription name.
- **carrier**: readable key/value container used to **retrieve** DSM context from the message (a headers object if supported; otherwise the parsed payload).

**Note**: This checkpoint does two things: it links the current message to the data stream, and it prepares this consumer to automatically pass the context to any messages it produces next.

---

## Examples in context (single block)

{{< code-block lang="js" >}}
// ==========================
// producer-service.js
// ==========================
const tracer = require('dd-trace').init({}) // init in the producer service

async function publishOrder (order, channel) {
  // Use headers if supported; otherwise embed context in the payload.
  const headers = {}

  // Mark DSM produce checkpoint right before sending the message.
  // 'rabbitmq' and 'orders' are example queueType and name values.
  tracer.dataStreamsCheckpointer.setProduceCheckpoint('rabbitmq', 'orders', headers)

  await channel.publish('orders', order, { headers }) // your client's publish call
}
{{< /code-block >}}
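A minimal consumer-side sketch (the `onMessage` wiring and `processOrder` call are illustrative):

{{< code-block lang="js" >}}
// ==========================
// consumer-service.js
// ==========================
const tracer = require('dd-trace').init({}) // init in the consumer service

function onMessage (order, headers) {
  // Mark DSM consume checkpoint right after receiving the message.
  tracer.dataStreamsCheckpointer.setConsumeCheckpoint('rabbitmq', 'orders', headers)

  processOrder(order) // your business logic
}
{{< /code-block >}}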
For `set_consume_checkpoint`:

- **getter**: a callable `(key)` used to **retrieve** DSM context from the message.
  - If headers are supported: use `headers.get`.
  - If not: use a function that reads from the payload.

**Note**: This checkpoint does two things: it links the current message to the data stream, and it prepares this consumer to automatically pass the context to any messages it produces next.

---

## Examples in context (single block)

{{< code-block lang="python" >}}
# ==========================
# producer_service.py
# ==========================
from ddtrace.data_streams import set_produce_checkpoint

def publish_order(order, channel):
    # Use headers if supported; otherwise embed context in the payload.
    headers = {}

    # Mark DSM produce checkpoint right before sending the message.
    # "rabbitmq" and "orders" are example queue_type and name values.
    set_produce_checkpoint("rabbitmq", "orders", headers.setdefault)

    channel.publish("orders", order, headers=headers)  # your client's publish call
{{< /code-block >}}
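A minimal consumer-side sketch using the getter described above (the `on_message` wiring and `process_order` call are illustrative):

{{< code-block lang="python" >}}
# ==========================
# consumer_service.py
# ==========================
from ddtrace.data_streams import set_consume_checkpoint

def on_message(order, headers):
    # Mark DSM consume checkpoint right after receiving the message,
    # passing headers.get so DSM can retrieve its context.
    set_consume_checkpoint("rabbitmq", "orders", headers.get)

    process_order(order)  # your business logic
{{< /code-block >}}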
For `set_produce_checkpoint`:

- **queue_type**: the message system (for example `rabbitmq`, `kafka`, `sqs`, `sns`, `kinesis`, `servicebus`). Using a recognized queue_type helps surface metrics related to that system in Data Streams, but other strings are allowed if needed.
- **name**: the queue, topic, or subscription name.
- **block**: yields `(key, pathway_context)`. Your block must *store* the DSM context with the message, under the given key.

For `set_consume_checkpoint`:

- **queue_type**: same message system as the producer. Using a recognized queue_type helps surface metrics related to that system in Data Streams, but other strings are also allowed.
- **name**: same queue, topic, or subscription name.
- **block**: yields `(key)`. Your block must *retrieve* the DSM context from the message, whichever way (header or message body) it was stored when the message was produced.

**Note**: This checkpoint does two things: it links the current message to the data stream, and it prepares this consumer to automatically pass the context to any messages it produces next.

---

## Examples in context

{{< code-block lang="ruby" >}}
# Producer side

def publish_order(order)
  headers = {}

  # Mark DSM produce checkpoint before sending the message.
  Datadog::DataStreams.set_produce_checkpoint("rabbitmq", "orders") do |key, pathway_context|
    # Store DSM context in the message:
    # - If headers supported: headers[key] = pathway_context
    # - If not: embed it in the message body instead
    headers[key] = pathway_context
  end

  channel.publish("orders", order, headers: headers) # your client's publish call
end
{{< /code-block >}}
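A minimal consumer-side sketch, following the consume-block contract described above (the `message` shape and `process_order` call are illustrative):

{{< code-block lang="ruby" >}}
# Consumer side

def handle_order(message)
  # Mark DSM consume checkpoint right after receiving the message.
  Datadog::DataStreams.set_consume_checkpoint("rabbitmq", "orders") do |key|
    # Retrieve DSM context from the message, however it was stored.
    message.headers[key] # or read it from the message body
  end

  process_order(message) # your business logic
end
{{< /code-block >}}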