Replies: 6 comments 1 reply
-
Tagging a few contributors to the Event Hubs SDK: @conniey @joshfree @anuchandy
-
There is no consumer-side decompression that happens implicitly in the AMQP layer. We expose the BinaryData, but the user would have to pass that data through a decompression library.
At the moment, there is no way to "transparently" plug a compression/decompression mechanism into the AMQP client. This is a story we are looking at. @jsquire may have additional insights.
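For concreteness, here is a minimal consumer-side sketch in Java, assuming the producer gzip-compressed the payload; the connection string, event hub name, and partition id are placeholders:

```java
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;
import com.azure.messaging.eventhubs.models.EventPosition;
import com.azure.messaging.eventhubs.models.PartitionEvent;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class DecompressingConsumer {
    public static void main(String[] args) throws IOException {
        // Placeholder connection details -- substitute your own.
        EventHubConsumerClient consumer = new EventHubClientBuilder()
            .connectionString("<< CONNECTION STRING >>", "<< EVENT HUB NAME >>")
            .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
            .buildConsumerClient();

        // Receive a few events from partition "0"; the SDK hands back the
        // bytes exactly as they were committed to the partition.
        for (PartitionEvent partitionEvent
                : consumer.receiveFromPartition("0", 10, EventPosition.earliest())) {
            byte[] raw = partitionEvent.getData().getBody();

            // The AMQP client does no decompression; gunzip the body ourselves.
            try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(raw))) {
                System.out.println(new String(gzip.readAllBytes()));
            }
        }
        consumer.close();
    }
}
```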
-
No additional insights, currently. As Connie said, compression/decompression is not part of the AMQP protocol; for Kafka, it is a coordination of client and server logic. There's currently no way to fully simulate that with Event Hubs, as the service is not compression-aware and won't dynamically manage compression when committing data to a partition or sending data to consumers. To use compression with Event Hubs, your producers would need to send compressed data, which would be stored as-is in the partition. Consumers would have to know to expect compressed data and understand how to decompress it when reading.
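As a sketch of what that producer side could look like with the Java SDK: the connection string and hub name below are placeholders, and the "content-encoding" property is just an application-level convention invented here so consumers know to decompress, not something the service interprets.

```java
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.zip.GZIPOutputStream;

public class CompressingProducer {
    public static void main(String[] args) throws IOException {
        // gzip the payload ourselves; Event Hubs will store these bytes as-is.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write("hello, event hubs".getBytes(StandardCharsets.UTF_8));
        }

        EventData event = new EventData(buffer.toByteArray());
        // Application-level convention (not interpreted by the service):
        // flag the payload so consumers know to gunzip it.
        event.getProperties().put("content-encoding", "gzip");

        // Placeholder connection details -- substitute your own.
        EventHubProducerClient producer = new EventHubClientBuilder()
            .connectionString("<< CONNECTION STRING >>", "<< EVENT HUB NAME >>")
            .buildProducerClient();
        producer.send(Collections.singletonList(event));
        producer.close();
    }
}
```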
-
@conniey @jsquire - Thanks for the quick response! I was able to publish to Event Hubs via Kafka using gzip compression (similar to the article I linked in the thread starter) and was able to consume via AMQP (Event Hubs Java client) without any additional code to decompress the data. Do we happen to know where this decompression takes place? Is it on the Event Hubs server side, before the Java AMQP client consumes the data? On the portal, I can see the throughput is lower when compression is enabled, so I can confirm that compression is in effect.
-
When you receive the event using a consumer (i.e. EventProcessorClient or EventHubConsumerClient), does the body actually come back decompressed? AFAIK, compression is done client-side. The KafkaProducer does the gzip compression before publishing the bytes to the broker; see configureCompression(Compression) and RecordAccumulator in KafkaProducer. The Event Hubs service stores the bytes as they were sent; it doesn't try to interpret the contents of the published record. To get a meaningful value from the received bytes, you would have to decompress them yourself.
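To show where that client-side compression is switched on, here is a sketch of a KafkaProducer configured for gzip against the Event Hubs Kafka endpoint; the namespace, connection string, and topic/hub name are placeholders, and the SASL settings follow the standard Kafka-on-Event-Hubs setup:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class GzipKafkaProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Event Hubs' Kafka endpoint; "<< NAMESPACE >>" is a placeholder.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "<< NAMESPACE >>.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" password=\"<< CONNECTION STRING >>\";");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // This is where the client-side compression happens: the producer
        // gzips record batches before they leave the client.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("<< EVENT HUB NAME >>", "hello, event hubs"));
        }
    }
}
```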
-
@dennyac : As Connie mentioned, the Kafka client compresses data and sends it to a dedicated Kafka-aware endpoint for Event Hubs. This Kafka compatibility layer understands the Kafka compression semantics and decompresses the data before passing it to the Event Hubs partition to be committed. This works only because the Kafka protocol is in use; Event Hubs itself has no understanding of compression semantics today.
-
According to this link, we have support for gzip for Kafka over Event Hubs.
That link mentions: "While the feature is only supported for Apache Kafka traffic producer and consumer traffic, AMQP consumer can consume compressed Kafka traffic as decompressed messages."
Questions -