3 changes: 3 additions & 0 deletions .gitignore
@@ -19,6 +19,9 @@ buildNumber.properties
.settings/
.project
.classpath
.metals/
.bsp/
.bazelbsp/

# OS
.DS_Store
53 changes: 53 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,58 @@
# Version changelog

## Release v0.2.0

### Native Rust Backend (JNI Migration)
- The SDK now calls the Zerobus Rust SDK through JNI (Java Native Interface) instead of making pure-Java gRPC calls
- The native library is loaded automatically from the classpath or the system library path

### New APIs

**Offset-Based Ingestion API** - Preferred alternative to the CompletableFuture-based API:
- `ZerobusStream.ingestRecordOffset(IngestableRecord)` - Returns offset immediately without future allocation
- `ZerobusStream.ingestRecordsOffset(Iterable)` - Batch ingestion returning `Optional<Long>` (empty for empty batch)
- `ZerobusStream.waitForOffset(long)` - Block until specific offset is acknowledged
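
The offset-based contract above can be sketched with a hypothetical in-memory stand-in (`InMemoryStream` is illustrative only, not an SDK class; the real `ZerobusStream` performs network I/O and acknowledges offsets asynchronously):

```java
import java.util.Optional;

// Hypothetical stand-in mirroring the offset-based contract described above.
class InMemoryStream {
    private long nextOffset = 0;
    private volatile long ackedOffset = -1;

    // Returns the assigned offset immediately, with no future allocation.
    long ingestRecordOffset(String record) {
        long offset = nextOffset++;
        ackedOffset = offset; // acknowledged instantly in this in-memory sketch
        return offset;
    }

    // Batch ingestion: an empty batch yields Optional.empty(),
    // otherwise the offset of the last record in the batch.
    Optional<Long> ingestRecordsOffset(Iterable<String> records) {
        Optional<Long> last = Optional.empty();
        for (String r : records) {
            last = Optional.of(ingestRecordOffset(r));
        }
        return last;
    }

    // Blocks until the given offset is acknowledged (trivial here).
    void waitForOffset(long offset) throws InterruptedException {
        while (ackedOffset < offset) {
            Thread.sleep(1);
        }
    }
}
```

Because `ingestRecordOffset` hands back a plain `long`, a hot ingestion loop avoids allocating a `CompletableFuture` per record and can call `waitForOffset` once per batch instead.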

**JSON Record Support**:
- `IngestableRecord` interface - Unified interface for all record types
- `JsonRecord` class - JSON string wrapper implementing IngestableRecord
- `ProtoRecord<T>` class - Protocol Buffer wrapper implementing IngestableRecord
- `RecordType` enum - Specifies stream serialization format (`PROTO` or `JSON`)
- `StreamConfigurationOptions.setRecordType(RecordType)` - Configure stream for JSON or Proto records
- Both record types work with `ingestRecord()` and `ingestRecordOffset()` methods
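
The record model can be sketched as follows; the interface and class names mirror the list above, but these bodies are hypothetical simplifications rather than SDK source:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical simplification of the unified record interface.
interface IngestableRecord {
    byte[] encode();
}

// A JSON record is a thin wrapper around a JSON string payload.
class JsonRecord implements IngestableRecord {
    private final String json;

    JsonRecord(String json) {
        this.json = json;
    }

    @Override
    public byte[] encode() {
        return json.getBytes(StandardCharsets.UTF_8);
    }
}
```

A stream configured with `StreamConfigurationOptions.setRecordType(RecordType.JSON)` would then accept `JsonRecord` instances through `ingestRecord()` or `ingestRecordOffset()`, while a `RecordType.PROTO` stream would take `ProtoRecord<T>` instances.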

**Batch Operations**:
- `ZerobusStream.ingestRecords(Iterable)` - Ingest multiple records with single acknowledgment
- `ZerobusStream.getUnackedBatches()` - Get unacknowledged records preserving batch grouping
- `EncodedBatch` class - Represents a batch of encoded records

**Arrow Flight Support** (Experimental):
- `ZerobusArrowStream` class - High-performance columnar data ingestion
- `ArrowTableProperties` class - Table configuration with Arrow schema
- `ArrowStreamConfigurationOptions` class - Arrow stream configuration
- `ZerobusSdk.createArrowStream()` - Create Arrow Flight streams
- `ZerobusSdk.recreateArrowStream()` - Recover failed Arrow streams

**New Callback Interface**:
- `AckCallback` interface with `onAck(long offsetId)` and `onError(long offsetId, String message)`
- Provides more detailed error information than the deprecated Consumer-based callback
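
A minimal implementation of the callback shape listed above might look like this (the interface is redeclared here so the sketch is self-contained; in real code you would implement the SDK's own `AckCallback`):

```java
import java.util.ArrayList;
import java.util.List;

// Interface shape as listed in the changelog, redeclared for a
// self-contained sketch.
interface AckCallback {
    void onAck(long offsetId);
    void onError(long offsetId, String message);
}

// Example implementation that records acknowledged offsets and failures.
class TrackingAckCallback implements AckCallback {
    final List<Long> acked = new ArrayList<>();
    final List<String> errors = new ArrayList<>();

    @Override
    public void onAck(long offsetId) {
        acked.add(offsetId);
    }

    @Override
    public void onError(long offsetId, String message) {
        errors.add(offsetId + ": " + message);
    }
}
```

Unlike the deprecated `Consumer<IngestRecordResponse>` callback, the error path here carries both the failing offset and a message, so a caller can retry from a precise position.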

### Deprecated

- `ZerobusStream.ingestRecord(RecordType)` - Use `ingestRecordOffset()` instead. The offset-based API avoids CompletableFuture allocation overhead.
- `ZerobusStream.ingestRecord(IngestableRecord)` - Use `ingestRecordOffset()` instead.
- `ZerobusStream.ingestRecords(Iterable)` - Use `ingestRecordsOffset()` instead.
- `ZerobusStream.getState()` - Stream state is no longer exposed by the native backend. Returns `OPENED` or `CLOSED` only.
- `ZerobusStream.getUnackedRecords()` - Returns empty iterator. Use `getUnackedBatches()` or `getUnackedRecordsRaw()` instead.
- `StreamConfigurationOptions.Builder.setAckCallback(Consumer<IngestRecordResponse>)` - Use `setAckCallback(AckCallback)` instead.
- `ZerobusSdk.setStubFactory()` - gRPC stub factory is no longer used with native backend. Throws `UnsupportedOperationException`.

### Platform Support

- Linux x86_64: Supported
- Windows x86_64: Supported
- macOS: Not yet supported (planned for future release)

## Release v0.1.0

Initial release of the Databricks Zerobus Ingest SDK for Java.
27 changes: 1 addition & 26 deletions NEXT_CHANGELOG.md
@@ -1,38 +1,13 @@
# NEXT CHANGELOG

## Release v0.2.0
## Release v0.3.0

### New Features and Improvements

- Updated Protocol Buffers from 3.24.0 to 4.33.0 for improved performance and latest features
- Updated gRPC dependencies from 1.58.0 to 1.76.0 for enhanced stability and security
- Updated SLF4J logging framework from 1.7.36 to 2.0.17 for modern logging capabilities

### Bug Fixes

### Documentation

- Updated README.md with new dependency versions
- Updated protoc compiler version recommendations
- Updated Logback version compatibility for SLF4J 2.0

### Internal Changes

- Updated maven-compiler-plugin from 3.11.0 to 3.14.1
- All gRPC artifacts now consistently use version 1.76.0

### API Changes

**Breaking Changes**

- **Protocol Buffers 4.x Migration**: If you use the regular JAR (not the fat JAR), you must upgrade to protobuf-java 4.33.0 and regenerate any custom `.proto` files using protoc 4.x
- Download protoc 4.33.0 from: https://github.com/protocolbuffers/protobuf/releases/tag/v33.0
- Regenerate proto files: `protoc --java_out=src/main/java src/main/proto/record.proto`
- Protobuf 4.x is wire-compatible with 3.x, but the generated Java code may differ

- **SLF4J 2.0 Migration**: If you use a logging implementation, you may need to update it:
- `slf4j-simple`: Use version 2.0.17 or later
- `logback-classic`: Use version 1.4.14 or later (for SLF4J 2.0 compatibility)
- `log4j-slf4j-impl`: Use version 2.20.0 or later

**Note**: If you use the fat JAR (`jar-with-dependencies`), all dependencies are bundled and no action is required.