[FEATURE] Cloud native masters #647

@szbr9486

Description

Is your feature request related to a problem? Please describe.

The current master node group employs a combination of the Raft consensus algorithm and Journal Node components to ensure data consistency. Metadata is persisted locally on the primary nodes using RocksDB. However, this architecture presents the following issues:

Architectural Complexity: The solution introduces dependencies on both a specific Raft protocol implementation and dedicated Journal Node components.

Lack of Elastic Scalability: The number of master nodes must be fixed at initial cluster deployment and cannot be adjusted afterward, so the master layer cannot scale elastically.

Describe the solution you'd like
The objective is to refactor the data synchronization mechanism between master nodes into a cloud-native model. This involves utilizing cloud-based object storage as a shared data foundation to facilitate backup and data transfer between the primary and standby masters. The core concepts are as follows:

Metadata Persistence: Master nodes will continue to use RocksDB for local storage of metadata.

Durability via Object Storage: The primary master will periodically persist snapshots of its RocksDB data and its operational write-ahead logs (WAL) to the designated object storage.
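
A minimal sketch of what the primary side of this could look like, assuming the object store is exposed as a simple put-style key-value interface (a Python dict stands in for e.g. an S3 bucket here); the names `upload_snapshot` and `upload_wal_segment` are illustrative, not existing project APIs:

```python
import json

object_store = {}  # stand-in for cloud object storage (e.g. an S3 bucket)

def upload_snapshot(snapshot_seq, kv_state):
    # Persist a full copy of the local (RocksDB-like) metadata state
    # under a zero-padded sequence key so lexicographic order == numeric order.
    object_store[f"snapshots/{snapshot_seq:010d}"] = json.dumps(kv_state)

def upload_wal_segment(seq, entries):
    # Persist the WAL entries written after the last snapshot,
    # keyed by their sequence number.
    object_store[f"wal/{seq:010d}"] = json.dumps(entries)

# Primary: checkpoint the current state, then ship subsequent WAL segments.
state = {"volume/a": "meta1"}
upload_snapshot(0, state)
upload_wal_segment(1, [("put", "volume/b", "meta2")])
upload_wal_segment(2, [("del", "volume/a", None)])
```

The zero-padded sequence keys let a reader list and sort keys to find the newest snapshot and the WAL segments that follow it.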

State Synchronization: Standby masters will maintain data consistency by sequentially reading and applying the RocksDB snapshots and subsequent WAL entries from the shared object storage.
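
The standby side can then be sketched as: load the newest snapshot, then replay every WAL segment with a higher sequence number, in order. This is a hypothetical illustration against the same dict-as-object-store layout (snapshot and WAL keys carrying zero-padded sequence numbers), not actual project code:

```python
import json

def restore_standby(object_store):
    # Locate the newest snapshot (keys sort lexicographically == numerically
    # thanks to zero-padding) and load it as the base state.
    snap_keys = sorted(k for k in object_store if k.startswith("snapshots/"))
    latest = snap_keys[-1]
    snap_seq = int(latest.split("/")[1])
    state = json.loads(object_store[latest])

    # Replay WAL segments strictly newer than the snapshot, in sequence order.
    for key in sorted(k for k in object_store if k.startswith("wal/")):
        if int(key.split("/")[1]) <= snap_seq:
            continue  # already covered by the snapshot
        for op, k, v in json.loads(object_store[key]):
            if op == "put":
                state[k] = v
            elif op == "del":
                state.pop(k, None)
    return state
```

Because any standby can rebuild its state this way from shared storage alone, new masters can join without coordinating with the existing quorum.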

With the architecture described above, the master layer becomes fully elastic: because any standby can catch up from the shared object storage alone, master nodes can be added or removed dynamically after initial deployment.


Labels: enhancement (New feature or request)
