
[bitnami/zookeeper] The data was lost when zookeeper expanded from 1 replicas to 3 replicas #32542

Closed
fured opened this issue Mar 21, 2025 · 4 comments
Assignees: fmulero
Labels: solved, tech-issues, zookeeper

Comments

fured commented Mar 21, 2025

Name and Version

latest

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. Install zookeeper with replicaCount = 1
  2. Add data to zookeeper
  3. helm upgrade .... --set replicaCount=3
  4. Check previously added data
    Result: the previously added data is lost and a new cluster is created
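
A minimal sketch of the steps above, assuming the bitnami/zookeeper chart with its default pod names (the release name zk and the znode /my-test are illustrative, not taken from the actual environment):

$ helm install zk bitnami/zookeeper --set replicaCount=1
$ kubectl exec zk-zookeeper-0 -- zkCli.sh create /my-test mydata
$ helm upgrade zk bitnami/zookeeper --set replicaCount=3
# once all three pods are ready:
$ kubectl exec zk-zookeeper-0 -- zkCli.sh get /my-test
# observed result: Node does not exist: /my-test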

Are you using any custom parameters or values?

No response

What is the expected behavior?

No response

What do you see instead?

nothing

Additional information

No response

fured added the tech-issues label Mar 21, 2025
github-actions bot added the triage label Mar 21, 2025
javsalgar changed the title to [bitnami/zookeeper] The data was lost when zookeeper expanded from 1 replicas to 3 replicas Mar 24, 2025
github-actions bot removed the triage label Mar 24, 2025
github-actions bot assigned fmulero and unassigned javsalgar Mar 24, 2025
fmulero (Collaborator) commented Mar 26, 2025

Hi @fured, thanks for using Bitnami charts.

I've just reproduced your issue (upgrading the cluster from 1 to 3 replicas), but I think it is related to the quorum configuration (not sure). When I run the upgrade in steps that keep the quorum, no data is lost. Here are the steps I followed:

$ helm install zk bitnami/zookeeper
...
$ kubectl exec -it zk-zookeeper-0 -- zkCli.sh
/opt/bitnami/java/bin/java
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null zxid: -1
[zk: localhost:2181(CONNECTED) 0] create /fmulero_test mydata
Created /fmulero_test
[zk: localhost:2181(CONNECTED) 1] get /fmulero_test
mydata
...
$ helm upgrade zk bitnami/zookeeper --set replicaCount=2
...
$ kubectl exec -it zk-zookeeper-1 -- zkCli.sh
/opt/bitnami/java/bin/java
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null zxid: -1
[zk: localhost:2181(CONNECTED) 0] get /fmulero_test
mydata
...
$ helm upgrade zk bitnami/zookeeper --set replicaCount=4
...
$ kubectl exec -it zk-zookeeper-3 -- zkCli.sh
/opt/bitnami/java/bin/java
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null zxid: -1
[zk: localhost:2181(CONNECTED) 0] get /fmulero_test
mydata

Please check the ZooKeeper documentation for more information.
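
In case it helps to confirm the quorum after each step, a quick sketch (assuming the default pod names and that zkServer.sh is on the PATH, like zkCli.sh above):

$ for i in 0 1 2 3; do kubectl exec zk-zookeeper-$i -- zkServer.sh status; done

Each member should report Mode: leader or Mode: follower once it has joined the ensemble.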

fured (Author) commented Mar 27, 2025

Hi @fmulero, thanks for your reply.

Scaling ZooKeeper from 1 to 2 to 3 replicas is fine, but scaling directly from 1 to 3 replicas loses data.
Here are the steps I followed:

$ helm install zk zookeeper-13.7.4.tgz --set persistence.storageClass="local-storage"
$ kubectl exec zk-zookeeper-0 -it -- zkCli.sh
/opt/bitnami/java/bin/java
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null zxid: -1
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] create /fured-test mydata
Created /fured-test
[zk: localhost:2181(CONNECTED) 2] get /fured-test
mydata
[zk: localhost:2181(CONNECTED) 3] get /zookeeper/config

[zk: localhost:2181(CONNECTED) 4] quit

$ helm upgrade zk zookeeper-13.7.4.tgz --set persistence.storageClass="local-storage" --set replicaCount=3
$ kubectl exec zk-zookeeper-0 -it -- zkCli.sh
/opt/bitnami/java/bin/java
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null zxid: -1
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] get /fured-test
Node does not exist: /fured-test
[zk: localhost:2181(CONNECTED) 2] get /fured-test
Node does not exist: /fured-test
[zk: localhost:2181(CONNECTED) 3] get /zookeeper/config
server.1=zk-zookeeper-0.zk-zookeeper-headless.zk-new-3.svc.cluster.local:2888:3888:participant;0.0.0.0:2181
server.2=zk-zookeeper-1.zk-zookeeper-headless.zk-new-3.svc.cluster.local:2888:3888:participant;0.0.0.0:2181
server.3=zk-zookeeper-2.zk-zookeeper-headless.zk-new-3.svc.cluster.local:2888:3888:participant;0.0.0.0:2181
version=0
[zk: localhost:2181(CONNECTED) 4] quit
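
One way to dig a bit further (the data directory path here is an assumption based on the chart defaults, not verified) is to check what is still on the original volume after the upgrade:

$ kubectl exec zk-zookeeper-0 -- cat /bitnami/zookeeper/data/myid
$ kubectl exec zk-zookeeper-0 -- ls /bitnami/zookeeper/data/version-2

If the snapshot and transaction log files are still there, the data was not wiped from disk, which would point to a quorum/leader-election effect rather than the volume being recreated.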

fmulero (Collaborator) commented Mar 28, 2025

Thanks @fured.

As I mentioned, I could reproduce your issue, but I think it is related to the cluster quorum: you can go from 1 to 2 replicas and from 2 to 4 (keeping the quorum) without losing data. Please take a look at the ZooKeeper documentation and correct me if I am wrong.
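
For what it is worth, a workaround sketch based on that reasoning, reusing the release name and values from the transcript above, would be to grow the ensemble one replica at a time and wait for each new pod to join before the next upgrade:

$ helm upgrade zk zookeeper-13.7.4.tgz --set persistence.storageClass="local-storage" --set replicaCount=2
# wait until zk-zookeeper-1 is Ready and has joined the ensemble, then:
$ helm upgrade zk zookeeper-13.7.4.tgz --set persistence.storageClass="local-storage" --set replicaCount=3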

fured (Author) commented Mar 31, 2025

Thanks @fmulero
