This repository has been archived by the owner on Jan 16, 2025. It is now read-only.
Suppose we have two parallel ganesha heads. One ganesha is then shut down, either by initiating a shutdown via dbus or by sending it a SIGTERM. Eventually, ctdb notices this, migrates the IP address, and puts the other nodes into grace. This is racy.
In either case, ganesha tears down all of its state during shutdown, including any file locks it holds. By the time we notice that a ganesha has gone down, any state held by that ganesha has already been released. This opens a window in which clients of other nodes in the cluster can race in and grab that state.
Note that ganesha now has a new prepare_shutdown export operation that we're using with FSAL_CEPH to avoid tearing down state on the MDS on a clean shutdown. You may want to add something similar to FSAL_GLUSTER.
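The idea behind a prepare_shutdown-style hook can be sketched as follows. This is a hedged model, not ganesha's FSAL interface: the `Export` class, `preserve_state` flag, and method names are all assumptions made for illustration. The essential behavior is that a clean shutdown flags the export so teardown leaves backend state (such as locks held on the MDS or on Gluster) in place for later reclaim.

```python
# Sketch of a prepare_shutdown-style hook: on clean shutdown, the export
# is told NOT to release backend state, so locks survive until surviving
# nodes enter grace and clients reclaim them.
# All names here are hypothetical; this is not ganesha's actual FSAL API.

class Export:
    def __init__(self, name):
        self.name = name
        # Models state held on the backend (e.g. locks on the MDS).
        self.backend_state = {"locks": ["/export/file"]}
        self.preserve_state = False

    def prepare_shutdown(self):
        # Called before teardown on a clean shutdown (dbus / SIGTERM):
        # flag the export so release() leaves backend state in place.
        self.preserve_state = True

    def release(self):
        # Normal teardown would drop all backend state, opening the
        # race window; with the flag set, the state is preserved.
        if not self.preserve_state:
            self.backend_state["locks"].clear()

export = Export("example-export")
export.prepare_shutdown()
export.release()
print(export.backend_state["locks"])  # ['/export/file']: locks survive
```

A FSAL_GLUSTER analogue would presumably do the same thing in its export teardown path: skip releasing Gluster-side lock state when the shutdown is clean, so the reclaim window is preserved.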