pacemaker status non available #43

Open
phrancesco opened this issue Dec 3, 2014 · 8 comments

@phrancesco

I saw an old bug about a timeout, but I'm still having this issue on two different clusters, both of them with SAN disks attached.

On both installations I just see "waiting for pacemaker".

I used these commands on one server:
/usr/local/bin/lcmc-gui-helper-1.7.3 get-disk-info
/usr/local/bin/lcmc-gui-helper-1.7.3 get-vg-info
/usr/local/bin/lcmc-gui-helper-1.7.3 get-filesystems-info
/usr/local/bin/lcmc-gui-helper-1.7.3 get-mount-point-info
/usr/local/bin/lcmc-gui-helper-1.7.3 installation-info
/usr/local/bin/lcmc-gui-helper-1.7.3 get-net-info
/usr/local/bin/lcmc-gui-helper-1.7.3 get-vm-info
/usr/local/bin/lcmc-gui-helper-1.7.3 get-cluster-events
/usr/local/bin/lcmc-gui-helper-1.7.3 get-drbd-events
/usr/local/bin/lcmc-gui-helper-1.7.3 4 hw-info

All of them (except the loops) exit in a reasonable time (< 30 sec); only this one

/usr/local/bin/lcmc-gui-helper-1.7.3 4 hw-info

unknown command: 4 at /usr/local/bin/lcmc-gui-helper-1.7.3 line 201.

exits with an error.

Any hints?

Then in the GUI, the "crm shell" button exits with this error:

bash: /usr/sbin/crm: No such file or directory

We are using CentOS 6.5 with the latest updates (vault repo for 6.5), cman/pacemaker, and corosync as plugin.

Thanks

@rasto
Owner

rasto commented Dec 4, 2014

There's a typo in /usr/local/bin/lcmc-gui-helper-1.7.3 4 hw-info
It should be:
/usr/local/bin/lcmc-gui-helper-1.7.3 hw-info

@phrancesco
Author

OK, you're right, there is a typo.

[root@xxxx ~]# time /usr/local/bin/lcmc-gui-helper-1.7.3 hw-info
real 0m0.344s
user 0m0.117s
sys 0m0.122s

I really think it's fast enough.

Apart from the typo, I suppose there is also a bug, because the problem persists:
the cluster is working, but LCMC states "pacemaker status non available".

Kind Regards

@xMAnton

xMAnton commented May 9, 2015

Any news about it?

I have the same issue here. Pacemaker is running on both nodes and 'pcs status' seems ok.

[root@st01 bin]# ps auxww|grep pace
root 2182 0.0 0.0 130412 7420 ? Ss mag08 0:04 /usr/sbin/pacemakerd -f
haclust+ 2272 0.0 0.0 135928 16756 ? Ss mag08 0:04 /usr/libexec/pacemaker/cib
root 2273 0.0 0.0 133876 8064 ? Ss mag08 0:04 /usr/libexec/pacemaker/stonithd
root 2274 0.0 0.0 102856 5344 ? Ss mag08 0:05 /usr/libexec/pacemaker/lrmd
haclust+ 2275 0.0 0.0 124692 7620 ? Ss mag08 0:04 /usr/libexec/pacemaker/attrd
haclust+ 2276 0.0 0.0 114820 4536 ? Ss mag08 0:04 /usr/libexec/pacemaker/pengine
haclust+ 2277 0.0 0.0 143048 8868 ? Ss mag08 0:05 /usr/libexec/pacemaker/crmd

[root@st02 ~]# ps axuww|grep pace
root 2100 0.0 0.0 130408 7420 ? Ss mag08 0:04 /usr/sbin/pacemakerd -f
haclust+ 2186 0.0 0.0 132388 14836 ? Ss mag08 0:04 /usr/libexec/pacemaker/cib
root 2187 0.0 0.0 133876 8212 ? Ss mag08 0:04 /usr/libexec/pacemaker/stonithd
root 2188 0.0 0.0 102856 5340 ? Ss mag08 0:04 /usr/libexec/pacemaker/lrmd
haclust+ 2190 0.0 0.0 124688 7604 ? Ss mag08 0:04 /usr/libexec/pacemaker/attrd
haclust+ 2191 0.0 0.0 151480 21740 ? Ss mag08 0:04 /usr/libexec/pacemaker/pengine
haclust+ 2193 0.0 0.0 184080 10252 ? Ss mag08 0:05 /usr/libexec/pacemaker/crmd

[root@st01 bin]# pcs status
Cluster name: storage
Last updated: Sat May 9 03:04:18 2015
Last change: Sat May 9 01:53:58 2015
Stack: corosync
Current DC: st02 (2) - partition with quorum
Version: 1.1.12-a14efad
2 Nodes configured
1 Resources configured

Online: [ st01 st02 ]

Full list of resources:

ClusterIP (ocf::heartbeat:IPaddr2): Started st01

PCSD Status:
st01: Online
st02: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

@rasto
Owner

rasto commented May 12, 2015

What does the

/usr/local/bin/lcmc-gui-helper-1.7.8 get-cluster-events

say?

@marino-mrc

I have the same issue, but I cannot find the lcmc-gui-helper-1.7.8 executable. How can I install it? How can I solve this issue? I have other clusters where LCMC works well (and yes, I've already double-checked the configurations).

@rebell3x6

Hi guys,

I am facing the same problem. I have set up two CentOS 7 machines. Everything is installed fine, but the Pacemaker status is still unavailable.

This is the output of /usr/local/bin/lcmc-gui-helper-1.7.11 get-cluster-events:

 [root@node1 ~]# /usr/local/bin/lcmc-gui-helper-1.7.11 get-cluster-events
---reset---
---start---
res_status
ok

>>>res_status
cibadmin
ok
<pcmk>
<cib crm_feature_set="3.0.10" validate-with="pacemaker-2.3" epoch="15" num_updates="9" admin_epoch="0" cib-last-written="Thu Aug  4 14:43:17 2016" update-origin="node1" update-client="cibadmin" update-user="hacluster" have-quorum="1" dc-uuid="1">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.13-10.el7_2.2-44eb2dd"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="mycluster"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="node1"/>
      <node id="2" uname="node2"/>
    </nodes>
    <resources/>
    <constraints/>
    <rsc_defaults>
      <meta_attributes id="rsc_defaults-options">
        <nvpair id="rsc_defaults-options-resource-stickiness" name="resource-stickiness" value="100"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
  <status>
    <node_state id="2" uname="node2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
      <lrm id="2">
        <lrm_resources/>
      </lrm>
      <transient_attributes id="2">
        <instance_attributes id="status-2">
          <nvpair id="status-2-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
    <node_state id="1" uname="node1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
      <lrm id="1">
        <lrm_resources/>
      </lrm>
      <transient_attributes id="1">
        <instance_attributes id="status-1">
          <nvpair id="status-1-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
  </status>
</cib>
</pcmk>
>>>cibadmin
---done---

@mackot

mackot commented Feb 12, 2021

On CentOS 8 with one interface, same problem, but I found a solution in my case: the host must be set to the short hostname without the domain (not the FQDN), and the short hostname must be used as the primary name in /etc/hosts, Pacemaker, pcs, etc.
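A minimal sketch of that workaround, assuming two nodes named st01/st02 and the 10.0.0.x addresses as placeholders (your names and addresses will differ):

```shell
# On each node, set the short hostname (no domain part).
hostnamectl set-hostname st01

# In /etc/hosts, put the short name first so it resolves as the
# primary name; the FQDN becomes an alias:
#
#   10.0.0.1   st01 st01.example.com
#   10.0.0.2   st02 st02.example.com

# Verify: both should print the short name, without a domain.
hostname -s
uname -n
```

The same short names then have to be used consistently in the corosync/pcs cluster configuration and when adding the hosts in LCMC.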

@jbanchic

Debian 9. The problem of the Pacemaker status being unavailable for hosts with long (FQDN) names is solved by rebuilding the cluster, specifying FQDNs instead of IP addresses:

  1. Delete the resources, then delete the cluster.
  2. Recreate the cluster using the FQDN (important!) host names.
  3. Add the hosts to LCMC using the FQDN (important!) host names. Add the previously created cluster to LCMC.
  4. Connect. Enjoy full access to the cluster.

LCMC 1.7.14/Pacemaker 1.1.24 used.
