ClusterLabs/pcs

Clvmd clone failed to start

Closed this issue · 2 comments

[root@node01:/var/log]

pcs status

Cluster name: ha_cluster
Stack: corosync
Current DC: node02 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Fri Jul 21 21:03:31 2023
Last change: Fri Jul 21 20:35:40 2023 by root via cibadmin on node01

3 nodes configured
10 resource instances configured

Online: [ node01 node02 node03 ]

Full list of resources:

scsi-shooter (stonith:fence_scsi): Started node02
Clone Set: dlm-clone [dlm]
Started: [ node01 node02 node03 ]
Clone Set: clvmd-clone [clvmd]
Stopped: [ node01 node02 node03 ]
Clone Set: fs_gfs2-clone [fs_gfs2]
Stopped: [ node01 node02 node03 ]

Failed Resource Actions:

  • clvmd_start_0 on node02 'not running' (7): call=23, status=complete, exitreason='',
    last-rc-change='Fri Jul 21 20:00:06 2023', queued=1ms, exec=1485ms
  • clvmd_start_0 on node01 'not running' (7): call=23, status=complete, exitreason='',
    last-rc-change='Fri Jul 21 20:40:50 2023', queued=0ms, exec=1713ms
  • clvmd_start_0 on node03 'not running' (7): call=37, status=complete, exitreason='',
    last-rc-change='Fri Jul 21 20:21:27 2023', queued=0ms, exec=1560ms

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

Hello @1392273211,
from the given status I can only see that there is an issue with
the ocf:heartbeat:clvm resource agent is not running.
This could happen for various reasons.

We need more information to find out why. The best way to provide more
logs is the crm_report(8) utility provided by pacemaker.

  1. Stop the cluster.
    pcs cluster stop --all
  2. Save the current time.
    FROM=$(date +"%Y-%m-%d %H:%M:%S")
  3. Start the cluster and wait until the clvmd resource fails.
    pcs cluster start --all --wait
  4. Check the status.
    pcs status
  5. Run the crm_report utility.
    crm_report --from "$FROM"

A pcmk-*.tar.bz2 archive with the logs will be created.

Since this is not a pcs-specific issue, the best way to ask for help would be to
write to the ClusterLabs users mailing list.

Do not forget to attach an archive with logs.

I hope I have been helpful.

I'm closing this issue for inactivity. Feel free to reopen it if you have further questions.