AIX HACMP: show cluster information and resource group status


Standard cluster information
# /usr/es/sbin/cluster/utilities/clRGinfo

-----------------------------------------------------------------------------
Group Name        Group State         Node
-----------------------------------------------------------------------------
<rg name>         ONLINE              <hostname1>
                  OFFLINE             <hostname2>
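A minimal sketch of scripting against this output, assuming the two-column-plus-node format shown above; the sample text and the names `rg_app1`, `nodeA`, `nodeB` are hypothetical placeholders for a live `clRGinfo` call.

```shell
# Sample clRGinfo output standing in for:
#   clrginfo_output=$(/usr/es/sbin/cluster/utilities/clRGinfo)
clrginfo_output='-----------------------------------------------------------------------------
Group Name        Group State         Node
-----------------------------------------------------------------------------
rg_app1           ONLINE              nodeA
                  OFFLINE             nodeB'

# Print each group that is ONLINE and the node hosting it.
printf '%s\n' "$clrginfo_output" |
  awk '$2 == "ONLINE" { print $1, "is online on", $3 }'
```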

 

# /usr/es/sbin/cluster/utilities/clshowres

Resource Group Name                            <rg name>
Participating Node Name(s)                     <hostname1> <hostname2>
Startup Policy                                 Online On First Available Node
Fallover Policy                                Fallover To Next Priority Node In The List
Fallback Policy                                Never Fallback
Site Relationship                              ignore
Node Priority
Service IP Label                               <name>
Filesystems                                    ALL
Filesystems Consistency Check                  fsck
Filesystems Recovery Method                    sequential
Filesystems/Directories to be exported (NFSv3)
Filesystems/Directories to be exported (NFSv4)
Filesystems to be NFS mounted
Network For NFS Mount
Filesystem/Directory for NFSv4 Stable Storage
Volume Groups                                  <vg name> <vg name2>

Concurrent Volume Groups
Use forced varyon for volume groups, if necessary false
...
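A hedged sketch of pulling a single field out of this label/value output; the sample text and volume group names `datavg`/`logvg` are placeholders for a live `clshowres` call.

```shell
# Sample clshowres output standing in for:
#   clshowres_output=$(/usr/es/sbin/cluster/utilities/clshowres)
clshowres_output='Resource Group Name                            rg_app1
Volume Groups                                  datavg logvg'

# Extract the value of the "Volume Groups" field.
printf '%s\n' "$clshowres_output" |
  sed -n 's/^Volume Groups[ ]*//p'
```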

# /usr/sbin/cluster/clstat -o

clstat - HACMP Cluster Status Monitor
-------------------------------------

Cluster: <cluster name> (1255127696)
Wed May 19 08:07:10 DFT 2010
State: UP Nodes: 2
SubState: UNSTABLE

Node: <hostname1> State: UP
Interface: hostname1-boot (2) Address: 10.17.113.42
State: UP
Interface: hostname1_hb01 (0) Address: 0.0.0.0
State: UP
Interface: hostname1_hb02 (1) Address: 0.0.0.0
State: UP
Resource Group: <RG1> State: On line

Node: <hostname2> State: UP
Interface: hostname2-boot (2) Address: 10.17.113.18
State: UP
Interface: hostname2_hb01 (0) Address: 0.0.0.0
State: UP
Interface: hostname2_hb02 (1) Address: 0.0.0.0
State: UP

Resource Group: <RG2> State: On line
Resource Group: <RG3> State: Releasing

$ lssrc -ls clstrmgrES
Current state: ST_STABLE
sccsid = "@(#)36    1.135.6.1 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 53haes_r610, 1135G_hacmp610 11/30/11 08:50:54"
i_local_nodeid 0, i_local_siteid -1, my_handle 2
ml_idx[2]=0     ml_idx[3]=1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 11
local node vrmf is 6107
cluster fix level is "7"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId - 2  NodeName - <node 1>
    PgSpFree = 1487660  PvPctBusy = 0  PctTotalTimeIdle = 77.616740
DNP Values for NodeId - 3  NodeName - <node 2>
    PgSpFree = 1599646  PvPctBusy = 0  PctTotalTimeIdle = 95.570370

lssrc -ls clstrmgrES  shows whether the cluster is STABLE, the cluster version, and the Dynamic Node Priority (DNP) values (free paging space, disk busy, CPU idle):
                ST_STABLE: cluster services running with resources online
                NOT_CONFIGURED: cluster is not configured or node is not synced
                ST_INIT: cluster is configured but not active on this node
                ST_JOINING: cluster node is joining the cluster
                ST_VOTING: cluster nodes are voting to decide event execution
                ST_RP_RUNNING: cluster is running a recovery program
                RP_FAILED: recovery program event script is failed
                ST_BARRIER: clstrmgr is in between events waiting at the barrier
                ST_CBARRIER: clstrmgr is exiting a recovery program
                ST_UNSTABLE: cluster is unstable usually due to an event error
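The states above can be checked from a script; a minimal sketch, where the sample text stands in for a live `lssrc -ls clstrmgrES` call:

```shell
# Sample output standing in for:
#   lssrc_output=$(lssrc -ls clstrmgrES)
lssrc_output='Current state: ST_STABLE
CLversion: 11'

# Pull out the value of the "Current state:" line.
state=$(printf '%s\n' "$lssrc_output" | awk -F': ' '/^Current state:/ { print $2 }')

case "$state" in
  ST_STABLE) echo "cluster manager stable" ;;
  ST_INIT)   echo "configured but inactive on this node" ;;
  *)         echo "needs attention: $state" ;;
esac
```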

lssrc -ls topsvcs     shows the status of individual diskhb devices, heartbeat intervals, failure cycle (missed heartbeats)
lssrc -ls grpsvcs     gives info about connected clients and the number of groups
lssrc -ls emsvcs      shows the resource monitors known to the event management subsystem
lssrc -ls snmpd       shows info about snmpd
halevel -s            shows the PowerHA level (available from PowerHA 6.1)

# /usr/es/sbin/cluster/utilities/cltopinfo

Cluster Name: cluster1
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
There are 2 node(s) and 3 network(s) defined

NODE node1:
  Network net_ether_01
          cluster 192.168.53.2
          node1 192.168.53.8
          node1_s 192.168.49.2

  Network net_tmscsi_0
          tmscsi0_node1 /dev/tmscsi0

  Network net_tmscsi_1
          tmscsi1_node1 /dev/tmscsi1

NODE node2:
  Network net_ether_01
          cluster 192.168.53.2
          node2 192.168.53.9
          node2_s 192.168.59.3

  Network net_tmscsi_0
          tmscsi0_node2 /dev/tmscsi0

  Network net_tmscsi_1
          tmscsi1_node2 /dev/tmscsi1

Resource Group cache
    Startup Policy        Online Using Distribution Policy
    Fallover Policy       Fallover To Next Priority Node In The List
    Fallback Policy       Never Fallback
    Participating Nodes   node1 node2
    Service IP Label      cluster

    Total Heartbeats Missed: 788

Cluster Topology Start Time: 05/25/2009 21:41:14

# clmgr -a state query cluster 

STATE="OFFLINE"

# clmgr -cv -a name,state query node

#NAME:STATE
node1:OFFLINE
node2:OFFLINE

# clmgr -a state q cluster

STATE="STABLE"

# clmgr -cv -a name,state,raw_state q node

# NAME:STATE:RAW_STATE
node1:NORMAL:ST_STABLE
node2:NORMAL:ST_STABLE

# clmgr -cv -a name,state,current_node q rg

# NAME:STATE:CURRENT_NODE
appAgroup:ONLINE:node1
appBgroup:ONLINE:node2
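Because `clmgr -cv` emits colon-delimited records, its output is easy to post-process; a minimal sketch, with the sample text mirroring the query output above:

```shell
# Sample output standing in for:
#   clmgr_output=$(clmgr -cv -a name,state,current_node q rg)
clmgr_output='# NAME:STATE:CURRENT_NODE
appAgroup:ONLINE:node1
appBgroup:ONLINE:node2'

# Skip the comment header, then report each group's node and state.
printf '%s\n' "$clmgr_output" |
  awk -F: '!/^#/ { printf "%s -> %s (%s)\n", $1, $3, $2 }'
```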

Move the appAgroup resource group to node2 with the command:

# clmgr mv rg appAgroup node=node2
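After the move, it is worth polling until the group actually reports ONLINE on the target node. A hedged sketch: `query_rg` below is a stub standing in for `clmgr -cv -a name,state,current_node query resource_group appAgroup`.

```shell
# Stub simulating the clmgr query after a successful move; replace with
# the real clmgr call on a live cluster.
query_rg() { echo 'appAgroup:ONLINE:node2'; }

# Poll a few times until the group is ONLINE on node2.
for attempt in 1 2 3; do
  state=$(query_rg | awk -F: '{ print $2 ":" $3 }')
  if [ "$state" = "ONLINE:node2" ]; then
    echo "move complete"
    break
  fi
  sleep 5
done
```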

List the cluster resource settings for each resource:

/usr/es/sbin/cluster/utilities/cllsres
APPLICATIONS="<application name>"
FILESYSTEM=""
FORCED_VARYON="false"
FSCHECK_TOOL="fsck"
FS_BEFORE_IPADDR="false"
RECOVERY_METHOD="sequential"
SERVICE_LABEL="<service label>"
SSA_DISK_FENCING="false"
VG_AUTO_IMPORT="false"
VOLUME_GROUP="<volume group> <volume group> ..."
USERDEFINED_RESOURCES=""

Show the start and stop scripts for each application server:

/usr/es/sbin/cluster/utilities/cllsserv -h -c
#Name:Start_script:Stop_script
<application name>:/usr/sbin/cluster/local/<start_script>.sh:/usr/sbin/cluster/local/<stop_script>.sh
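The `cllsserv -h -c` output is also colon-delimited, so the script paths can be extracted the same way; a sketch where the application name and script paths are placeholders:

```shell
# Sample output standing in for:
#   cllsserv_output=$(/usr/es/sbin/cluster/utilities/cllsserv -h -c)
cllsserv_output='#Name:Start_script:Stop_script
app1:/usr/sbin/cluster/local/start_app1.sh:/usr/sbin/cluster/local/stop_app1.sh'

# Skip the header and print each application's start and stop scripts.
printf '%s\n' "$cllsserv_output" |
  awk -F: '!/^#/ { print $1 ": start=" $2 " stop=" $3 }'
```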