Pacemaker - Building a 2-node cluster
Instead of using fence-xvm, this guide implements an ipmilan fencing setup, the configuration most commonly used in real deployments, on top of libvirt by way of vBMC.
Please use it as a reference while following along.
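If virtualbmc is not yet installed on the hypervisor host, here is a minimal sketch of getting it in place (assuming a pip-based install; exact prerequisites and whether the vbmcd daemon is needed depend on the virtualbmc version):
# pip install virtualbmc    # provides the vbmc CLI (and vbmcd in recent releases)
# vbmcd                     # recent releases require this daemon to be running before vbmc add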
ENV Version info
- CentOS Linux release 7.9.2009 (Core)
- pcs-0.9.169-3.el7.centos.1.x86_64
vBMC configuration
Add vbmc endpoints for the pacemaker nodes (host)
# vbmc add --username admin --password testtest --port 6161 --libvirt-uri qemu:///system virt-go-c79-161
# vbmc add --username admin --password testtest --port 6162 --libvirt-uri qemu:///system virt-go-c79-162
# vbmc start virt-go-c79-161
# vbmc start virt-go-c79-162
vbmc list (host)
# vbmc list
+-----------------+---------+---------+------+
| Domain name     | Status  | Address | Port |
+-----------------+---------+---------+------+
| virt-go-c79-161 | running | ::      | 6161 |
| virt-go-c79-162 | running | ::      | 6162 |
+-----------------+---------+---------+------+
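Before wiring these endpoints into the cluster, it is worth confirming that each vBMC actually answers IPMI requests from the host. A quick check with ipmitool (assuming it is installed), using the same credentials and the host address 192.168.123.1 that the fence devices below will use:
# ipmitool -I lanplus -H 192.168.123.1 -p 6161 -U admin -P testtest power status
# ipmitool -I lanplus -H 192.168.123.1 -p 6162 -U admin -P testtest power status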
Pacemaker Cluster configuration
- /etc/hosts (both)
# cat << EOF >> /etc/hosts
192.168.123.161 node1
192.168.123.162 node2
EOF
- [option] /etc/hosts (both) if using rrp
# cat << EOF >> /etc/hosts
192.168.123.161 node1
192.168.123.162 node2
192.168.123.163 node1-rrp
192.168.123.164 node2-rrp
EOF
- install package (both)
# yum install -y pcs fence-agents-ipmilan vim
- start pcsd service for authentication (both)
# systemctl enable pcsd --now
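If firewalld is running on the nodes, the cluster ports must also be opened on both of them; a sketch using the predefined high-availability service (skip this if the firewall is disabled in your lab):
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload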
- Set cluster user password (both)
# echo "testtest" |passwd --stdin hacluster
- Auth cluster
# pcs cluster auth node1 node2 -u hacluster -p testtest
node1: Authorized
node2: Authorized
- [option] if using rrp
# pcs cluster auth node1 node2 node1-rrp node2-rrp -u hacluster -p testtest
- Set up the pacemaker cluster
# pcs cluster setup --start --enable --name test-pacemaker node1 node2
- [option] if using rrp
# pcs cluster setup --start --enable --name test-pacemaker node1,node1-rrp node2,node2-rrp
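When rrp is used, you can additionally verify on each node that both corosync rings are active and free of faults:
# corosync-cfgtool -s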
- Check cluster status
# pcs status
Cluster name: test-pacemaker

WARNINGS:
No stonith devices and stonith-enabled is not false

Stack: corosync
Current DC: node1 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Fri Jul 16 08:02:53 2021
Last change: Fri Jul 16 08:01:50 2021 by hacluster via crmd on node1

2 nodes configured
0 resource instances configured

Online: [ node1 node2 ]

No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
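Because this is a 2-node cluster, pcs configures corosync with two_node: 1 so that quorum is retained when one node is lost. This can be checked with corosync-quorumtool (the exact flags shown may vary by version):
# corosync-quorumtool -s    # the Flags line should include 2Node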
- Create ipmi fence device
Referring to the vBMC settings configured above, add the fence devices as shown below. Since this will be a 2-node cluster, the delay option is added (on fence-node1 only) to prevent split-brain, so the two nodes cannot fence each other at the same time.
# pcs stonith create fence-node1 fence_ipmilan delay=10 ipaddr=192.168.123.1 ipport=6161 lanplus=1 login=admin passwd=testtest pcmk_host_list=node1
# pcs stonith create fence-node2 fence_ipmilan ipaddr=192.168.123.1 ipport=6162 lanplus=1 login=admin passwd=testtest pcmk_host_list=node2
Check the status of the fence devices.
# pcs stonith show
 fence-node1    (stonith:fence_ipmilan):        Started node1
 fence-node2    (stonith:fence_ipmilan):        Started node2
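Besides pcs stonith show, the fence agent can be called by hand to confirm it reaches each vBMC; a sketch reusing the same parameters as the stonith devices above:
# fence_ipmilan --ip=192.168.123.1 --ipport=6161 --lanplus --username=admin --password=testtest --action=status
# fence_ipmilan --ip=192.168.123.1 --ipport=6162 --lanplus --username=admin --password=testtest --action=status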
Once the fence devices have been registered as resources, you must always verify that they actually work.
Let's start by rebooting node2.
# pcs stonith fence node2
Node: node2 fenced
Logging into node2 confirms that it rebooted as expected. Naturally, the test also has to pass in the other direction.
[node2]# uptime
 08:13:01 up 0 min,  1 user,  load average: 0.19, 0.05, 0.02
fence node1
# pcs stonith fence node1
Node: node1 fenced
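Recent fencing actions can also be reviewed from either node; on pacemaker 1.1 one possible check is stonith_admin (output format differs between versions):
# stonith_admin --history '*'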
Next, register the dummy resources and the vip resource.
# pcs resource create vip IPaddr2 ip=192.168.123.160 cidr_netmask=24
# pcs resource create dummy1 ocf:pacemaker:Dummy
# pcs resource create dummy2 ocf:pacemaker:Dummy
# pcs resource create dummy3 ocf:pacemaker:Dummy
cluster status
# pcs status
Cluster name: test-pacemaker
Stack: corosync
Current DC: node1 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Fri Jul 16 08:19:49 2021
Last change: Fri Jul 16 08:19:26 2021 by root via cibadmin on node1

2 nodes configured
6 resource instances configured

Online: [ node1 node2 ]

Full list of resources:

 fence-node1    (stonith:fence_ipmilan):        Started node1
 fence-node2    (stonith:fence_ipmilan):        Started node2
 vip    (ocf::heartbeat:IPaddr2):       Started node1
 dummy1 (ocf::pacemaker:Dummy): Started node2
 dummy2 (ocf::pacemaker:Dummy): Started node1
 dummy3 (ocf::pacemaker:Dummy): Started node2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
vip status
# ip -o -4 a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 192.168.123.161/24 brd 192.168.123.255 scope global dynamic eth0\       valid_lft 3340sec preferred_lft 3340sec
2: eth0    inet 192.168.123.160/24 brd 192.168.123.255 scope global secondary eth0\       valid_lft forever preferred_lft forever
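To watch the vip actually fail over, you can put node1 into standby, check where the resources land, and then bring it back (pcs 0.9 syntax):
# pcs cluster standby node1
# pcs status
# pcs cluster unstandby node1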
If you have requirements such as certain resources needing to run on the same node, try configuring constraints; resources can also be bundled into a group.
Experiment with both through further practice; a few illustrative commands are shown below.
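A few illustrative commands, using the resource names from this cluster (the group name app-group is made up for the example):
# pcs constraint colocation add dummy1 with vip INFINITY    # keep dummy1 on the same node as vip
# pcs constraint order start vip then start dummy1          # start vip before dummy1
# pcs resource group add app-group vip dummy1               # alternatively, bundle them into a single group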