This repository has been archived by the owner on Feb 8, 2024. It is now read-only.
Node Replacement
Ajay Paratmandali edited this page Jul 29, 2020 · 5 revisions
- <node-name> means a pacemaker cluster node, e.g. srvnode-1, srvnode-2
- <faulty node> is the node that has gone bad
- <new node> is the new node that replaces <faulty node>
- <working node> is a node that is part of the cluster other than <faulty node>
- Power off the faulty node
- Put the node on standby if it is still running
- If the node is already powered off, skip this step
$ pcs cluster standby <faulty-node>
$ poweroff
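A minimal sketch of a guard around the standby step: only put the faulty node on standby if the cluster still reports it online. The node name and the sample `pcs status nodes` output below are assumptions for illustration; on a real cluster node replace the sample with `STATUS="$(pcs status nodes)"` and run the printed command.

```shell
# Hypothetical guard: standby the faulty node only if it is still online.
FAULTY=srvnode-2
# Sample output for illustration; on a real node: STATUS="$(pcs status nodes)"
STATUS='Pacemaker Nodes:
 Online: srvnode-1 srvnode-2
 Standby:
 Offline:'

if printf '%s\n' "$STATUS" | grep 'Online:' | grep -qw "$FAULTY"; then
    ACTION="pcs cluster standby $FAULTY"   # node still up: standby first
else
    ACTION="skip"                          # already down: nothing to do
fi
echo "$ACTION"
```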
- To add an identical node to the cluster, the old node must first be removed
$ pcs cluster node remove <faulty-node>
- Update /etc/hosts with the cluster node IPs
<ip1> srvnode-1
<ip2> srvnode-2
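The /etc/hosts update above can be sketched as an idempotent script: drop any stale srvnode entries, then append the current mapping. The IP values are placeholders, and the script operates on a temporary copy; on a real node point HOSTS_FILE at /etc/hosts and run as root.

```shell
# Hypothetical hosts-file refresh; HOSTS_FILE and the IPs are assumptions.
HOSTS_FILE="$(mktemp)"                       # stand-in for /etc/hosts
printf '127.0.0.1 localhost\n10.0.0.1 srvnode-1\n' > "$HOSTS_FILE"
IP1=10.0.0.1                                 # surviving node
IP2=10.0.0.99                                # IP of the replacement node

# Remove any stale srvnode lines, then append the current mapping.
sed -i '/srvnode-[12]/d' "$HOSTS_FILE"
printf '%s srvnode-1\n%s srvnode-2\n' "$IP1" "$IP2" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```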
- Follow Steps 1 to 3 from PCS Cluster Setup
- Authorize the new node from a working node
$ pcs cluster auth srvnode-1 srvnode-2
- Add the node to the cluster (execute from <working node>)
$ pcs cluster node add <new node>
- After the node is enabled and started:
- All resources will be synced across the nodes
- All rules will be synced on the node
- <new node> will become part of the cluster, replacing <faulty node>
- Enable and start the node
$ pcs resource cleanup --all # clean up resource failure history
$ pcs cluster enable <new-node>
$ pcs cluster start <new-node>
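The working-node side of the whole procedure can be dry-run as one sequence. In this sketch `pcs` is mocked with a shell function that only prints each command, so the order can be checked safely; the node names are assumptions, and on a real <working node> the mock would be removed so the actual `pcs` binary runs.

```shell
# Dry-run sketch of the replacement sequence; `pcs` is mocked, names assumed.
pcs() { echo "pcs $*"; }   # mock: print the command instead of executing it

FAULTY=srvnode-2
NEW=srvnode-2              # identical replacement keeps the same name

LOG="$(
  pcs cluster node remove "$FAULTY"     # drop the dead node first
  pcs cluster auth srvnode-1 srvnode-2  # re-authorize both nodes
  pcs cluster node add "$NEW"           # join the replacement node
  pcs resource cleanup --all            # clean resource failure history
  pcs cluster enable "$NEW"
  pcs cluster start "$NEW"
)"
echo "$LOG"
```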