
Cluster Administration

Chapter 4. Managing Red Hat High Availability Add-On With Conga

This chapter describes various administrative tasks for managing the Red Hat High Availability Add-On and consists of the following sections:
  • Section 4.1, "Adding an Existing Cluster to the luci Interface"
  • Section 4.2, "Removing a Cluster from the luci Interface"
  • Section 4.3, "Managing Cluster Nodes"
  • Section 4.4, "Starting, Stopping, Restarting, and Deleting Clusters"
  • Section 4.5, "Managing High-Availability Services"
  • Section 4.6, "Backing Up and Restoring the luci Configuration"

4.1. Adding an Existing Cluster to the luci Interface

If you have previously created a High Availability Add-On cluster you can easily add the cluster to the luci interface so that you can manage the cluster with Conga.
To add an existing cluster to the luci interface, follow these steps:
  1. Click Manage Clusters from the menu on the left side of the luci Homebase page. The Clusters screen appears.
  2. Click Add. The Add Existing Cluster screen appears.
  3. Enter the node hostname and ricci password for any of the nodes in the existing cluster. Since each node in the cluster contains all of the configuration information for the cluster, this should provide enough information to add the cluster to the luci interface.
  4. Click Connect. The Add Existing Cluster screen then displays the cluster name and the remaining nodes in the cluster.
  5. Enter the individual ricci passwords for each node in the cluster, or enter one password and select Use same password for all nodes.
  6. Click Add Cluster. The previously-configured cluster now displays on the Manage Clusters screen.
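luci can connect to a node only if the ricci agent on that node is running and has a password set. If the connection in step 4 fails, the following minimal sketch prepares a node (run as root; the host name is a placeholder, and the node's firewall must also allow the ricci port, 11111 by default):
    # Install the ricci agent, set the password luci will prompt for,
    # and make sure the agent runs now and at boot:
    [root@node1 ~]# yum install ricci
    [root@node1 ~]# passwd ricci
    [root@node1 ~]# service ricci start
    [root@node1 ~]# chkconfig ricci on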

4.2. Removing a Cluster from the luci Interface

You can remove a cluster from the luci management GUI without affecting the cluster services or cluster membership. If you remove a cluster, you can later add the cluster back, or you can add it to another luci instance, as described in Section 4.1, "Adding an Existing Cluster to the luci Interface".
To remove a cluster from the luci management GUI without affecting the cluster services or cluster membership, follow these steps:
  1. Click Manage Clusters from the menu on the left side of the luci Homebase page. The Clusters screen appears.
  2. Select the cluster or clusters you wish to remove.
  3. Click Remove.
For information on deleting a cluster entirely, stopping all cluster services and removing the cluster configuration information from the nodes themselves, refer to Section 4.4, "Starting, Stopping, Restarting, and Deleting Clusters".

4.3. Managing Cluster Nodes

This section documents how to perform the following node-management functions through the luci server component of Conga:
  • Section 4.3.1, "Rebooting a Cluster Node"
  • Section 4.3.2, "Causing a Node to Leave or Join a Cluster"
  • Section 4.3.3, "Adding a Member to a Running Cluster"
  • Section 4.3.4, "Deleting a Member from a Cluster"

4.3.1. Rebooting a Cluster Node

To reboot a node in a cluster, perform the following steps:
  1. From the cluster-specific page, click on Nodes along the top of the cluster display. This displays the nodes that constitute the cluster. This is also the default page that appears when you click on the cluster name beneath Manage Clusters from the menu on the left side of the luci Homebase page.
  2. Select the node to reboot by clicking the checkbox for that node.
  3. Select the Reboot function from the menu at the top of the page. This causes the selected node to reboot and a message appears at the top of the page indicating that the node is being rebooted.
  4. Refresh the page to see the updated status of the node.
It is also possible to reboot more than one node at a time by selecting all of the nodes that you wish to reboot before clicking on Reboot.
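If you prefer to verify the node's status from a shell rather than refreshing the luci page, you can use the standard status utilities from any active cluster member; a brief illustrative sketch (node names are placeholders):
    # Show member status and the services rgmanager is running:
    [root@node1 ~]# clustat
    # Show low-level membership as seen by cman:
    [root@node1 ~]# cman_tool nodes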

4.3.2. Causing a Node to Leave or Join a Cluster

You can use the luci server component of Conga to cause a node to leave an active cluster by stopping all cluster services on the node. You can also use the luci server component of Conga to cause a node that has left a cluster to rejoin the cluster.
Causing a node to leave a cluster does not remove the cluster configuration information from that node, and the node still appears in the cluster node display with a status of Not a cluster member. For information on deleting the node entirely from the cluster configuration, see Section 4.3.4, "Deleting a Member from a Cluster".
To cause a node to leave a cluster, perform the following steps. This shuts down the cluster software in the node. Making a node leave a cluster prevents the node from automatically joining the cluster when it is rebooted.
  1. From the cluster-specific page, click on Nodes along the top of the cluster display. This displays the nodes that constitute the cluster. This is also the default page that appears when you click on the cluster name beneath Manage Clusters from the menu on the left side of the luci Homebase page.
  2. Select the node you want to leave the cluster by clicking the checkbox for that node.
  3. Select the Leave Cluster function from the menu at the top of the page. This causes a message to appear at the top of the page indicating that the node is being stopped.
  4. Refresh the page to see the updated status of the node.
It is also possible to cause more than one node at a time to leave the cluster by selecting all of the nodes to leave the cluster before clicking on Leave Cluster.
To cause a node to rejoin a cluster, select any nodes you want to have rejoin the cluster by clicking the checkbox for those nodes and selecting Join Cluster. This makes the selected nodes join the cluster, and allows the selected nodes to join the cluster when they are rebooted.
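The Leave Cluster and Join Cluster operations correspond roughly to stopping and starting the cluster software on the node itself. The following is a hedged sketch of the manual equivalent; the gfs2 and clvmd services apply only if you use clustered storage, and the node name is a placeholder:
    # Leave: stop the cluster software from the top of the stack down.
    [root@node3 ~]# service rgmanager stop
    [root@node3 ~]# service gfs2 stop
    [root@node3 ~]# service clvmd stop
    [root@node3 ~]# service cman stop
    # Join: start it again in the reverse order.
    [root@node3 ~]# service cman start
    [root@node3 ~]# service clvmd start
    [root@node3 ~]# service gfs2 start
    [root@node3 ~]# service rgmanager start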

4.3.3. Adding a Member to a Running Cluster

To add a member to a running cluster, follow the steps in this section.
  1. From the cluster-specific page, click Nodes along the top of the cluster display. This displays the nodes that constitute the cluster. This is also the default page that appears when you click on the cluster name beneath Manage Clusters from the menu on the left side of the luci Homebase page.
  2. Click Add. Clicking Add causes the display of the Add Nodes To Cluster dialog box.
  3. Enter the node name in the Node Hostname text box; enter the ricci password in the Password text box. If you are using a different port for the ricci agent than the default of 11111, change this parameter to the port you are using.
  4. Check the Enable Shared Storage Support checkbox if clustered storage is required; this downloads the packages that support clustered storage and enables clustered LVM. You should select this only when you have access to the Resilient Storage Add-On or the Scalable File System Add-On.
  5. If you want to add more nodes, click Add Another Node and enter the node name and password for each additional node.
  6. Click Add Nodes. Clicking Add Nodes causes the following actions:
    1. If you have selected Download Packages, the cluster software packages are downloaded onto the nodes.
    2. Cluster software is installed onto the nodes (or it is verified that the appropriate software packages are installed).
    3. The cluster configuration file is updated and propagated to each node in the cluster - including the added node.
    4. The added node joins the cluster.
    The Nodes page appears with a message indicating that the node is being added to the cluster. Refresh the page to update the status.
  7. When the process of adding a node is complete, click on the node name for the newly-added node to configure fencing for this node, as described in Section 3.6, "Configuring Fence Devices".
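If you administer the cluster from a shell rather than through Conga, the ccs utility provides an analogous operation. The following sketch is illustrative only; the host and node names are placeholders, and ccs must be able to reach the ricci agent on the host you name with -h:
    # Add the new node to the cluster configuration, then push the updated
    # configuration to all nodes and activate it:
    [root@node1 ~]# ccs -h node1.example.com --addnode node4.example.com
    [root@node1 ~]# ccs -h node1.example.com --sync --activate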

4.3.4. Deleting a Member from a Cluster

To delete a member from an existing cluster that is currently in operation, follow the steps in this section. Note that nodes must be stopped before being deleted unless you are deleting all of the nodes in the cluster at once.
  1. From the cluster-specific page, click Nodes along the top of the cluster display. This displays the nodes that constitute the cluster. This is also the default page that appears when you click on the cluster name beneath Manage Clusters from the menu on the left side of the luci Homebase page.

    Note

    To allow services running on a node to fail over when the node is deleted, skip the next step.
  2. Disable or relocate each service that is running on the node to be deleted. For information on disabling and relocating services, see Section 4.5, "Managing High-Availability Services".
  3. Select the node or nodes to delete.
  4. Click Delete. The Nodes page indicates that the node is being removed. Refresh the page to see the current status.

Important

Removing a cluster node from the cluster is a destructive operation that cannot be undone.
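For administrators working from a shell, the ccs utility offers a counterpart to this operation; an illustrative sketch with placeholder names follows. As with the luci procedure, stop the cluster software on the node before removing it:
    # Remove the node from the cluster configuration, then propagate the change:
    [root@node1 ~]# ccs -h node1.example.com --rmnode node4.example.com
    [root@node1 ~]# ccs -h node1.example.com --sync --activate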

4.4. Starting, Stopping, Restarting, and Deleting Clusters

You can start, stop, and restart a cluster by performing these actions on the individual nodes in the cluster. From the cluster-specific page, click on Nodes along the top of the cluster display. This displays the nodes that constitute the cluster.
The start and restart operations for cluster nodes or a whole cluster allow you to create short cluster service outages if a cluster service needs to be moved to another cluster member because it is running on a node that is being stopped or restarted.
To stop a cluster, perform the following steps. This shuts down the cluster software in the nodes, but does not remove the cluster configuration information from the nodes and the nodes still appear in the cluster node display with a status of Not a cluster member.
  1. Select all of the nodes in the cluster by clicking on the checkbox next to each node.
  2. Select the Leave Cluster function from the menu at the top of the page. This causes a message to appear at the top of the page indicating that each node is being stopped.
  3. Refresh the page to see the updated status of the nodes.
To start a cluster, perform the following steps:
  1. Select all of the nodes in the cluster by clicking on the checkbox next to each node.
  2. Select the Join Cluster function from the menu at the top of the page.
  3. Refresh the page to see the updated status of the nodes.
To restart a running cluster, first stop all of the nodes in the cluster, then start all of the nodes in the cluster, as described above.
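For comparison, the ccs utility can stop or start the cluster software on every node with a single command; an illustrative sketch (the host name is a placeholder):
    # Stop the cluster software on all nodes, then start it again:
    [root@node1 ~]# ccs -h node1.example.com --stopall
    [root@node1 ~]# ccs -h node1.example.com --startall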
To delete a cluster entirely, perform the following steps. This causes all cluster services to stop and removes the cluster configuration information from the nodes themselves as well as removing them from the cluster display. If you later try to add an existing cluster using any of the nodes you have deleted, luci will indicate that the node is not a member of any cluster.

Important

Deleting a cluster is a destructive operation that cannot be undone. To restore a cluster after you have deleted it requires that you recreate and redefine the cluster from scratch.
  1. Select all of the nodes in the cluster by clicking on the checkbox next to each node.
  2. Select the Delete function from the menu at the top of the page.
If you wish to remove a cluster from the luci interface without stopping any of the cluster services or changing the cluster membership, you can use the Remove option on the Manage Clusters page, as described in Section 4.2, "Removing a Cluster from the luci Interface".

4.5. Managing High-Availability Services

In addition to adding and modifying a service, as described in Section 3.10, "Adding a Cluster Service to the Cluster", you can perform the following management functions for high-availability services through the luci server component of Conga:
  • Start a service
  • Restart a service
  • Disable a service
  • Delete a service
  • Relocate a service
From the cluster-specific page, you can manage services for that cluster by clicking on Service Groups along the top of the cluster display. This displays the services that have been configured for that cluster.
  • Starting a service - To start any services that are not currently running, select any services you want to start by clicking the checkbox for that service and clicking Start.
  • Restarting a service - To restart any services that are currently running, select any services you want to restart by clicking the checkbox for that service and clicking Restart.
  • Disabling a service - To disable any service that is currently running, select any services you want to disable by clicking the checkbox for that service and clicking Disable.
  • Deleting a service - To delete any services that are not currently running, select any services you want to delete by clicking the checkbox for that service and clicking Delete.
  • Relocating a service - To relocate a running service, click on the name of the service in the services display. This causes the services configuration page for the service to be displayed, with a display indicating on which node the service is currently running.
    From the Start on node... drop-down box, select the node on which you want to relocate the service, and click on the Start icon. A message appears at the top of the screen indicating that the service is being started. You may need to refresh the screen to see the new display indicating that the service is running on the node you have selected.

    Note

    If the running service you have selected is a vm service, the drop-down box will show a migrate option instead of a relocate option.

Note

You can also start, restart, disable or delete an individual service by clicking on the name of the service on the Services page. This displays the service configuration page. At the top right corner of the service configuration page are the same icons for Start, Restart, Disable, and Delete.
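Each of these operations also has a command-line counterpart in the clusvcadm utility shipped with rgmanager. The following sketch is illustrative; the service and node names are placeholders:
    [root@node1 ~]# clusvcadm -e example_apache    # start (enable) a service
    [root@node1 ~]# clusvcadm -R example_apache    # restart a running service
    [root@node1 ~]# clusvcadm -d example_apache    # disable (stop) a service
    [root@node1 ~]# clusvcadm -r example_apache -m node2.example.com    # relocate a service
    [root@node1 ~]# clusvcadm -M example_vm -m node2.example.com    # migrate a vm service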

4.6. Backing Up and Restoring the luci Configuration

As of the Red Hat Enterprise Linux 6.2 release, you can use the following procedure to make a backup of the luci database, which is stored in the /var/lib/luci/data/luci.db file. This is not the cluster configuration itself, which is stored in the cluster.conf file. Instead, it contains the list of users and clusters and related properties that luci maintains. By default, the backup this procedure creates will be written to the same directory as the luci.db file.
  1. Execute service luci stop.
  2. Execute service luci backup-db.
    Optionally, you can specify a file name as a parameter for the backup-db command, which will write the luci database to that file. For example, to write the luci database to the file /root/luci.db.backup, you can execute the command service luci backup-db /root/luci.db.backup. Note, however, that backup files that are written to places other than /var/lib/luci/data/ (for backups whose filenames you specify when using service luci backup-db) will not show up in the output of the list-backups command.
  3. Execute service luci start.
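Because the backup requires luci to be stopped, you may want to script the three steps together. The following is only an illustrative sketch; the script path and any schedule you attach to it are assumptions, not part of the product:
    #!/bin/sh
    # Hypothetical /etc/cron.weekly/luci-backup: snapshot the luci database
    # into /var/lib/luci/data/ while luci is briefly stopped.
    service luci stop
    service luci backup-db
    service luci start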
Use the following procedure to restore a luci database.
  1. Execute service luci stop.
  2. Execute service luci list-backups and note the file name to restore.
  3. Execute service luci restore-db /var/lib/luci/data/lucibackupfile where lucibackupfile is the backup file to restore.
    For example, the following command restores the luci configuration information that was stored in the backup file luci-backup20110923062526.db:
    service luci restore-db /var/lib/luci/data/luci-backup20110923062526.db
  4. Execute service luci start.
If you need to restore a luci database but have lost the host.pem file from the machine on which you created the backup (because of a complete reinstallation, for example), you will need to add your clusters back to luci manually in order to re-authenticate the cluster nodes.
Use the following procedure to restore a luci database onto a different machine than the one on which the backup was created. Note that in addition to restoring the database itself, you also need to copy the SSL certificate file to ensure that luci has been authenticated to the ricci nodes. In this example, the backup is created on the machine luci1 and the backup is restored on the machine luci2.
  1. Execute the following sequence of commands to create a luci backup on luci1 and copy both the SSL certificate file and the luci backup onto luci2.
    [root@luci1 ~]# service luci stop
    [root@luci1 ~]# service luci backup-db
    [root@luci1 ~]# service luci list-backups
    /var/lib/luci/data/luci-backup20120504134051.db
    [root@luci1 ~]# scp /var/lib/luci/certs/host.pem /var/lib/luci/data/luci-backup20120504134051.db root@luci2:
  2. On the luci2 machine, ensure that luci has been installed and is not running. Install the luci package if it is not already installed.
  3. Execute the following sequence of commands to ensure that the authentications are in place and to restore the luci database from luci1 onto luci2.
    [root@luci2 ~]# cp host.pem /var/lib/luci/certs/
    [root@luci2 ~]# chown luci: /var/lib/luci/certs/host.pem
    [root@luci2 ~]# /etc/init.d/luci restore-db ~/luci-backup20120504134051.db
    [root@luci2 ~]# shred -u ~/host.pem ~/luci-backup20120504134051.db
    [root@luci2 ~]# service luci start