Parallels H-Sphere Documentation System Administrator Guide

 

Installation of Load Balanced Web/Mail Clusters in H-Sphere

(H-Sphere 3.0 RC 1+; updated for 3.0 RC 4)
 
 

Related Docs:   Understanding Load Balancing | Implementation of Load Balanced Cluster in H-Sphere | Load Balanced Server Clusters (Admin Guide)

Last modified: 27 Dec 2007

 

WARNING: This documentation covers Parallels H-Sphere versions up to 3.1. For the latest up-to-date Parallels H-Sphere documentation, please proceed to the official Parallels site.

This document explains how to add Web/mail load balanced clusters to H-Sphere 3.0 and up.

WARNING:
Mail server cluster implementation is supported since H-Sphere 3.0 RC 4.

A load balanced cluster solution requires 3 or more physical servers:

  • Load Balancer: any solution, such as Citrix® NetScaler, for load balancing across the web/mail servers. The Load Balancer directs traffic to another server if the first one is currently overloaded.
  • NAS (aka Filer): a server/client shared storage solution for web/mail content. The NAS may be installed on the same server as the load balancer or on a separate server. Also, the Web and mail servers can jointly use one NAS or have their own, one for Web and one for mail. In this documentation we consider the following NAS solutions:

    • Generic Linux NFS
    • NetApp Filer hardware
    • RedHat GFS (HS 3.0 RC 4+)
  • At least two boxes (master and slave) for the web/mail servers. The load balanced solution implies one master server and one or more slave servers.

To create Web/mail load balanced clusters integrated into H-Sphere:

  1. Install and configure Load Balancer
  2. Prepare NAS
  3. Prepare master and slave Web/mail boxes
  4. Install H-Sphere to load balanced Web/mail clusters

 

Step 1. Install and Configure Load Balancer

Purchase, install, and configure a load balancer solution such as Citrix® NetScaler.

 

Step 2. Prepare NAS

 

NetApp Hardware

If you are using NetApp hardware, follow this procedure:

1. Purchase a NetApp NAS from www.netapp.com, then install and configure it according to the NetApp documentation. Create volumes/qtrees on the box where the NetApp NAS is installed.

2. Configure your NetApp NAS to add a load balanced cluster in H-Sphere (see the NetApp manual for command details):

  1. Telnet to the NetApp NAS:

    telnet <NAS_IP>

    Here, <NAS_IP> is the NetApp NAS IP.
  2. Get the list of the NAS partitions with the qtree command:

    # qtree

  3. To enable disk quota management, export the /etc directory on the NetApp NAS and allow it to be mounted only from the CP box:

    # exportfs -o access=<CP_IP>,root=<CP_IP>,rw=<CP_IP> /etc

    Here, <CP_IP> is the CP server IP.
  4. To enable user disk space management on the web/mail servers, export the user storage directory on the NetApp NAS and allow it to be mounted from the physical web/mail boxes:

    # exportfs -o access=<Master_IP>:<Slave1_IP>[:<Slave2_IP>:...],root=<Master_IP>:<Slave1_IP>[:...],rw=<Master_IP>:<Slave1_IP>[:...] <NAS_WebPath>

    Here, <Master_IP>:<Slave1_IP>[:<Slave2_IP>:...] is the list of master and slave web/mail server IPs separated with colons (:), and <NAS_WebPath> is the user storage directory.
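    For example, with the CP server at 192.168.0.8, a master at 192.168.0.100, one slave at 192.168.0.101 (hypothetical addresses), and user storage in /vol/vol0/home, the export commands from steps 3 and 4 might look like this:

    # exportfs -o access=192.168.0.8,root=192.168.0.8,rw=192.168.0.8 /etc
    # exportfs -o access=192.168.0.100:192.168.0.101,root=192.168.0.100:192.168.0.101,rw=192.168.0.100:192.168.0.101 /vol/vol0/home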
  5. Exit telnet session on the NetApp NAS.

3. Prepare NetApp NAS to Work With H-Sphere

  1. Grant rsh access to the NetApp NAS from the CP box for the root and cpanel users.
  2. Grant NFS access to the /etc directory for the CP box in rw mode.
  3. Grant NFS access to the home directory on the storage partition (e.g., /vol/vol0/home) for the CP box in rw mode with root privileges (e.g., -access=192.168.0.9:192.168.0.10,root=192.168.0.8:192.168.0.9:192.168.0.10).
  4. On the CP server, set the QUOTA_MANAGER property in ~cpanel/shiva/psoft_config/hsphere.properties to enable NetApp quota manager support on the LB cluster:

    QUOTA_MANAGER = NET_APP

    More about external quota manager support in H-Sphere.

 

Generic Linux NFS

If you are using Linux NFS shared storage, follow this procedure:

Important: For correct load balanced cluster implementation, NFS must be version 3.

  1. Log in as root to a new Linux server assigned for NAS and create a separate partition for shared file storage. This partition must be on a separate hard drive on a separate controller and must not be /var or /usr. We recommend naming it /filer to avoid possible confusion.

  2. Install/update the quota-3.x package from the following location:

    # rpm -Uvh http://www.psoft.net/shiv/HS/<OSCODE>/sysutils/quota-3.xx-x.i386.rpm

    where <OSCODE> is a mnemonic code for the operating system supported by H-Sphere (see OSCODE notation in H-Sphere packages' download locations).

    Important: The quota package from psoft.net includes NFS support, which is essential for load balanced cluster implementation. The generic quota package has NFS support disabled by default!

  3. Add the "usrquota" option to the /filer partition in /etc/fstab:

    LABEL=/filer /filer ext3 defaults,usrquota 1 1

    To apply changes, run:

    # mount -o remount /filer
    # quotacheck -m /filer
    # quotaon /filer
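
    As an optional check, the standard repquota utility can confirm that quotas are now active on the partition:

    # repquota /filer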

  4. On the /filer partition, create directories for load balanced cluster file storage:

    # mkdir -p /filer/<CLUSTER_TYPE>_<CLUSTER_ID>/hsphere

    where <CLUSTER_TYPE> is web or mail, and <CLUSTER_ID> is a cluster id (there may be multiple clusters mounted to the same NAS).

    For example, for the first Web cluster it will be /filer/web_01/hsphere.

  5. Stop all services except ssh, portmap, and NFS-related services. Check the status with the chkconfig command:

    # chkconfig --list
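
    For example, to switch off a service that is not needed on the NAS (sendmail is used here purely as an illustration), you could run:

    # chkconfig --level 345 sendmail off
    # service sendmail stop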

  6. To enable user disk space management on the web/mail servers, export the user storage directory on the generic Linux NAS. For this, add the following lines for all clusters to the /etc/exports file on the NAS server:

    /filer/<CLUSTER_TYPE>_<CLUSTER_ID>/hsphere <Master_IP>(rw,async,no_wdelay,insecure,no_root_squash)
    /filer/<CLUSTER_TYPE>_<CLUSTER_ID>/hsphere <Slave1_IP>(rw,async,no_wdelay,insecure,no_root_squash)
    /filer/<CLUSTER_TYPE>_<CLUSTER_ID>/hsphere <Slave2_IP>(rw,async,no_wdelay,insecure,no_root_squash)
    ...

    To apply changes, run:

    # exportfs -a
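
    For example, the first Web cluster (web_01) with a master at 192.168.0.100 and a single slave at 192.168.0.101 (hypothetical addresses) would be exported with the following /etc/exports lines:

    /filer/web_01/hsphere 192.168.0.100(rw,async,no_wdelay,insecure,no_root_squash)
    /filer/web_01/hsphere 192.168.0.101(rw,async,no_wdelay,insecure,no_root_squash)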

  7. Skip this step for mail server clusters.

    Edit the /etc/init.d/nfs file. Find the line with daemon rpc.rquotad and add the -S option to the end of the line, like this:

    daemon rpc.rquotad $RPCRQUOTADOPTS -S

    After that, restart NFS:

    # chkconfig --level 345 nfs on
    # /etc/init.d/nfs restart

Important: NFS configuration on the NAS may differ depending on the hardware parameters, the number of clusters, quotas, and the load on the servers. To properly configure the NAS, please refer to the following guides:

https://www.redhat.com/f/pdf/rhel4/NFSv4WP.pdf
http://www.citi.umich.edu/projects/nfs-perf/results/cel/dnlc.html
http://www.oreilly.com/catalog/nfs2/chapter/ch15.html

 

RedHat GFS

If you are going to use a RedHat GFS filer, follow this procedure:

1. Install and configure a RedHat GFS cluster on the filer according to the following documentation:

http://www.redhat.com/docs/manuals/csgfs/browse/rh-gfs-en/index.html
http://www.redhat.com/docs/manuals/csgfs/browse/rh-cs-en/index.html
http://www.tldp.org/HOWTO/LVM-HOWTO/index.html

2. Set up a GFS file system on a logical volume on the filer, like this:

# gfs_mkfs -p lock_type -t cluster_name:gnbd_device -j 2 /dev/vg_name/lv_name

where lock_type is a GFS locking type, cluster_name is a GFS cluster name, gnbd_device is a GNBD device name, vg_name is a volume group name, and lv_name is a logical volume name. Further on in this document we will use the following example:

# gfs_mkfs -p lock_dlm -t alpha_cluster:gfs1 -j 2 /dev/vg01/lv01

3. Start GNBD server:

# gnbd_serv

Upon successful start, you'll get the following output:

gnbd_serv: startup succeeded

4. Export the logical volume with the GFS file system:

# gnbd_export -d /dev/vg01/lv01 -e gfs1

You should get the following output:

gnbd_clusterd: connected
gnbd_export: created GNBD gfs1 serving file /dev/vg01/lv01
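
To double-check the export, you can list the currently exported GNBDs (see the RedHat GNBD documentation for the exact options available in your version):

# gnbd_export -l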

 

Step 3. Prepare Master and Slave Web/Mail Boxes

Before you install H-Sphere packages on the master and slave servers, please make sure the following requirements for correct load balancing are met:

  • All boxes in the LB cluster must have the same OS version installed. For RedHat GFS, all servers must run RedHat. With generic Linux NFS or NetApp, master/slave servers under FreeBSD are supported in HS 3.0 RC 4 and up.
  • The /hsphere directory on a Web server should not be created as a separate partition!

The operations on the master and slave servers are performed as root.

  1. Create the /hsphere directory on the master and all slave servers:

    # mkdir /hsphere

  2. If you are using GFS, perform the following on the master and each slave server:

    1. Load kernel module:

      # modprobe gnbd

    2. Import GFS file system from the NAS server:

      # gnbd_import -i FILER_NAME

      where FILER_NAME is the NAS server domain name. You should get the following output:

      gnbd_import: created gnbd device gfs1
      gnbd_monitor: gnbd_monitor started. Monitoring device #0
      gnbd_recvd: gnbd_recvd started

  3. Mount the storage directory on the NAS server to the /hsphere directory on the master and all slave servers.

    a) For RedHat GFS:

    Add the following mountpoint to /etc/fstab on the master and all slave servers:

    /dev/gnbd/gfs1 /hsphere gfs defaults 0 0

    Mount the GFS logical volume on the master and all slave servers:

    # mount -t gfs /dev/gnbd/gfs1 /hsphere

    b) For generic Linux NFS or NetApp:

    Add the following mountpoint to /etc/fstab on the master and all slave servers:

    <NAS_IP>:/filer/<CLUSTER_TYPE>_<CLUSTER_ID>/hsphere /hsphere nfs defaults,nfsvers=3,rsize=32768,wsize=32768 0 0

    For a mail server cluster, also add these mountpoints on all servers:

    <NAS_IP>:/filer/<CLUSTER_TYPE>_<CLUSTER_ID>/users /var/qmail/users nfs defaults,nfsvers=3 0 0
    <NAS_IP>:/filer/<CLUSTER_TYPE>_<CLUSTER_ID>/control /var/qmail/control nfs defaults,nfsvers=3 0 0

    To mount the directory, run:

    # mount -a && mount
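
    For example, with the NAS at 192.168.0.50 (a hypothetical address) and the first Web cluster, the /etc/fstab entry would read:

    192.168.0.50:/filer/web_01/hsphere /hsphere nfs defaults,nfsvers=3,rsize=32768,wsize=32768 0 0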

  4. On the master server, create the /hsphere/local/config/lb.map file of the following format:

    <Master_IP>|<Slave1_IP>|...|<SlaveN_IP>

    Note: Lines of the same format should also be added for each dedicated IP bound on the cluster:

    <Master_Dedicated_IP>|<Slave1_Dedicated_IP>|...|<SlaveN_Dedicated_IP>
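
    For example, a cluster with a master at 192.168.0.100 and two slaves at 192.168.0.101 and 192.168.0.102 (hypothetical addresses) would have the following lb.map line:

    192.168.0.100|192.168.0.101|192.168.0.102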

  5. On every master and slave server, create the /etc/hsphere/lb.id file with the line of the following format:

    <CLUSTER_TYPE>|<SERVER_ID>

    where <CLUSTER_TYPE> is mail or web; <SERVER_ID> is the LB server id: 0 for the master, 1 for the first slave, 2 for the second slave, etc.

    For example, for the slave server with <Slave2_IP> in an LB Web cluster, the lb.id file will look like:

    web|2

  6. Generate SSH keys so that each slave server can access the master server's root account without a password.

    1. Log into each slave server as root.
    2. Create public key on each slave server:

      # ssh-keygen -t dsa

    3. Log in from each slave server to the master server as root and append the contents of the slave's /root/.ssh/id_dsa.pub file to the /root/.ssh/authorized_keys2 file on the master server (see the example command after this list).
    4. Log in from each slave server to the master server as root once again to ensure the slave servers are able to log into the master without a password:

      # ssh root@<Master_IP>

      Answer yes to all prompts. This will add the master server to the list of known hosts (/root/.ssh/known_hosts) on each slave server. After that, the load balancing synchronization scripts will work without password prompts.
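
    A common way to perform step 3, assuming the default OpenSSH key locations, is to pipe the public key over ssh from each slave server (a sketch, not the only possible approach):

      # cat /root/.ssh/id_dsa.pub | ssh root@<Master_IP> "cat >> /root/.ssh/authorized_keys2"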
  7. Important: To make sure H-Sphere related data is correctly synchronized on the master and slave servers, add time synchronization to the master and slave servers' crontabs, as shown below.
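
    For example, a crontab entry along these lines (ntpdate and a public NTP pool are used purely as an illustration; pick the time source that fits your network) keeps the clocks in sync:

    0 * * * * /usr/sbin/ntpdate -u pool.ntp.org > /dev/null 2>&1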

 

Step 4. Install H-Sphere to Load Balanced Web/Mail Clusters

  1. Log into H-Sphere admin CP (it is assumed you have H-Sphere 3.0 and up already installed).

  2. Add master and all slave servers as physical servers to H-Sphere.

  3. Set master-slave relations between master and slave physical servers.

  4. Add the Web/mail logical server only to the master physical server. Do not add logical servers to slave servers!

    In the logical server options you need to set the Load Balancer Server Parameters:

    • File Server Type: file storage OS type, like UNIX for generic Linux NFS;
    • File Server: file storage volume location, like <NAS_IP>:/filer/<CLUSTER_TYPE>_<CLUSTER_ID>/ in the above example;
    • File Path: (optional) file storage path to H-Sphere installation directory, like /filer/<CLUSTER_TYPE>_<CLUSTER_ID>/hsphere in the above example;
    • File Server Volume ID: file storage volume ID, like <CLUSTER_TYPE>_<CLUSTER_ID> in the above example.
  5. For a mail LB cluster, it is required to configure the Horde webmail frontend to use an external Web server and an external MySQL database server, and also to configure SpamAssassin to use an external MySQL database.

  6. Configure NAT for LB Web/mail clusters

    The H-Sphere Control Panel works with only one logical server (that is, the master server) for each load balanced Web cluster. To configure a load balanced Web/mail cluster with NAT, you must have NAT turned on in H-Sphere and map the external Web/mail server IP routed by the Load Balancer to the master server's internal IP.

    For example, consider a load balanced Web cluster with one master and 4 slave servers, where the master Web server's internal IP 192.168.0.100 corresponds to the external IP 12.34.56.100 bound to the Load Balancer:

    • In the ~cpanel/shiva/psoft_config/ips-map.xml file on the CP server there should be the following record:

      <ips>
      . . .
         <ip ext="12.34.56.100" int="192.168.0.100"/>
      . . .
      </ips>

    • All dedicated IPs on the master server must also be associated with the corresponding IPs on the Load Balancer, and similar records must be added to the ips-map.xml file:

         <ip ext="LB_Dedicated_IP" int="Master_Dedicated_IP"/>

    • Also, you should have the external IPs in the E.Manager -> DNS Manager -> Service Zone menu in the admin CP. For example:

      www.test.com 3600 IN A 12.34.56.100
      mail.test.com 3600 IN A 12.34.56.111

  7. Download the latest H-Sphere updater (H-Sphere 3.0 RC 1 and up) and follow the instructions on adding new servers into H-Sphere to install H-Sphere related packages on the master server only.

  8. In the updater's command line, run one of the following commands to complete installation and configuration on slave servers:

    hspackages slaves=web
    hspackages slaves=mail
    hspackages slaves=all

    More about updater options.


Related Docs:   Understanding Load Balancing | Implementation of Load Balanced Cluster in H-Sphere | Load Balanced Server Clusters (Admin Guide)



© Copyright 2017. Parallels Holdings. All rights reserved.