Parallels H-Sphere Documentation System Administrator Guide

 

Adding Load Balanced NetApp NASs To H-Sphere

(Deprecated since H-Sphere 3.0 RC1)
 
 

Related Docs:   Understanding Load Balancing in H-Sphere | Installation of Load Balanced Web/Mail Clusters in H-Sphere | Load Balanced Server Clusters (Admin Guide)

Last modified: 27 Dec 2007

 

WARNING: This documentation covers Parallels H-Sphere versions up to 3.1. For the latest up-to-date Parallels H-Sphere documentation, please proceed to the official Parallels site.

It is possible to add load balanced (LB) Web and mail clusters to H-Sphere. This document describes the implementation of load balanced clusters based on NetApp NAS. NetApp NASs are building blocks for open storage networks, allowing companies to simplify, share, and scale their storage networking and content delivery infrastructures.

Load balanced cluster solution based on NetApp NASs requires 3 or more physical servers:

  • Load Balancer: any solution, such as Citrix® NetScaler, for load balancing across the web/mail servers. The Load Balancer directs traffic to another server if the first one is currently overloaded.
  • NAS: Network Attached Storage, such as NetApp Primary Storage (NetApp Filer), for storage of web/mail content (see the list of file storage systems supported by H-Sphere). The NAS may be installed on the same server as the load balancer or on a separate server. Also, Web and mail servers can jointly use one NAS or have separate NASs, one for Web and one for mail.
  • At least two boxes (master and slave) for web/mail servers. The load balanced solution implies one master server and one or more slave servers, further referred to as Web1 (Web master), Web2 (first Web slave), Web3 (second Web slave), ...; Mail1 (mail master), Mail2 (first mail slave), Mail3 (second mail slave), ...

To create Web/mail load balanced clusters integrated into H-Sphere:

  1. Install and configure Load Balancer and NAS
  2. Prepare NetApp NAS to work with H-Sphere
  3. Install H-Sphere on load balanced Web/mail servers
  4. Configure load balanced Web cluster
    - IP map table file on slave servers
    - Dedicated IPs
    - Configuring NAT
  5. Configure load balanced mail cluster
  6. Configure CP server to integrate load balanced clusters
  7. Add logical servers to load balanced Web/mail clusters

See also NetApp Configuration in H-Sphere 2.4.x.


 

Step 1. Install and Configure Load Balancer and NAS

1. Purchase, install, and configure a load balancer solution such as Citrix® NetScaler.

2. Purchase a NetApp NAS from www.netapp.com, then install and configure it according to the NetApp documentation. Create volumes/qtrees on the box where the NetApp NAS is installed.

3. Perform the following steps to configure your NetApp NAS for adding a load balanced cluster to H-Sphere (see the NetApp manual for command details):

  1. Telnet to the NetApp NAS:

    telnet <NAS_IP>

    Here, <NAS_IP> is the NetApp NAS IP.
  2. Get the list of the NAS partitions with the qtree command:

    # qtree

  3. To enable disk quota management, export the /etc directory on the NetApp NAS and allow it to be mounted only from the CP box:

    # exportfs -o access=<CP_IP>,root=<CP_IP>,rw=<CP_IP> /etc

    Here, <CP_IP> is the CP server IP.
  4. To enable user disk space management on the web/mail servers, export the user storage directory on the NetApp NAS and allow it to be mounted from the physical web/mail boxes:

    # exportfs -o access=<Master_IP>:<Slave1_IP>[:<Slave2_IP>:...],root=<Master_IP>:<Slave1_IP>[:...],rw=<Master_IP>:<Slave1_IP>[:...] <NAS_WebPath>

    Here, <Master_IP>:<Slave1_IP>[:<Slave2_IP>:...] is the list of master and slave web/mail server IPs separated with colons (:), and <NAS_WebPath> is the user storage directory. (A hypothetical example of both exportfs commands is given after this list.)
  5. Exit the telnet session on the NetApp NAS.
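
  For illustration only, with a hypothetical CP box at 192.168.0.8, a master at 192.168.0.11, two slaves at 192.168.0.12 and 192.168.0.13, and /vol/vol0/home as the user storage directory, the two exportfs commands from steps 3 and 4 might look like this (all addresses and paths are placeholders):

    # exportfs -o access=192.168.0.8,root=192.168.0.8,rw=192.168.0.8 /etc
    # exportfs -o access=192.168.0.11:192.168.0.12:192.168.0.13,root=192.168.0.11:192.168.0.12:192.168.0.13,rw=192.168.0.11:192.168.0.12:192.168.0.13 /vol/vol0/home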

 

Step 2. Prepare NetApp NAS to Work With H-Sphere

  1. Grant rsh access to the NetApp NAS from the CP box for the root and cpanel users.
  2. Grant NFS access to the /etc directory for the CP box in rw mode.
  3. Grant NFS access to the home directory on the storage partition (e.g., /vol/vol0/home) for the CP box in rw mode with root privileges (e.g., -access=192.168.0.9:192.168.0.10,root=192.168.0.8:192.168.0.9:192.168.0.10).
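
To verify the access granted above from the CP box, a quick hypothetical check might look like this (<NAS_IP> stands for the NetApp NAS IP, the mountpoint name is arbitrary, and the last line assumes the filer accepts the ONTAP version command over rsh):

    # mkdir -p /mnt/nas-etc
    # mount -t nfs <NAS_IP>:/etc /mnt/nas-etc
    # touch /mnt/nas-etc/.rw-test && rm /mnt/nas-etc/.rw-test
    # umount /mnt/nas-etc
    # rsh <NAS_IP> version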
 

Step 3. Install H-Sphere on Web/Mail Servers

  1. Before you install H-Sphere packages on the master and slave servers, make sure the following requirements for correct load balancing are met:
    • All load balanced boxes must have the same OS version installed.
    • The /hsphere directory on a Web server should not be created as a separate partition!
  2. Add new physical servers to H-Sphere: one for the master and others for the Web/mail slaves.
  3. Place the master server's public root SSH key on the slave servers so that the load balancing synchronization scripts can work without passwords:
    1. Log into the CP server as the cpanel user.
    2. SSH into the master server:

      $ ssh root@<Master_IP>

    3. Create a public key on the master server:

      # ssh-keygen -t dsa

    4. Log into each slave server from the master server as root and insert the contents of the /root/.ssh/id_dsa.pub file from the master server into the /root/.ssh/authorized_keys2 file on each slave server (see the sketch after this list).
    5. Log from the master server into each slave server as root once again:

      # ssh root@<Slave_IP>

      Answer yes to all prompts. This will add slave servers to the list of known hosts (/root/.ssh/known_hosts) of the master server. After that, load balancing synchronization scripts will work without password prompts.
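
      A minimal sketch of this key distribution, run on the master server once for each slave (<Slave_IP> is a placeholder; you will be prompted for the slave root password the first time):

      # cat /root/.ssh/id_dsa.pub | ssh root@<Slave_IP> 'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys2'
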
  4. Stop the H-Sphere related Web/mail services on these boxes (a quick check that they are all down is shown after the stop commands below):
    • Web services: httpd, proftpd
    • Mail services: qmaild, courier-imapd, courier-imapd-ssl

    Web on Linux:

    # /etc/rc.d/init.d/httpd stop
    # /etc/rc.d/init.d/proftpd stop

    Web on FreeBSD:

    # /usr/local/etc/rc.d/apache.sh stop
    # /usr/local/etc/rc.d/proftpd.sh stop

    Mail on Linux:

    # /etc/rc.d/init.d/qmaild stop
    # /etc/rc.d/init.d/courier-imapd stop
    # /etc/rc.d/init.d/courier-imapd-ssl stop

    Mail on FreeBSD:

    # /usr/local/etc/rc.d/qmaild.sh stop
    # /usr/local/etc/rc.d/courier-imapd.sh stop
    # /usr/local/etc/rc.d/courier-imapd-ssl.sh stop
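
    To confirm that the services are down before proceeding, a quick generic check (not specific to H-Sphere) works on both Linux and FreeBSD:

    # ps ax | grep -E 'httpd|proftpd|qmail|courier' | grep -v grep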

 

Step 4. Configure Master and Slave Web Servers

  1. On each web box, mount the NetApp storage partition to the /mnt/NAS directory:

    # mkdir /mnt/NAS
    # mount -t nfs <NAS_IP>:<NAS_WebPath> /mnt/NAS

  2. Copy the following directories to the mountpoint directory on the NetApp NAS:

    # cp -prLf /usr/local/frontpage /mnt/NAS/web1/
    # cp -prf /hsphere/local/config/httpd/ssl.shared /mnt/NAS/web1/

    On the master (web1) web box:

    # cp -prf /hsphere/* /mnt/NAS/web1/

    On the web2 slave web box:

    # cp -prf /hsphere/* /mnt/NAS/web2/

    On another slave (web3) web box:

    # cp -prf /hsphere/* /mnt/NAS/web3/

    and so on for other slave boxes.
  3. On the master Web server, create the /hsphere, /hsphere2, ... directories if you don't have them. Consider the following example for 4 slave servers:

    # mkdir /hsphere
    # mkdir /hsphere2
    # mkdir /hsphere3
    # mkdir /hsphere4
    # mkdir /hsphere5

    On each slave Web server, create the following directories:

    # mkdir /hsphere
    # mkdir /hsphere2

  4. On the master Web server, the /hsphere directory should point to the web1 directory previously copied to the NetApp NAS, and the /hsphere2, /hsphere3, ... directories to the respective slave Web server directories (web2, web3, ...). Similarly, on each slave Web server, /hsphere should point to that slave's own directory and /hsphere2 to the master Web server directory (web1).

    Add the corresponding mountpoints to the /etc/fstab file on every Web server.

    For the master Web server (e.g., in the case of 4 slave servers), /etc/fstab should contain the following mountpoints:

    <NAS_IP>:<NAS_WebPath>/web1 /hsphere nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web2 /hsphere2 nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web3 /hsphere3 nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web4 /hsphere4 nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web5 /hsphere5 nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web1/frontpage /usr/local/frontpage nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web1/ssl.shared /hsphere/local/config/httpd/ssl.shared nfs defaults,nfsvers=3 0 0

    For each slave server, the /etc/fstab file should also contain mountpoints to other directories previously copied to the NetApp NAS. For example, for the web5 slave server:

    <NAS_IP>:<NAS_WebPath>/web5 /hsphere nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web1 /hsphere2 nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web1/local/home /hsphere/local/home nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web1/local/var/statistic /hsphere/local/var/statistic nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web1/local/var/httpd/logs /hsphere/local/var/httpd/logs nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web1/frontpage /usr/local/frontpage nfs defaults,nfsvers=3 0 0
    <NAS_IP>:<NAS_WebPath>/web1/ssl.shared /hsphere/local/config/httpd/ssl.shared nfs defaults,nfsvers=3 0 0
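
    After editing /etc/fstab on a box, the new mountpoints can be applied and checked right away (a generic sketch, not specific to H-Sphere):

    # mount -a
    # df -h | grep -E 'hsphere|frontpage'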

  5. Check that the crontab on the master Web server contains lines for synchronizing information between the master server Web1 and the slave servers Web2, Web3, ...

    Important: For correct synchronization between servers, it is REQUIRED to run the time synchronization utility /usr/sbin/ntpdate from the crontab on the master and slave servers (see the example entry after the crontab lines below)! It usually comes with the ntp packages on every OS. This utility is not included in H-Sphere; you must install it separately.

    On Linux:

    */1 * * * * /hsphere/shared/bin/rsync -e ssh /etc/passwd <Slave_IP>:/etc/passwd
    */1 * * * * /hsphere/shared/bin/rsync -e ssh /etc/shadow <Slave_IP>:/etc/shadow
    */1 * * * * /hsphere/shared/bin/rsync -e ssh /etc/group <Slave_IP>:/etc/group

    On FreeBSD:

    */1 * * * * /hsphere/shared/bin/rsync -e ssh /etc/passwd <Slave_IP>:/etc/passwd
    */1 * * * * /hsphere/shared/bin/rsync -e ssh /etc/group <Slave_IP>:/etc/group
    */1 * * * * /hsphere/shared/bin/rsync -e ssh /etc/master.passwd <Slave_IP>:/etc/master.passwd
    */1 * * * * /hsphere/shared/bin/rsync -e ssh /etc/pwd.db <Slave_IP>:/etc/pwd.db
    */1 * * * * /hsphere/shared/bin/rsync -e ssh /etc/spwd.db <Slave_IP>:/etc/spwd.db
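
    The ntpdate entry mentioned above is not part of the crontab lines shown here; a hypothetical entry for both master and slave servers (pool.ntp.org is only an example NTP server, substitute your own) could be:

    0 * * * * /usr/sbin/ntpdate -s pool.ntp.org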

  6. On each slave Web server create the IP map file:

    # vi /hsphere/local/config/map_table.txt

    and make the appropriate links to this file for Web and FTP:

    # cd /hsphere/local/config/httpd/sites/
    # ln -s ../../map_table.txt ./
    # cd /hsphere/local/config/ftpd/sites/
    # ln -s ../../map_table.txt ./

    Insert a line of the following format to associate the slave server's shared IP with the master server's IP:

    <Master_IP>|<Slave_IP>

    Important: For each dedicated IP to be hosted on the LB Web cluster, you must also add a similar line into the map_table.txt file:

    <Master_Dedicated_IP>|<Slave_Dedicated_IP>

    To map dedicated IPs on the master Web server to the respective dedicated IPs on the Load Balancer, please refer to NAT configuration for load balanced Web cluster.
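
    As an illustration only (all addresses are hypothetical), the map_table.txt on a slave server with shared IP 192.168.0.102 behind a master with shared IP 192.168.0.100, plus one dedicated IP pair, could read:

    192.168.0.100|192.168.0.102
    192.168.0.110|192.168.0.112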

  7. To synchronize dedicated IPs on LB Web servers, the following scripts are installed and added to the crontab on the master Web server and on each slave Web server.

    On the master Web server:

    */4 * * * * /hsphere/shared/scripts/load-ballancing/master-ipsynch.pl

    On each slave Web server:

    */4 * * * * /hsphere/shared/scripts/load-ballancing/slave-ipupdate.pl

    In H-Sphere 2.5 and up, you don't need to install and configure these scripts manually: this is done automatically by the H-Sphere update script when adding H-Sphere physical servers to the boxes.

  8. On the master Web server, configure separate pid and log files for the master and the slave Web servers.

    1. Stop Apache on the master and slave servers.
    2. Run the following commands to reconfigure Apache to write to separate log and pid files for the master and slave servers:

      # mkdir -p /var/log/hsphere/httpd
      # chown httpd:httpd /var/log/hsphere/httpd
      # chmod 0755 /var/log/hsphere/httpd

      # perl -pi -e 's:/hsphere/local/var/httpd/logs:/var/log/hsphere/httpd:g' /etc/rc.d/init.d/httpd /hsphere/shared/apache/bin/apachectl /etc/logrotate.d/hsphere-apache /hsphere/local/config/httpd/httpd.conf /hsphere/local/config/httpd/httpd.conf.tmpl /hsphere/local/config/httpd/httpd.conf.tmpl.custom /hsphere/local/config/httpd/php4/php.ini /hsphere/local/config/httpd/php4/php.ini.tmpl /hsphere/local/config/httpd/php4/php.ini.tmpl.custom /hsphere/local/config/httpd/php5/php.ini /hsphere/local/config/httpd/php5/php.ini.tmpl /hsphere/local/config/httpd/php5/php.ini.tmpl.custom

    3. Configure custom config file templates to comply with the scheme introduced in H-Sphere 2.5 and up:

      # /hsphere/shared/apache/bin/conf_httpd
      # cp -p /hsphere/local/config/httpd/php5/php.ini.tmpl.custom /hsphere/local/config/httpd/php5/php.ini

    4. Start Apache.
  9. Configure NAT for load balanced Web cluster.

    H-Sphere Control Panel works with only one logical server (that is, the master server) for each load balanced Web cluster. To configure a load balanced Web cluster with NAT, you must have NAT turned on in H-Sphere and map the external Web server IP routed by the Load Balancer to the master server's internal IP.

    Consider, for example, a load balanced Web cluster with one master and 4 slave servers, where the master Web server's internal IP 192.168.0.100 corresponds to the external IP 12.34.56.100 bound to the Load Balancer:

    • In the ~cpanel/shiva/psoft_config/ips-map.xml file on the CP server there should be the following record:

      <ips>
      . . .
         <ip ext="12.34.56.100" int="192.168.0.100"/>
      . . .
      </ips>

    • All dedicated IPs on the master server must also be associated with corresponding IPs on the Load Balancer, and similar records must be added to the ips-map.xml file (a combined hypothetical example is shown at the end of this step):

         <ip ext="LB_Dedicated_IP" int="Master_Dedicated_IP"/>

      See also how to set the master/slave correspondence of dedicated IPs on load balanced Web servers.

    • Also, the external IP should be present in the E.Manager -> DNS Manager -> Service Zone menu in the admin CP. For example:

      www.test.com 3600 IN A 12.34.56.100
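
    Putting the shared and dedicated records together, a purely hypothetical ips-map.xml fragment for the example above, with one dedicated IP pair (external 12.34.56.110, internal 192.168.0.110), might look like this:

      <ips>
         <ip ext="12.34.56.100" int="192.168.0.100"/>
         <ip ext="12.34.56.110" int="192.168.0.110"/>
      </ips>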

 

Step 5. Configure Master and Slave Mail Servers

  1. On each mail box, mount the mail storage partition to the /mnt/NAS directory:

    # mkdir /mnt/NAS
    # mount -t nfs <NAS_IP>:<NAS_MailPath> /mnt/NAS

  2. Copy the following directories to the mountpoint directory on the NetApp NAS:

    # cp -prv /hsphere/local/var/vpopmail /mnt/NAS/
    # cp -prv /var/qmail/control /mnt/NAS/
    # cp -prv /var/qmail/users /mnt/NAS/

  3. Configure /etc/fstab for mail servers:

    # vi /etc/fstab

    On the master and all slave mail servers /etc/fstab should contain the following lines:

    <NAS_IP>:<NAS_MailPath>/vpopmail /hsphere/local/var/vpopmail nfs rw 0 0
    <NAS_IP>:<NAS_MailPath>/control /var/qmail/control nfs rw 0 0
    <NAS_IP>:<NAS_MailPath>/users /var/qmail/users nfs rw 0 0

  4. Make sure the crontab on the master mail server contains all necessary H-Sphere scripts.

  5. The crontab on slave mail servers SHOULD NOT contain any H-Sphere scripts!

  6. Configure NAT as in the case of the load balanced Web cluster. The ~cpanel/shiva/psoft_config/ips-map.xml file on the CP server should contain the following record:

    <ips>
    . . .
       <ip ext="LB_External_IP" int="Mail_Master_Internal_IP"/>
    . . .
    </ips>

    Here, LB_External_IP is the external IP bound to the Load Balancer, and Mail_Master_Internal_IP is the corresponding internal IP on the master mail server.

    Also, the external IP should be present in the E.Manager -> DNS Manager -> Service Zone menu in the admin CP. For example:

    mail.test.com 3600 IN A 12.34.56.111

 

Step 6. Configure CP Server To Integrate Load Balanced Clusters

On the H-Sphere CP server, as root:

  1. Install suidperl
    - for Linux, it could be installed, for example, like this:

    # rpm -ivh perl-suidperl-5.6.1-34.99.6.i386.rpm

    - for FreeBSD, it is already installed in the system.
  2. Set required permissions and root ownership for the fileserver-quota.pl script:

    # chown root:cpanel /hsphere/shared/scripts/fileserver-quota.pl
    # chmod 4750 /hsphere/shared/scripts/fileserver-quota.pl

  3. Set the SUPPORT_NET_APP property in the hsphere.properties file:

    SUPPORT_NET_APP=TRUE

  4. Mount the /etc directory on the NetApp NAS to the /hsphere/<NAS_IP>/etc directory on the CP server, where <NAS_IP> is the NetApp NAS IP:

    # mkdir /hsphere/<NAS_IP>/etc
    # mount <NAS_IP>:/etc /hsphere/<NAS_IP>/etc
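
    To make this mount persistent across CP server reboots, a line like the following could also be added to /etc/fstab on the CP box (this is an assumption; the original procedure only shows the manual mount):

    <NAS_IP>:/etc /hsphere/<NAS_IP>/etc nfs defaults,nfsvers=3 0 0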

  5. Check the /hsphere/<NAS_IP>/etc/quotas file. If H-Sphere users have already been added, it should contain a line like this:

    * user@<NAS_WebPath> 20000M 160K

  6. Check that rsh and quota are enabled:

    # rsh <NAS_IP> quota report

 

Step 7. Add Web/Mail Logical Servers to Load Balanced H-Sphere Clusters

To add Web/mail servers to load balanced H-Sphere clusters, log into the admin CP:

  1. Set master-slave relations between master and slave physical servers in the CP.
  2. Add Web/mail logical servers only to master physical servers. Don't create logical servers on slave servers. All management is performed on masters only; dedicated scripts replicate the changes to the slaves.

    Any number of slave mail servers can be created for master mail servers. Multiple slave servers for load balanced Web clusters are supported in H-Sphere 2.5 Beta 2 and up.

  3. After you have added a logical server for the master server (E.Manager -> L.Servers), set the File Server and File Path parameters for master logical servers, where:
    - File Server is the NetApp NAS name or IP address and the qtree name;
    - File Path is the path to the mounted NAS storage directory.
    For example:

    File Server: NETAPP_NAS_IP:/YOUR_QTREE (e.g., 192.168.1.1:/vol/vol0)
    Edit File Path: NETAPP_NAS_PATH (e.g., file_server_path=/web0.msp0/local)

Finally, start the H-Sphere related Web/mail services (those you stopped in Step 3) on the master and slave servers to run H-Sphere with load balanced clusters.


Related Docs:   Understanding Load Balancing in H-Sphere | Installation of Load Balanced Web/Mail Clusters in H-Sphere | Load Balanced Server Clusters (Admin Guide)



© Copyright 2017. Parallels Holdings. All rights reserved.