NFS Ganesha HA resource

This article is about adding an NFS Ganesha resource to an existing two-node cluster like the one built in the previous article.

Installing the Ganesha software

The required RPMs are in PackageHub, which is not enabled by default, so you must subscribe to it first:

# SUSEConnect -r REGISTRATIONCODE -e your@email # --debug
 ..
Successfully registered system.
# SUSEConnect --list-extensions
# SUSEConnect -p PackageHub/xxx/x86_64
# zypper install nfs-ganesha-xfs

Do this on both nodes.
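
If you prefer to run the installation from a single place, here is a minimal sketch, assuming both nodes are reachable over SSH as node1 and node2:

# for h in node1 node2; do ssh $h 'zypper -n install nfs-ganesha-xfs'; done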

Prepare the shared disk

My shared disk in this example is /dev/sda:

root@node1:~ # pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created.
root@node1:~ # vgcreate nfsvg /dev/sda
  Volume group "nfsvg" successfully created
root@node1:~ # lvcreate -l100%FREE -n export /dev/nfsvg
  Logical volume "export" created.
root@node1:~ # mkfs.xfs -K /dev/nfsvg/export
meta-data=/dev/nfsvg/export      isize=512    agcount=4, agsize=65280 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=261120, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

As you can see, I prepared the disk as a regular, non-clustered disk. This is deliberate: I am building an active/passive redundant NFS server, so the disk will only ever be mounted on one node at a time.

The next step is to modify /etc/lvm/lvm.conf to define the volume_list parameter:

root@node1:~ # grep volume_list /etc/lvm/lvm.conf 
volume_list = [ "rootvg" ]

This means that at boot time only "rootvg" is activated automatically, so nothing but the cluster software will touch our shared volume group. After changing the LVM configuration, a new initrd boot image must be built; otherwise the copy of lvm.conf embedded in the old initrd may still activate the volume group at boot:

# mkinitrd
 ...
# reboot

Reboot both servers now to verify that the new initrd works. After the reboot, the shared volume group should stay inactive; a quick check (the fifth lv_attr character is 'a' only when the LV is active):
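
root@node1:~ # lvs -o lv_name,vg_name,lv_attr nfsvg
  LV     VG    Attr
  export nfsvg -wi-------

When both nodes are back up, run the following on either of them: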

# crm configure primitive vg-nfs LVM params volgrpname=nfsvg exclusive=true
# crm resource status vg-nfs
resource vg-nfs is running on: node1
- where vg-nfs is the resource name, LVM is the resource agent (short for ocf:heartbeat:LVM), and the rest are its parameters. exclusive=true ensures the volume group can only ever be activated on one node at a time.
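
If you are unsure which parameters an agent accepts, crmsh can print its metadata:

# crm ra info ocf:heartbeat:LVM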

Now, add a Filesystem resource:

# crm configure primitive fs-export Filesystem params device="/dev/nfsvg/export" directory="/export" fstype=xfs options=dax
# crm resource status fs-export
resource fs-export is running on: node1 
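
As a quick sanity check, the filesystem should now be mounted on the node running the resource (node1 here); expect something along these lines:

root@node1:~ # mount | grep export
/dev/mapper/nfsvg-export on /export type xfs (rw,...)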

Configure Ganesha NFS

A freshly installed nfs-ganesha-xfs ships two configuration files in /etc/ganesha: a fully commented-out ganesha.conf and an example xfs.conf. The main configuration file is ganesha.conf; any other file has to be pulled in explicitly with a %include statement (e.g. %include "/etc/ganesha/xfs.conf"). For this setup everything goes directly into ganesha.conf, so edit it to look like this:

# cat ganesha.conf
NFSv4 {
        Lease_Lifetime = 10;
        Grace_Period = 20;
}

EXPORT {
        Export_Id = 1;
        Path = /export;
        Pseudo = /export;
        FSAL {
                Name = XFS;
        }
        CLIENT {
                Clients = 192.168.101.0/24;
                Access_Type = RW;
        }
}

The short Lease_Lifetime and Grace_Period values speed up NFSv4 client recovery after a failover; the defaults are much longer. Synchronize the file with the second node and define the service resource:

root@node1:~ # rsync -av /etc/ganesha/ node2:/etc/ganesha/
./
ganesha.conf
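
Since Pacemaker will start and stop nfs-ganesha from now on, the service should not also be enabled in systemd; disable it at boot on both nodes:

# systemctl disable nfs-ganesha

Now define the resource itself:
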
# crm configure primitive svc-ganesha systemd:nfs-ganesha
# crm resource status svc-ganesha
resource svc-ganesha is running on: node2

As you can see, the resource started on the other, free node, even though /export is not mounted there: nothing ties the resources together yet. In the next step we will add a virtual IP address and then group all resources; a group implies both colocation and ordering, so its members always run on the same node and start in the listed order (stopping in reverse):

# crm configure primitive ip-ganesha IPaddr2 params ip=192.168.101.100 cidr_netmask=24
# crm resource status ip-ganesha
resource ip-ganesha is running on: node2
# crm configure group ganesha vg-nfs fs-export ip-ganesha svc-ganesha
# crm resource cleanup
Cleaned up all resources on all nodes
Waiting for 1 reply from the controller. OK
# crm status 
Cluster Summary:
  * Stack: corosync
  * Current DC: node1 (version 2.0.4+20200616.2deceaa3a-3.3.1-2.0.4+20200616.2deceaa3a) - partition with quorum
  * Last updated: Wed Dec 16 20:16:48 2020
  * Last change:  Wed Dec 16 20:16:44 2020 by hacluster via crmd on node2
  * 2 nodes configured
  * 4 resource instances configured

Node List:
  * Online: [ node1 node2 ]

Full List of Resources:
  * Resource Group: ganesha:
    * vg-nfs    (ocf::heartbeat:LVM):    Started node1
    * fs-export (ocf::heartbeat:Filesystem):     Started node1
    * ip-ganesha        (ocf::heartbeat:IPaddr2):        Started node1
    * svc-ganesha       (systemd:nfs-ganesha):   Started node1
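
Finally, it is worth testing from a client in the allowed 192.168.101.0/24 network (a hypothetical client host, mounting through the virtual IP defined above):

client:~ # mount -t nfs4 192.168.101.100:/export /mnt
client:~ # df -h /mnt

To verify failover, put the active node into standby, watch the group move in crm status, and bring the node back online:

# crm node standby node1
# crm status
# crm node online node1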

