NEW update: Fusion-IO mezzanine UCS cards on RedHat 6 Linux memo.

Fusion IO cards on RedHat 5

You have to register at the FusionIO support site to download the required RPMs.

Install the cards into empty slots. This HOWTO uses four cards as an example. Check with the lspci command that the hardware is visible:

# lspci
......
08:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
......

Software Installation

Compile and install the driver itself:

# rpmbuild --rebuild iomemory-vsl-2.3.1.123-1.0.src.rpm
.....
# rpm -ihv /usr/src/redhat/RPMS/x86_64/iomemory-vsl-2.6.18-308.1.1.el5-2.3.1.123-1.0.x86_64.rpm

Install other required RPMs:

# rpm -ihv libfio-2.3.1.123-1.0.x86_64.rpm fio-common-2.3.1.123-1.0.x86_64.rpm \
fio-sysvinit-2.3.1.123-1.0.x86_64.rpm fio-util-2.3.1.123-1.0.x86_64.rpm \
fio-firmware-101971.4-1.0.noarch.rpm
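
The driver RPM built by rpmbuild is tied to the kernel it was built against (the kernel version is embedded in the package name). A quick sanity check that it matches the running kernel:

# uname -r
# rpm -qa | grep iomemory-vsl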

Hardware checks and care

Load the driver for the first time and verify the hardware state:

# modprobe iomemory-vsl
# lspci | grep Fusion
0b:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
0e:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
15:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
18:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
# lspci -s 18:00.0 -v
18:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
        Subsystem: Fusion-io Device 1010
        Flags: bus master, fast devsel, latency 0, IRQ 84
        Memory at fbff0000 (32-bit, non-prefetchable) [size=64K]
        [virtual] Expansion ROM at e6200000 [disabled] [size=1M]
        Capabilities: [40] Power Management version 3
        Capabilities: [48] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [60] Express Endpoint, MSI 00
        Kernel driver in use: iodrive
        Kernel modules: iomemory-vsl

# fio-status

Found 4 ioDrives in this system
Fusion-io driver version: 2.3.1 build 123

Adapter: ioDrive
        Low-Profile ioDIMM Adapter SN:XXXXX
        External Power: NOT connected
        Sufficient power available: Unknown
        Connected ioDimm modules:
          fct0: ioDIMM3 SN:XXXXX

fct0    Status unknown: Driver is in MINIMAL MODE:
                Firmware is out of date. Update firmware.
        ioDIMM3 SN:XXXXX
        PCI:15:00.0
        Firmware v3.0.0, rev 36867
        Geometry and capacity information not available.
        Sufficient power available: Unknown
        Internal temperature: 44.3 degC, max 46.3 degC
.........

Let's update the firmware before we start using the cards. The firmware file comes with the fio-firmware RPM we installed before. Check the file name before doing the update:

# rpm -ql fio-firmware
/usr/share/fio/firmware/iodrive_101971_4.fff
# fio-update-iodrive -d /dev/fct0  /usr/share/fio/firmware/iodrive_101971_4.fff


Warning: ioDrive at '/dev/fct0' has incomplete internal identification information.
   Even though the ioDrive will continue to run, some utilities and SDK functions
      may have problems identifying and enumerating this ioDrive.

Device ID 0 (/dev/fct0) Updating device firmware from 3.0.0.36867 to 5.0.7.101971

WARNING: DO NOT TURN OFF POWER OR RUN ANY IODRIVE UTILITIES WHILE THE FIRMWARE UPDATE IS IN PROGRESS
  Please wait...this could take a while

Progress
-------------------------
 -  0:  100%

Results
-------------------------
0: Firmware updated successfully


You MUST now reboot this machine before the new firmware will be activated!
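
Repeat the update for every card. A one-liner to run it over all four devices, assuming they enumerate as /dev/fct0 through /dev/fct3:

# for n in 0 1 2 3 ; do fio-update-iodrive -d /dev/fct$n /usr/share/fio/firmware/iodrive_101971_4.fff ; done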

The warning about the reboot is not a joke: the card has to come up on its new firmware. I'd even recommend shutting the server down to a full power-off and powering it back up. This is the picture after the reboot:

# fio-status

Found 4 ioDrives in this system
Fusion-io driver version: 2.3.1 build 123

Adapter: ioDrive
        Fusion-io ioDrive 320GB, Product Number:FS1-002-321-CS SN:XXXXX
        External Power: NOT connected
        PCIE Power limit threshold: 24.75W
        Sufficient power available: Unknown
        Connected ioDimm modules:
          fct0: Fusion-io ioDrive 320GB, Product Number:FS1-002-321-CS SN:XXXXX

fct0    Attached as 'fioa' (block device)
        Fusion-io ioDrive 320GB, Product Number:FS1-002-321-CS SN:XXXXX
        Alt PN:FS1-SS2-321-CS
        PCI:15:00.0
        Firmware v5.0.7, rev 101971
        322.46 GBytes block device size, 396 GBytes physical device size
        Sufficient power available: Unknown
        Internal temperature: 41.3 degC, max 41.8 degC
        Media status: Healthy; Reserves: 100.00%, warn at 10.00%

Using devices

Formatting the devices with a 4k sector size reduces the driver's memory usage and improves performance. Since Oracle uses an 8k data block size, a 4k sector size is quite suitable.

# fio-format -b 4K /dev/fct0
Creating a standard block device of size 322.55GBytes (300.40GiBytes).
  Using block (sector) size of 4096 bytes.

WARNING: Formatting will destroy any existing data on the device!
Do you wish to continue [y/n]? y
Formatting: [====================] (100%) |
Format successful.
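
Repeat the format for the remaining cards; assuming they enumerate as /dev/fct1 through /dev/fct3, each run asks for the same confirmation:

# fio-format -b 4K /dev/fct1
# fio-format -b 4K /dev/fct2
# fio-format -b 4K /dev/fct3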

Now attach the block devices:

# fio-attach /dev/fct?
Attaching: [====================] (100%) -
fioa
Attaching: [====================] (100%) /
fiob
Attaching: [====================] (100%) \
fioc
Attaching: [====================] (100%) \
fiod

Create PVs aligned to 4k. Create a VG, then an LV striped over all four cards and optimized for a 64k DMA size. Create an EXT3 filesystem with a 4k block size.

# for d in /dev/fioa /dev/fiob /dev/fioc /dev/fiod ; do pvcreate --dataalignment 4k $d ; done
# vgcreate vgfio /dev/fio?
# lvcreate -i 4 -I 64k  -n oradbs -L 800g vgfio
# mkfs.ext3 -j -m0 -b4096 /dev/vgfio/oradbs
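
Create a mount point and mount the new filesystem to verify it; /oradbs is the mount point used in the rest of this memo:

# mkdir /oradbs
# mount /dev/vgfio/oradbs /oradbs
# df -h /oradbs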

Init scripts and auto mounts

It looks like the active cache resides in the driver's large memory allocation, so it has to be "flushed" to the device during shutdown. This procedure includes unmounting the filesystem, deactivating the VG, and detaching the fio devices; detaching does the real final write. This means these cards do not tolerate power cut-offs well and data may become corrupted. Consider a good backup policy (as usual).
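
For reference, the manual equivalent of that shutdown sequence looks roughly like this (the fio-sysvinit script configured below does it for you):

# umount /oradbs
# vgchange -a n vgfio
# fio-detach /dev/fct?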

The fio-sysvinit RPM contains an init script that works well; you just have to configure it. First of all, fix /etc/fstab with the noauto attribute:

# grep vgfio /etc/fstab
/dev/vgfio/oradbs       /oradbs                 ext3    defaults,noauto 0 0

Then keep udev from auto-handling the FusionIO cards by editing /etc/modprobe.d/iomemory-vsl.conf as follows:

# To keep ioDrive from auto loading at boot when using udev, uncomment below
blacklist iomemory-vsl
options iomemory-vsl fio_dev_wait_timeout_secs=0

# disable auto attach
# options iomemory-vsl auto_attach=0

# disable parallel attach for multiple cards
# options  iomemory-vsl parallel_attach=0

# To allow the ioDrive driver to load on SLES11, uncomment below 
# allow_unsupported_modules 1

Verify that the iomemory-vsl service is started at boot time:

# chkconfig --list iomemory-vsl
iomemory-vsl    0:off   1:off   2:on    3:on    4:on    5:on    6:off
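
If it is not enabled, switch it on:

# chkconfig iomemory-vsl on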

Configure the init script to do what you need by editing /etc/sysconfig/iomemory-vsl, for example:

# egrep -v "^#|^$" /etc/sysconfig/iomemory-vsl
ENABLED=1
TIMEOUT=15
VERBOSE=1
KILL_PROCS_ON_UMOUNT=1
FIO_DRIVER_MOD_OPTS=""
MD_ARRAYS=""
LVM_VGS="/dev/vgfio"
MOUNTS="/dev/vgfio/oradbs /oradbs"

Adjust LVM_VGS and MOUNTS to match your volume group and mount point. Reboot the server and verify the boot messages; you SHOULD NOT see lines like these:

kernel: fioinf Fusion-io ioDrive 320GB 0000:15:00.0: ***************************************************
kernel: fioinf Fusion-io ioDrive 320GB 0000:15:00.0: *** unclean shutdown detected, re-scanning log. ***
kernel: fioinf Fusion-io ioDrive 320GB 0000:15:00.0: *** this may take several minutes.              ***
kernel: fioinf Fusion-io ioDrive 320GB 0000:15:00.0: ***************************************************
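
A quick way to check after a reboot is to grep the kernel log for that message; a clean shutdown should give no hits:

# grep -i "unclean shutdown" /var/log/messages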

Performance test

# hdparm -t /dev/vgfio/oradbs 

/dev/vgfio/oradbs:
 Timing buffered disk reads:  3254 MB in  3.00 seconds = 1084.19 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
# cd /oradbs
/oradbs # dd if=/dev/zero of=10g.file bs=1024k count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 9.44489 seconds, 1.1 GB/s
/oradbs # sync
/oradbs # dd if=10g.file of=/dev/null bs=1024k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 2.15114 seconds, 5.0 GB/s
/oradbs # dd if=10g.file of=/dev/null bs=1024k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 2.02307 seconds, 5.3 GB/s
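
Note that with enough RAM these repeated reads are likely served mostly from the page cache rather than from the cards. To measure the devices themselves, drop the page cache before re-reading:

/oradbs # echo 3 > /proc/sys/vm/drop_caches
/oradbs # dd if=10g.file of=/dev/null bs=1024k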

Updated on Wed Apr 4 13:17:52 IDT 2012. More documentation here.