Bare Metal Restore (BMR) using bareos file level backup

I really like network booting, deployment, and recovery via PXE. However, BMR should not depend on any other service: you may need to restore the PXE server itself, or the server being restored may not reside on a PXE-enabled network. Therefore, I chose the LiveCD approach, with a pre-installed and pre-configured bareos agent.

Define BMR client

You must define a new bareos client to be used for recovery. You can do this in the configuration files or with the command:

# IP= ; NAME=bmr-fd
# echo "configure add client name=$NAME address=$IP passive=yes password=$NAME" | bconsole

As a result, the client definition is generated, the client is dynamically added, and a matching director definition is created to be copied to the client. Normally, you would copy it to the client and restart the file daemon, for example:

# scp /etc/bareos/bareos-dir-export/client/$NAME/bareos-fd.d/director/bareos-dir.conf $IP:/etc/bareos/bareos-fd.d/director/
# ssh $IP service bareos-fd restart

In our case, we will place the resulting file into the LiveCD via the kickstart file, see below.

Creating Linux bareos BMR LiveCD

I installed livecd-tools on my Fedora workstation; this also works on RedHat or CentOS:

# dnf install -y livecd-tools

This tool takes a kickstart file describing what should be installed on the LiveCD. My file came out small, so I am posting it here in full, with comments:

$ cat centos7-bareos.ks
keyboard --vckeymap=us --xlayouts='us'
lang en_US.UTF-8
part / --size 4096
rootpw --plaintext "root"
firstboot --disable
services --disable="firewalld" --enable bareos-fd
timezone Asia/Jerusalem --isUtc
selinux --disabled
# All repos should be defined this way, even the main installation repo.
# The tool is not the real anaconda and does not understand the full syntax.
repo --name=centos --baseurl="" --install --noverifyssl
repo --name=epel --baseurl="" --install --noverifyssl
repo --name="rpmfusion-free" --baseurl="" --install --noverifyssl
repo --name=bareos --baseurl="" --install --noverifyssl

# I've included all the FS tools needed to work with the backed-up filesystems.

%addon com_redhat_kdump --disable
%end

%post
# Configure the director password:
cat > /etc/bareos/bareos-fd.d/director/bareos-dir.conf << EOFcat
Director {
  Name = bareos-dir
  Password = "[md5]d23e04be17b632f51925640476f89bbb"
}
EOFcat

# Make a nice welcome message on the console:
cat > /etc/issue << EOFcat
Bare Metal Restore !

 Welcome to LiveCD based on \\S
 with preinstalled bareos client.

Use "root" user with "root" password.
You can connect via SSH to my IP: \4

EOFcat

# Set the hostname:
echo bmr > /etc/hostname
%end
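Optionally, the kickstart syntax can be checked before building with ksvalidator from the pykickstart package (the empty baseurl values above still need to be filled in with your own mirrors):

# ksvalidator centos7-bareos.ks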

Now, run the command:

# livecd-creator --verbose --config=centos7-bareos.ks --cache=/tmp/livecd-cache --fslabel=bareos-bmr |& tee command.log

--cache will reuse already downloaded RPMs on subsequent runs.

--fslabel sets the filesystem label; otherwise one is generated.

BMR of Linux using this LiveCD

Boot the target system being restored from the created BMR LiveCD. It will display several useful messages on the console; use this information to log in via SSH. Its DHCP-assigned IP address differs from the one we defined at the director. You could pin the desired IP address at the DHCP server, but the easiest way is to update the client definition at the director: edit the /etc/bareos/bareos-dir.d/client/bmr-fd.conf file with the correct IP address and ask the director to re-read it:

# vi /etc/bareos/bareos-dir.d/client/bmr-fd.conf
# echo reload | bconsole
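Editing by hand can be replaced with a sed one-liner. A minimal sketch, assuming the generated client resource contains an Address directive; the resource content and the new IP below are illustrative:

```shell
NEWIP=192.168.1.50   # the DHCP address shown on the LiveCD console (example)

# Demonstrate the substitution on a sample client resource; against the
# real file you would run:
#   sed -i "s|^\( *Address *=\).*|\1 $NEWIP|" /etc/bareos/bareos-dir.d/client/bmr-fd.conf
updated=$(sed "s|^\( *Address *=\).*|\1 $NEWIP|" <<'EOF'
Client {
  Name = bmr-fd
  Address = 10.0.0.99
  Password = "bmr-fd"
  Passive = yes
}
EOF
)
echo "$updated"
```

After changing the real file, reload the director as shown above.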

Then check connection to the client:

# echo "status client=bmr-fd" | bconsole
Connecting to Director localhost:9101
1000 OK: bareos-dir Version: 17.2.4 (21 Sep 2017)
Enter a period to cancel a command.
status client=bmr-fd
Connecting to Client bmr-fd at

bareos-fd Version: 17.2.4 (21 Sep 2017)  x86_64-redhat-linux-gnu redhat CentOS Linux release 7.4.1708 (Core) 
Daemon started 04-Feb-19 13:59. Jobs: run=0 running=0.
 Heap: heap=135,168 smbytes=28,511 max_bytes=28,900 bufs=64 max_bufs=67
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0 bwlimit=0kB/s

Running Jobs:
bareos-dir (director) connected at: 04-Feb-19 13:59
No Jobs running.

Terminated Jobs:
You have messages.

The next step is to recreate the disk layout as it was on the original system: the necessary partitions, logical volumes, and file systems, with the correct types, labels, and UUIDs where necessary. The file /etc/fstab can help us with this, so let's restore it and test the recovery process at the same time.

# echo "restore where=/tmp client=RESTORING-CLIENT restoreclient=bmr-fd file=/etc/fstab current yes" | bconsole

Replace RESTORING-CLIENT with the name of the bareos client of the server for which BMR is performed. Check the job status in the console. The resulting file should appear on your BMR LiveCD instance as /tmp/etc/fstab. After examining the file, I found that the original system uses LVM with two VGs: rootvg and hanavg. In this case, the files /etc/lvm/backup/rootvg and /etc/lvm/backup/hanavg should be restored to their original location. Create a file containing the list of files to restore:

bareos-dir# echo -e "/etc/lvm/backup/rootvg\n/etc/lvm/backup/hanavg" > /tmp/list
bareos-dir# echo "restore where=/ client=RESTORING-CLIENT restoreclient=bmr-fd file=</tmp/list current yes" | bconsole

Here is an example of rebuilding the LVM layout. Obviously, the PV size must be equal to or greater than the original. If the destination PV is larger, run pvresize later to claim the free space.

# vgcfgrestore -l rootvg
  No archives found in /etc/lvm/archive.
  File:         /etc/lvm/backup/rootvg
  Couldn't find device with uuid n9mLtI-FbzB-EYON-xD3W-Bitp-lKXg-BGYXyu.
  VG name:      rootvg
  Description:  Created *after* executing 'lvresize -L2g /dev/rootvg/slash'
  Backup Time:  Thu Oct 27 11:39:35 2016

Perfect! We got the missing PV UUID. Let's recreate it:

# pvcreate --uuid n9mLtI-FbzB-EYON-xD3W-Bitp-lKXg-BGYXyu --restorefile /etc/lvm/backup/rootvg /dev/sda2
  Couldn't find device with uuid n9mLtI-FbzB-EYON-xD3W-Bitp-lKXg-BGYXyu.
  Physical volume "/dev/sda2" successfully created.
# vgcfgrestore rootvg
  Restored volume group rootvg
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree 
  rootvg   1   4   0 wz--n- <10.00g <2.00g
# vgchange -ay /dev/rootvg
  4 logical volume(s) in volume group "rootvg" now active
# lvs
  LV    VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  slash rootvg -wi-a----- 2.00g                                                    
  swap  rootvg -wi-a----- 1.00g                                                    
  usr   rootvg -wi-a----- 3.00g                                                    
  var   rootvg -wi-a----- 2.00g

Repeat the same for the second VG:

# vgcfgrestore -l hanavg
  No archives found in /etc/lvm/archive.

  File:         /etc/lvm/backup/hanavg
  Couldn't find device with uuid AOTjG8-xlwi-0eHm-35ri-i0ZJ-8IrG-2Ao0P4.
  VG name:      hanavg
  Description:  Created *after* executing 'lvresize -L+1g /dev/hanavg/sap'
  Backup Time:  Fri Dec 14 10:59:32 2018
# pvcreate --uuid AOTjG8-xlwi-0eHm-35ri-i0ZJ-8IrG-2Ao0P4 --restorefile /etc/lvm/backup/hanavg /dev/sdb
  Couldn't find device with uuid AOTjG8-xlwi-0eHm-35ri-i0ZJ-8IrG-2Ao0P4.
  Physical volume "/dev/sdb" successfully created.
# vgcfgrestore hanavg
  Restored volume group hanavg
# vgchange -ay /dev/hanavg
  5 logical volume(s) in volume group "hanavg" now active
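If the destination PV turned out larger than the original, the extra space can now be claimed and verified (device and VG names are from this example):

# pvresize /dev/sdb
# vgs hanavg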

Now format every LV with the filesystem type given in /etc/fstab and mount them somewhere, say under /mnt/restore. If fstab references devices by UUID or LABEL, you should create the filesystems with the same parameters. That is not the case on my system, so I'll simply create the filesystems.
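These steps can be sketched by generating the mkfs and mount commands from the restored fstab. A hypothetical example with sample fstab lines (feed the real /tmp/etc/fstab instead, and review the output before running it):

```shell
# Sample of the restored fstab; replace with the real /tmp/etc/fstab.
cat > /tmp/fstab.sample <<'EOF'
/dev/rootvg/slash  /      ext3  defaults  1 1
/dev/sda1          /boot  ext3  defaults  1 2
/dev/rootvg/usr    /usr   ext3  defaults  1 2
/dev/rootvg/swap   swap   swap  defaults  0 0
proc               /proc  proc  defaults  0 0
EOF

# Emit mkfs and mount commands for real block devices only; swap and
# pseudo filesystems are skipped. Entries are assumed ordered so that
# parent mount points come before their children.
awk '$1 ~ /^\// && $3 ~ /^(ext[234]|xfs|btrfs)$/ {
    printf "mkfs -t %s %s\n", $3, $1
    printf "mkdir -p /mnt/restore%s && mount %s /mnt/restore%s\n", $2, $1, $2
}' /tmp/fstab.sample
```

Pipe the output to a file, inspect it, then execute it with sh. Remember to create and enable swap separately with mkswap.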

# df
Filesystem                1K-blocks    Used Available Use% Mounted on
/dev/mapper/live-rw         3997376 1113968   2841472  29% /
devtmpfs                   32961552       0  32961552   0% /dev
tmpfs                      32988424       0  32988424   0% /dev/shm
tmpfs                      32988424   17224  32971200   1% /run
tmpfs                      32988424       0  32988424   0% /sys/fs/cgroup
/dev/sr0                     503206  503206         0 100% /run/initramfs/live
tmpfs                       6597688       0   6597688   0% /run/user/0
/dev/mapper/rootvg-slash    1998672    3184   1890632   1% /mnt/restore
/dev/sda1                    245679    2095    243584   1% /mnt/restore/boot
/dev/mapper/rootvg-usr      3030800    4680   2868836   1% /mnt/restore/usr
/dev/mapper/rootvg-var      1998672    3140   1890676   1% /mnt/restore/var
/dev/mapper/hanavg-sap     21543484   45124  21498360   1% /mnt/restore/usr/sap
/dev/mapper/hanavg-shared  56631612   53216  56578396   1% /mnt/restore/hana/shared
/dev/mapper/hanavg-data    20511356   45124  20466232   1% /mnt/restore/hana/shared/HAC/global/hdb/data
/dev/mapper/hanavg-log     20511356   45124  20466232   1% /mnt/restore/hana/shared/HAC/global/hdb/log
/dev/mapper/hanavg-backup 123723748   61116 123662632   1% /mnt/restore/hana/backup

It is restore time:

root@bareos:~ # echo "restore where=/mnt/restore client=RESTORING-CLIENT restoreclient=bmr-fd select current all done yes" | bconsole

When finished, a boot record should be installed. Some pseudo filesystems are excluded during backup, so their mount points must be recreated manually. It is very important to create the /tmp directory with the correct permissions. The restored server is SuSE 11, so the legacy GRUB reinstall is shown:

# mkdir /mnt/restore/{proc,sys,dev}
# mkdir -m1777 /mnt/restore/tmp /mnt/restore/var/tmp
# mount -t proc proc /mnt/restore/proc
# mount -t sysfs sys /mnt/restore/sys
# mount -o bind /dev /mnt/restore/dev
# mount -o bind /dev/pts /mnt/restore/dev/pts
# chroot /mnt/restore
# grub
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"...  17 sectors are embedded.
 Running "install /boot/grub/stage1 (hd0) (hd0)1+17 p (hd0,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
grub> quit
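For a more recent distribution using GRUB 2 (CentOS/RHEL 7, for example), the equivalent step inside the chroot would be roughly the following (the device name is an example):

# grub2-install /dev/sda
# grub2-mkconfig -o /boot/grub2/grub.cfg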

I have tried to restore a Windows machine using this LiveCD, without success; the included Linux NTFS driver is not suitable for this purpose.

Updated on Fri Feb 8 22:47:40 IST 2019