Dar Documentation

Flexibly Restoring a whole system with dar

Introduction

Restoration is usually the trickiest part of a backup process. The backup process covers the whole life-cycle of backups: creating them, storing them in a secure place, protecting backup data against unauthorized access and against corruption over time, and rebuilding a whole system from scratch upon major failure, system corruption, security breach, and so on. It may concern a single system (a host with its operating system, applications, configurations and user data), a set of independent systems, but also "recursive" systems like hypervisors and their many possible virtual machines (we will illustrate that last case in this document too).

The second purpose of a backup process is to provide file history, in order to be able to restore a file deleted by mistake (even long after the mistake was made) or corrupted, or to get back a file in the state it had for a previous application version, which helps when a software upgrade breaks a legacy feature you need more than the new features.

But no single backup process matches everyone's needs. For example, syncing your local data to the cloud is easy and may be suitable for personal use (well, depending on how much you value your privacy...). But as it also exposes all your data, values, proprietary software and patents to the eyes of the cloud provider, it may not be suitable for companies whose revenue depends on production secrets and secret recipes. Nor may it suit an individual fighting for human rights and freedom in a country where these natural rights are banished. And last, it does not let you rebuild your whole system: if you saved only your documents, you will have to reinstall all applications and redo the particular configurations you had adapted to your needs over time, as well as possibly find or buy again the license keys to activate the proprietary software you were using.

At the opposite, restoring a whole system (not only the user data but also the application binaries, configurations, operating system,...) in the state it had at the time of the backup requires some skills and knowledge. The objective of this document is to provide some tested recipes to help anyone new to this operation, using Disk ARchive (dar) as the backup tool under Linux and more generally Unixes (including macOS).

Some notes about Dar software:
Unlike backup tools that copy bytes verbatim from the disk to a file, dar keeps track of files inside the file-system: it stores every possible thing related to these files (metadata, attributes, Extended Attributes, data, the sparse nature of files).

The advantages are that the backup does not depend on disk sizes, partition layout or file-system types: you can restore to a different disk, resize or reorganize partitions, change the file-system type, or move to LVM or LUKS volumes, as illustrated later in this document.

The drawbacks are that you will have to manually recreate the disk partitions and format the file-systems as you want, in order to restore files into them. The objective of this document is thus to explain how to do that, and to let you see that this task is not complex and brings a lot of freedom. In the second part of this document, the variations will show you what changes when considering LVM, LUKS and a Proxmox VE hypervisor.

Backup creation

What to backup

Do I have to backup everything? Well, in fact no. You can exclude all virtual file-systems like /dev, /proc and /sys (see dar's -P option) as well as any temporary and cache directories (/tmp, /var/tmp, /var/cache/apt/archives, /var/backups, /home/*cache*/*,...) and the directories named "lost+found", which will be recreated at restoration time while formatting the target file-systems. If you use LVM to store your system, you might be interested, just for further reference, in recording within the backup the output of the lsblk command, which gives the current partitions, Volume Group names, Logical Volume names and their usage in the running system at the time of the backup (see the -< and -= options below).

Here is an example of a configuration used on a Proxmox system (a Debian-based KVM hypervisor). For more details refer to the man page, but in summary here are the options used and their meaning:

-R option
Defines the root of the data to backup. Here the backup scope is system wide so we give it "/" as argument
-am
Enables the ordered and natural combination of masks
-D
When excluding a directory (like /sys for example), store it as an empty directory in the backup; this way, at restoration time, the mount-point will be recreated
-P
prunes the directory given as argument (which is relative to the -R root, so -P dev excludes /dev here). It can be used multiple times.
-g
overrides a previous -P option by including the directory given as argument
-z/-zbzip2
compresses the backup, here with the bzip2 algorithm
-s
splits the backup into several files (called slices) to avoid having a possibly huge single file
-B
includes other options defined in the file given as argument
compress-exclusion
is a set of options (a so-called "target") defined in /etc/darrc that provides a long list of file types that are not worth trying to compress (already compressed files, for example)
no-emacs-backup
is another target that avoids saving emacs temporary backup files
bell
yet another target, still defined in /etc/darrc, that makes the terminal ring upon a user interaction request
-E
executes the provided command after each created slice; here we run a script that invokes par2 to generate parity data for each slice
--slice-mode
defines the permissions of the backup slices that will be created
--retry-on-change
as we perform the backup of a live system, we retry saving, up to 3 times, any file that changed while it was being read for backup
-<
when entering the /root directory execute the command provided with -= option
-=
execute the provided command when saving a directory or file referred to by the -< option

As the backup part of the process is recurrent, it is convenient to drop all these options in a configuration file (here /root/.darrc, for these options to be used by default):

root@soleil:~# cat .darrc
all:
 -R /
create:
 -am
 -D
 -P dev
 -P run
 -P sys
 -P proc
 -P tmp
 -P var/lib/vz
 # this is where proxmox stores VM backups so we save the directory:
 -g var/lib/vz/dump
 -P var/lib/lxcfs
 -P var/cache/apt/archives
 -P etc/pve
 -P var/backups
 -P lost+found
 -P */lost+found
 -P root/tmp
 -P mnt
 -zbzip2
 -s 1G
 --nodump
 --cache-directory-tagging
 -B /etc/darrc
 compress-exclusion
 no-emacs-backup
 bell
 # will calculate the parity file of each generated slices
 -E "/usr/share/dar/samples/dar_par_create.duc %p %b %N %e %c 1"
 --slice-mode 0640
 --retry-on-change 3:1024000
 # when entering the /root directory, dar will run lsblk and store its
 # output into /root/lsblk.txt then this file will be part of the backup
 # as we have not excluded it (by mean of -P, -X, -] and similar options)
 -< root
 -= "/bin/lsblk > %p/lsblk.txt"

Dar_static

We will copy the dar_static binary beside the backup in order to not rely on anything else for restoration. Some users also add a bit of dar documentation (including this document); that's up to you to decide.
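A minimal sketch of this copy, assuming dar_static was installed under /usr/bin (its location may differ on your distribution) and that the backup is stored under /mnt/Backup:

cp /usr/bin/dar_static /mnt/Backup/
# optionally, keep some documentation next to it
cp -r /usr/share/doc/dar /mnt/Backup/dar-doc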

Ciphering

If the backup has to be ciphered (-K option), it is better to use a symmetric encryption algorithm than an asymmetric one: with the former, you will only be asked for the passphrase to decipher the backup and restore your data, while with asymmetric encryption you need the private key and the knowledge of the passphrase used to unlock it (if used). In consequence this needed information (the private key) must be stored outside the backup (in your head for a passphrase, or on an unciphered removable media for a private key, for example).

Ciphering backups becomes necessary when using a public cloud provider to store them, or, for consistency, when your system itself is stored on ciphered volumes (LUKS for example).
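For illustration, a hedged sketch of symmetric ciphering at backup creation time (the backup name and path are assumptions; leaving the passphrase empty after "aes:" should make dar ask for it interactively rather than exposing it on the command line):

# dar will prompt twice for the passphrase at creation time
dar -c /mnt/Backup/soleil-full-ciphered -R / -K aes: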

Direct Attached Storage (DAS)

For direct attached storage (DAS), like a local disk, USB key, or legacy DVDs, there is no difficulty. You will probably want to adapt the -s/-S options to a divisor of the media size, possibly adding parity data when low-end media are used (just add the word par2 on the command-line or in .darrc).
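For example, a sketch of a backup sliced so that one slice fits on a DVD, with par2 parity data generated for each slice (backup name and slice size are assumptions; the par2 target is provided by /etc/darrc):

dar -c /mnt/Backup/soleil-full -R / -s 4G -B /etc/darrc par2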

Network Attached Storage (NAS)

Of course, network access needs to be set up before being able to restore your data. The rescue system must also support one of the network protocols available on your NAS to access your backups. For protocols other than FTP and SFTP, a temporary local storage may be needed, and thus slicing dar backups (see -s option) will be very useful to perform a restoration without requiring a very large temporary local disk. In addition, you can automate the downloading of slices from dar by means of the -E option. But when using FTP or SFTP, dar can read the backup directly from the NAS, and thus absolutely no local temporary storage is required for restoration in that case.

Partitions

Dar is partition independent, so we will have to recreate partitions before restoration starts. You never have to recreate exactly the same partition layout: if you know some partitions were nearly saturated or oversized, you can take the opportunity of the restoration to review the partition sizes, or even choose a completely different partition/disk layout (for example, splitting /var from / into a separate partition, or merging some partitions if it makes better sense), or move to encrypted LUKS disks, LVM, and so on.

UEFI Boot

UEFI boot uses an EFI partition (vfat formatted) where the boot binaries for the different operating systems present on the host are stored. This partition is only used before the Linux system is started, but it is mounted under /boot/efi once the system has booted, so it can be saved by dar without any effort. We will see a little trick about the EFI partition at restoration time.

Legacy MBR boot

Without UEFI, you stick to the legacy MBR boot process, but there is nothing too complicated here: it will just be necessary to re-install the boot loader from the restored system; we will describe that too.
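As a preview, a minimal sketch of that reinstallation once chrooted into the restored system (the full chroot procedure is detailed later; /dev/sda as the boot disk is an assumption):

root@sysresccd:/# grub-install /dev/sda
root@sysresccd:/# update-grub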

Restoration Process

Booting a pristine system

So you have made and tested your backups as usual, and today you need to restore them on a brand-new computer. The proposition is to use SystemRescueCd for that. Do not be confused by this name: it can make bootable CD/DVDs, but also bootable USB keys. Knoppix is also a good alternative.

Once SystemRescueCd has booted, you get a shell prompt ready to interpret your commands. For those not having US native keyboards, you can change the layout thanks to the loadkeys command, if you skipped the prompt that lets you select it:

[root@sysresccd ~]# loadkeys fr
[root@sysresccd ~]#

Accessing the backup from the host

In the following we will detail three different ways to access the backup; choose the one that best suits your context:

Accessing the backup (DAS context)

In the case of DAS (local disk, tape, USB key/disk, CD/DVD, floppy(!),...), we can use lsblk to identify the backup partition and/or LVM volume. Then we can mount it:

[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]# cd /mnt
[root@sysresccd /mnt]# mkdir Backup
[root@sysresccd /mnt]# mount /dev/sdb1 Backup
[root@sysresccd /mnt]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd /mnt]#

Creating a local temporary storage (NAS context without (S)FTP access)

In the case of Network Attached Storage (NAS) without FTP or SFTP protocol support, we need a local temporary file-system (removed at the end of the restoration process). Here we use lsblk to list all disks, then gdisk to create a partition, mkfs to format the file-system, and mount to have it ready for use.

In the below example we use a 32 GB USB key for temporary storage:

[root@sysresccd ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
sdb      8:16   0    32G  0 disk
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]# gdisk /dev/sdb
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries in memory.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-67108830, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-67108830, default = 67108830) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): p
Disk /dev/sdb: 67108864 sectors, 32.0 GiB
Model: QEMU HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 89112323-E1B3-42D7-BB61-8084C1D359F9
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 67108830
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        67108830   32.0 GiB    8300  Linux filesystem

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.
[root@sysresccd /mnt]# mkfs.ext4 /dev/sdb1
mke2fs 1.45.0 (6-Mar-2019)
Discarding device blocks: done
Creating filesystem with 8388347 4k blocks and 2097152 inodes
Filesystem UUID: c7ee69b8-89f4-4ae3-92cb-b0a9e41a5fa8
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@sysresccd ~]# cd /mnt
[root@sysresccd /mnt]# mkdir Backup
[root@sysresccd /mnt]# mount /dev/sdb1 Backup
[root@sysresccd /mnt]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd /mnt]#

You can now fetch each slice dar requests and drop it into that temporary /mnt/Backup directory, removing it afterward. Dar's -E option may be of some use to automate the process. Assuming you use scp to fetch the slices, you could use the following to instruct dar where to obtain the slices from (for http or https you could use curl to do something equivalent):

[root@sysresccd /mnt]# cat > ~/.darrc <<EOF
-E "rm -f /mnt/Backup/%b.*.%e ; scp user@backup-host:/some/where/%b.%N.%e /mnt/Backup"
EOF
[root@sysresccd /mnt]#

Note that dar will initially require slice number zero, meaning the last slice of the backup. You could write a complicated script to handle that, but you can also easily cope with it by manually downloading the last slice into /mnt/Backup before starting the restoration: dar will find it there and will not request it anymore.
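For example, assuming the backup ends with slice 42 (a hypothetical number, like the host and path, which match the -E example above):

scp user@backup-host:/some/where/soleil-full-2020-09-16.42.dar /mnt/Backup/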

If you do not have or do not want to use a disk for this temporary storage, you can rely on your host's memory thanks to a tmpfs file-system:

[root@sysresccd /mnt]# mkdir /mnt/Backup
[root@sysresccd /mnt]# mount -t tmpfs -o size=2G tmpfs /mnt/Backup
[root@sysresccd /mnt]#

NAS with FTP or SFTP

During the SystemRescueCd boot process, you have been asked to provide network information, so we assume you did well and this volatile system has operational network access (DHCP or not does not matter at this step, whatever the network configuration of the system we are restoring is). If you plan to use FTP or SFTP embedded within dar, you do not need to prepare any local temporary storage; there just remains the network access to the NAS to validate:

[root@sysresccd ~]# ping 192.168.6.6
PING 192.168.6.6 (192.168.6.6) 56(84) bytes of data.
64 bytes from 192.168.6.6: icmp_seq=1 ttl=64 time=1.33 ms
64 bytes from 192.168.6.6: icmp_seq=2 ttl=64 time=0.667 ms
^C
--- 192.168.6.6 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 0.667/0.999/1.332/0.334 ms
[root@sysresccd ~]#

It is also possible to validate the FTP or SFTP access using the associated credentials with the CLI ftp or sftp command.
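For example, a quick interactive check with the sftp client (login and host taken from the examples used later in this document):

[root@sysresccd ~]# sftp denis@192.168.6.6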

Preparing partitions

As stated above, you have total freedom to create the same or a different partition layout; it will not reduce or impact the ability to restore with dar. This may be the opportunity to use LVM, RAID or a SAN, LUKS ciphered volumes, or at the opposite to get back to plain old partitions. That's up to you to decide. In the following we will first use plain partitions with UEFI boot (and MBR boot), then in the variations part of this document we will revisit the process using LVM and UEFI, then again with even more stuff: LUKS, LVM and UEFI all at the same time.

The EFI partition

To boot with UEFI, a small EFI partition has to be created and vfat formatted. Here we used a size of 1 MiB, which is large enough for a host booting a single Linux (using grub), but you will sometimes find it with a size of 512 MiB.

[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries in memory.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-209715166, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-209715166, default = 209715166) or {+-}size{KMGTP}: 4095
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI System'

Command (? for help): p
Disk /dev/sda: 209715200 sectors, 100.0 GiB
Model: QEMU HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 209715166
Partitions will be aligned on 2048-sector boundaries
Total free space is 209713085 sectors (100.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF00  EFI System

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
The operation has completed successfully.
[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
`-sda1   8:1    0     1M  0 part
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]#

The root partition

Here we will use a single partition to restore the system to, but you are free to use as many as you want (and also to use LVM instead of partitions if you prefer; see the variations part at the end of this document).

[root@sysresccd ~]# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): n
Partition number (2-128, default 2):
First sector (34-209715166, default = 4096) or {+-}size{KMGTP}:
Last sector (4096-209715166, default = 209715166) or {+-}size{KMGTP}: +80G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): p
Disk /dev/sda: 209715200 sectors, 100.0 GiB
Model: QEMU HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 209715166
Partitions will be aligned on 2048-sector boundaries
Total free space is 41940925 sectors (20.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF00  EFI System
   2            4096       167776255   80.0 GiB    8300  Linux filesystem

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
The operation has completed successfully.
[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
|-sda1   8:1    0     1M  0 part
`-sda2   8:2    0    80G  0 part
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]#

A swap space

It is always a good idea to have swap space, either as a swap file or, better, as one or several swap partitions (not necessarily big ones, depending on your needs). Here follows the creation of a 1 GiB swap partition:

[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
|-sda1   8:1    0     1M  0 part
`-sda2   8:2    0    80G  0 part
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): n
Partition number (3-128, default 3):
First sector (34-209715166, default = 167776256) or {+-}size{KMGTP}:
Last sector (167776256-209715166, default = 209715166) or {+-}size{KMGTP}: +1G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8200
Changed type of partition to 'Linux swap'

Command (? for help): p
Disk /dev/sda: 209715200 sectors, 100.0 GiB
Model: QEMU HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 209715166
Partitions will be aligned on 2048-sector boundaries
Total free space is 39843773 sectors (19.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF00  EFI System
   2            4096       167776255   80.0 GiB    8300  Linux filesystem
   3       167776256       169873407   1024.0 MiB  8200  Linux swap

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
The operation has completed successfully.
[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
|-sda1   8:1    0     1M  0 part
|-sda2   8:2    0    80G  0 part
`-sda3   8:3    0     1G  0 part
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]#

Formatting File-systems

swap partition

In order to be usable, we have to format all the partitions we just created. Let's start with the swap partition:

[root@sysresccd ~]# gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 209715200 sectors, 100.0 GiB
Model: QEMU HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 209715166
Partitions will be aligned on 2048-sector boundaries
Total free space is 39843773 sectors (19.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF00  EFI System
   2            4096       167776255   80.0 GiB    8300  Linux filesystem
   3       167776256       169873407   1024.0 MiB  8200  Linux swap
[root@sysresccd ~]# mkswap /dev/sda3
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=51f75caa-6cf3-421f-a18a-c58e77f61795
[root@sysresccd ~]#

Optionally, we can even use this swap partition right now for the current rescue system; this may be interesting especially if you used a tmpfs file-system as temporary local storage:

[root@sysresccd ~]# free
              total        used        free      shared  buff/cache   available
Mem:        8165684       84432      139112       95864     7942140     7680632
Swap:             0           0           0
[root@sysresccd ~]# swapon /dev/sda3
[root@sysresccd ~]# free
              total        used        free      shared  buff/cache   available
Mem:        8165684       85372      137976       95864     7942336     7679804
Swap:       1048572           0     1048572
[root@sysresccd ~]#

Root file-system

Nothing tricky here:

[root@sysresccd ~]# gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 209715200 sectors, 100.0 GiB
Model: QEMU HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 209715166
Partitions will be aligned on 2048-sector boundaries
Total free space is 39843773 sectors (19.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF00  EFI System
   2            4096       167776255   80.0 GiB    8300  Linux filesystem
   3       167776256       169873407   1024.0 MiB  8200  Linux swap
[root@sysresccd ~]# mkfs.ext4 /dev/sda2
mke2fs 1.45.0 (6-Mar-2019)
Discarding device blocks: done
Creating filesystem with 20971520 4k blocks and 5242880 inodes
Filesystem UUID: ec6319f3-789f-433d-a983-01d577e3e862
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

[root@sysresccd ~]#

We will mount this partition to be able to restore data into it:

[root@sysresccd ~]# mkdir /mnt/R
[root@sysresccd ~]# mount /dev/sda2 /mnt/R
[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
|-sda1   8:1    0     1M  0 part
|-sda2   8:2    0    80G  0 part /mnt/R
`-sda3   8:3    0     1G  0 part [SWAP]
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]#

EFI Partition

The EFI partition is a vfat partition that is usually mounted under /boot/efi after the system has booted. So we will format it and mount it there under /mnt/R, where we have temporarily mounted the future root file-system.

If you use the legacy MBR booting process in your original system, you just have to skip this EFI partition step: when reinstalling grub, the MBR will be setup as expected.

[root@sysresccd ~]# mkfs.vfat -n UEFI /dev/sda1
mkfs.fat 4.1 (2017-01-24)
[root@sysresccd ~]# cd /mnt/R
[root@sysresccd /mnt/R]# mkdir -p boot/efi
[root@sysresccd /mnt/R]# mount /dev/sda1 boot/efi
[root@sysresccd /mnt/R]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
|-sda1   8:1    0     1M  0 part /mnt/R/boot/efi
|-sda2   8:2    0    80G  0 part /mnt/R
`-sda3   8:3    0     1G  0 part [SWAP]
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part /mnt/Backup
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd /mnt/R]#

Restoring data with dar

All is ready to receive the data, so we run dar, here below in the case of a DAS or a NAS without the (S)FTP protocols:

[root@sysresccd ~]# cd /mnt/Backup
[root@sysresccd /mnt/Backup]# ls -al
total 3948
drwxr-xr-x 4 root root    4096 Oct  4 14:49 .
drwxr-xr-x 1 root root      80 Oct  4 15:35 ..
-rwxr-xr-x 1 root root 4017928 Oct  4 14:49 dar_static
drwx------ 2 root root   16384 Oct  4 13:49 lost+found
drwxr-xr-x 2 root root    4096 Oct  4 15:00 soleil-full-2020-09-16
[root@sysresccd /mnt/Backup]# ./dar_static -x soleil-full-2020-09-16/soleil-full-2020-09-16 -R /mnt/R -X "lost+found" -w
Archive soleil-full-2020-09-16 requires a password:
Warning, the archive soleil-full-2020-09-16 has been encrypted. A wrong key
is not possible to detect, it would cause DAR to report the archive as corrupted

 --------------------------------------------
 62845 inode(s) restored
    including 11 hard link(s)
 0 inode(s) not restored (not saved in archive)
 0 inode(s) not restored (overwriting policy decision)
 0 inode(s) ignored (excluded by filters)
 0 inode(s) failed to restore (filesystem error)
 0 inode(s) deleted
 --------------------------------------------
 Total number of inode(s) considered: 62845
 --------------------------------------------
 EA restored for 1 inode(s)
 FSA restored for 0 inode(s)
 --------------------------------------------
[root@sysresccd /mnt/Backup]#

For a NAS with SFTP or FTP this is even simpler, though we have to download dar_static first:

[root@sysresccd ~]# scp denis@192.168.6.6:/mnt/Backup/dar_static .
The authenticity of host '192.168.6.6 (192.168.6.6)' can't be established.
ECDSA key fingerprint is SHA256:6l+YisP2V2l82LWXvWb1DFFYEkzxRex6xmSoY/KY2YU.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.6.6' (ECDSA) to the list of known hosts.
denis@192.168.6.6's password:
dar_static                                    100% 3924KB  72.7MB/s   00:00
[root@sysresccd ~]# ./dar_static -x sftp://denis@192.168.6.6/mnt/Backup/Soleil/soleil-full-2020-09-16/soleil-full-2020-09-16 -R /mnt/R -X "lost+found" -w
Please provide the password for login denis at host 192.168.6.6:
Archive soleil-full-2020-09-16 requires a password:
Warning, the archive soleil-full-2020-09-16 has been encrypted. A wrong key
is not possible to detect, it would cause DAR to report the archive as corrupted

 --------------------------------------------
 62845 inode(s) restored
    including 11 hard link(s)
 0 inode(s) not restored (not saved in archive)
 0 inode(s) not restored (overwriting policy decision)
 0 inode(s) ignored (excluded by filters)
 0 inode(s) failed to restore (filesystem error)
 0 inode(s) deleted
 --------------------------------------------
 Total number of inode(s) considered: 62845
 --------------------------------------------
 EA restored for 1 inode(s)
 FSA restored for 0 inode(s)
 --------------------------------------------
[root@sysresccd /mnt/Backup]#

Adaptation of the restored data

The UUIDs of the different file-systems and the swap space have been recreated; if the restored /etc/fstab points to file-systems by their UUID, we have to adapt it to the new UUIDs. The blkid command lets you grab the UUIDs of the file-systems we created, including the swap partition, so we can edit /mnt/R/etc/fstab (using vi or joe, both available from SystemRescueCd).

If your system boots by means of an initramfs, you should also check and possibly edit the restored /mnt/R/etc/initramfs-tools/conf.d/resume with the new UUID of the swap partition.

Note: we can also look for the original UUIDs and, when creating the file-systems (formatting them), provide the same UUID as the one used on the backed-up system for each of them. This implies you have saved the information provided by blkid within the backup. See the -U option of mkfs.ext4 and mkswap (and the -i option of mkfs.vfat, which uses a 32-bit volume ID rather than a UUID) to provide the identifier the file-system should be created with. Both methods are valid; the latter does not require adapting the restored data afterward.
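A sketch of this second method, assuming the identifiers below were recorded with blkid on the original system (the values shown are placeholders):

mkfs.ext4 -U ec6319f3-789f-433d-a983-01d577e3e862 /dev/sda2
mkswap -U 51f75caa-6cf3-421f-a18a-c58e77f61795 /dev/sda3
# vfat has no UUID but a 32-bit volume ID, set with -i (no dash in the value)
mkfs.vfat -i CB524920 -n UEFI /dev/sda1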

If, like me, you like none of these editors but prefer emacs, for example for its ability to run an embedded shell and copy&paste between the shell running blkid and the fstab file you are editing, and assuming you have it ready for use in the system under restoration, you can delay this edition of fstab to the time we will have chrooted; see below.

Note that the root file-system UUID has no importance, as we will regenerate the ramdisk and grub configuration file based on its new UUID. However, if you have more partitions than the few we had in this example, /mnt/R/etc/fstab should be updated with their new UUIDs or /dev/ paths accordingly:

[root@sysresccd ~]# blkid
/dev/sda1: SEC_TYPE="msdos" LABEL_FATBOOT="UEFI" LABEL="UEFI" UUID="CB52-4920" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="edb894df-e58f-4590-a167-bf5b9025a691"
/dev/sda2: UUID="ec6319f3-789f-433d-a983-01d577e3e862" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="8f707306-e1b5-4019-aabb-0d39da9057be"
/dev/sda3: UUID="51f75caa-6cf3-421f-a18a-c58e77f61795" TYPE="swap" PARTLABEL="Linux swap" PARTUUID="d0e52f52-3cd3-4396-8e03-972d9f76af49"
/dev/sdb1: UUID="c7ee69b8-89f4-4ae3-92cb-b0a9e41a5fa8" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="15e0fb22-7de7-487c-8a68-ecaa2bb19dd0"
/dev/sr0: UUID="2019-04-14-11-35-22-00" LABEL="SYSRCD603" TYPE="iso9660" PTUUID="0d4f1b4a" PTTYPE="dos"
/dev/loop0: TYPE="squashfs"
[root@sysresccd ~]# vi /mnt/R/etc/fstab
[root@sysresccd ~]# vi /mnt/R/etc/initramfs-tools/conf.d/resume

Let's now reinstall the boot loader (grub in our case). To achieve this goal we will chroot into /mnt/R, but as in this chrooted environment we will also need access to /dev, /proc and /sys (and, if using UEFI boot, the /sys/firmware/efi/efivars file-system), we will bind-mount those inside /mnt/R:

[root@sysresccd ~]# mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=4060004k,nr_inodes=1015001,mode=755)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/sr0 on /run/archiso/bootmnt type iso9660 (ro,relatime,nojoliet,check=s,map=n,blocksize=2048)
cowspace on /run/archiso/cowspace type tmpfs (rw,relatime,size=262144k,mode=755)
/dev/loop0 on /run/archiso/sfs/airootfs type squashfs (ro,relatime)
airootfs on / type overlay (rw,relatime,lowerdir=/run/archiso/sfs/airootfs,upperdir=/run/archiso/cowspace/persistent_SYSRCD603/x86_64/upperdir,workdir=/run/archiso/cowspace/persistent_SYSRCD603/x86_64/workdir,index=off)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
configfs on /sys/kernel/config type configfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
tmpfs on /etc/pacman.d/gnupg type tmpfs (rw,relatime,mode=755)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=816568k,mode=700)
/dev/sdb1 on /mnt/Backup type ext4 (rw,relatime)
/dev/sda2 on /mnt/R type ext4 (rw,relatime)
/dev/sda1 on /mnt/R/boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro)
[root@sysresccd ~]# cd /mnt/R
[root@sysresccd /mnt/R]# mount --bind /proc proc
[root@sysresccd /mnt/R]# mount --bind /sys sys
[root@sysresccd /mnt/R]# mount --bind /dev dev
[root@sysresccd /mnt/R]# mount --bind /run run
[root@sysresccd /mnt/R]# mount --bind /sys/firmware/efi/efivars sys/firmware/efi/efivars
[root@sysresccd /mnt/R]# chroot . /bin/bash
root@sysresccd:/#

If not done previously, you can now edit /etc/fstab with your favorite text editor available in the system under restoration. Then we can reinstall grub, rebuild the initramfs (if used), and exit the chrooted environment:

root@sysresccd:/# export PATH=/sbin:/usr/sbin:/bin:$PATH
root@sysresccd:/# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.15.18-21-pve
root@sysresccd:/# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.18-21-pve
Found initrd image: /boot/initrd.img-4.15.18-21-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done
root@sysresccd:/# grub-install
Installing for x86_64-efi platform.
Installation finished. No error reported.
root@sysresccd:~# exit
exit
[root@sysresccd /mnt/R]#

If you get the following warning when running update-grub, you probably forgot to bind-mount /run as described in the previous paragraph:

WARNING: Device /dev/XYZ not initialized in udev database even after waiting 10000000 microseconds.

Checking the motherboard when rebooting

You can restart the system now and remove the SystemRescueCd boot device we used for the restoration process.

[root@sysresccd /mnt/R]# shutdown -r now

At the first boot, make a halt in the "BIOS" (press the F2, F9 or Del key depending on the hardware) to check that the motherboard points to the correct binary inside the EFI partition of the hard disk, or, if using the MBR booting process instead, check that the hard disk is in the correct place of the boot device list.

Networking Interfaces

Now that the system is back up and running, the network interface names may have changed, depending on the nature of the new hardware. You may have to edit /etc/network/interfaces or the equivalent configuration file (/etc/sysconfig/network-scripts/...), if not using automatic tools like NetworkManager and the DHCP protocol for example.
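A quick way to list the interface names the new hardware received, before editing the configuration:

ip -br link show    # or simply: ip link show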



THIS ENDS THE RESTORATION PROCESS. WE WILL NOW SEE SOME VARIATIONS OF THIS PROCESS FOR SOME MORE SPECIFIC CONTEXTS.




Restoring to LVM volumes

You might prefer, especially when using Proxmox Virtual Environment, to restore to LVM, having a Logical Volume for the root file-system (the Proxmox system) and its swap space, and allocating the rest of the space to a thin-pool for the VMs to have their block storage.

Note that saving the Proxmox VE host as a normal Debian system is fine, but this will not save the VMs and containers you had running under Proxmox. However, you can save the /var/lib/vz/dump directory where the backups of your VMs reside. This assumes you have scheduled a backup process within Proxmox VE for these VMs and containers.

Creating partitions and Logical Volumes

Compared to the previous restoration steps, what changes is that you will create only two partitions: the EFI partition and an LVM partition:

[root@sysresccd ~]# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-209715166, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-209715166, default = 209715166) or {+-}size{KMGTP}: 4095
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI System'

Command (? for help): n
Partition number (2-128, default 2):
First sector (34-209715166, default = 4096) or {+-}size{KMGTP}:
Last sector (4096-209715166, default = 209715166) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
The operation has completed successfully.
[root@sysresccd ~]# gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 209715200 sectors, 100.0 GiB
Model: QEMU HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): F19B9BC1-4DA0-4213-97AD-2E8A4172ADDF
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 209715166
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF00  EFI System
   2            4096       209715166   100.0 GiB   8E00  Linux LVM
[root@sysresccd ~]#

Formatting the partitions and volumes

The formatting of the EFI partition has been shown already, so we will not detail it here, but it must be done now in order for the following steps to succeed.

Remains the LVM related stuff to set up:

[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0   100G  0 disk
|-sda1   8:1    0     1M  0 part
`-sda2   8:2    0   100G  0 part
sdb      8:16   0    32G  0 disk
`-sdb1   8:17   0    32G  0 part
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]# pvcreate /dev/sda2
  Physical volume "/dev/sda2" successfully created.
[root@sysresccd ~]# vgcreate soleil /dev/sda2
  Volume group "soleil" successfully created
[root@sysresccd ~]# lvcreate -L 9G soleil -n rootfs
  Logical volume "rootfs" created.
[root@sysresccd ~]# lvcreate -L 1G soleil -n swap
  Logical volume "swap" created.
[root@sysresccd ~]# mkswap /dev/mapper/soleil-swap
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=8aa8e971-3aea-4357-8723-dbc9392bacf8
[root@sysresccd ~]# swapon /dev/mapper/soleil-swap
[root@sysresccd ~]# mkfs.ext4 /dev/mapper/soleil-rootfs
mke2fs 1.45.0 (6-Mar-2019)
Discarding device blocks: done
Creating filesystem with 2359296 4k blocks and 589824 inodes
Filesystem UUID: 65561197-1e85-498d-9127-bb8f4bc142ac
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@sysresccd ~]#

Now that all partitions are created as previously, we can mount them to get ready for the dar restoration:

[root@sysresccd ~]# cd /mnt
[root@sysresccd /mnt]# mkdir R
[root@sysresccd /mnt]# mount /dev/mapper/soleil-rootfs R
[root@sysresccd /mnt]# cd R
[root@sysresccd /mnt/R]# mkdir -p boot/efi
[root@sysresccd /mnt/R]# mount /dev/sda1 boot/efi
[root@sysresccd /mnt/R]# lsblk -i
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0               7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda                 8:0    0   100G  0 disk
|-sda1              8:1    0     1M  0 part /mnt/R/boot/efi
`-sda2              8:2    0   100G  0 part
  |-soleil-rootfs 254:0    0     9G  0 lvm  /mnt/R
  `-soleil-swap   254:1    0     1G  0 lvm  [SWAP]
sdb                 8:16   0    32G  0 disk
`-sdb1              8:17   0    32G  0 part
sr0                11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd /mnt/R]#

Restoring the data with dar

By default in Proxmox, /var/lib/vz is in the root file-system. We could restore as described above, but it may also be interesting to do otherwise: create a thin-pool and use a thin-volume inside it for /var/lib/vz, in order to not saturate the Proxmox system with backups, while not dedicating a whole partition to it but sharing this space with the VM volumes.

Creating a thin-pool

Creating a thin pool is done in three steps.

[root@sysresccd /mnt/R]# lvcreate -n metadata -L 300M soleil
  Logical volume "metadata" created.
[root@sysresccd /mnt/R]# lvcreate -n pooldata -L 80G soleil
  Logical volume "pooldata" created.
[root@sysresccd /mnt/R]# lvconvert --type thin-pool --poolmetadata soleil/metadata soleil/pooldata
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting soleil/pooldata and soleil/metadata to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert soleil/pooldata and soleil/metadata? [y/n]: y
  Converted soleil/pooldata and soleil/metadata to thin pool.
[root@sysresccd /mnt/R]# lsblk -i
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda                         8:0    0   100G  0 disk
|-sda1                      8:1    0     1M  0 part /mnt/R/boot/efi
`-sda2                      8:2    0   100G  0 part
  |-soleil-rootfs         254:0    0     9G  0 lvm  /mnt/R
  |-soleil-swap           254:1    0     1G  0 lvm  [SWAP]
  |-soleil-pooldata_tmeta 254:2    0   300M  0 lvm
  | `-soleil-pooldata     254:4    0    80G  0 lvm
  `-soleil-pooldata_tdata 254:3    0    80G  0 lvm
    `-soleil-pooldata     254:4    0    80G  0 lvm
sdb                         8:16   0    32G  0 disk
`-sdb1                      8:17   0    32G  0 part
sr0                        11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd /mnt/R]#

Using the thin-pool for /var/lib/vz

Now that the thin-pool is created, we can use it to create a thin Logical Volume, in other words a volume that consumes from the thin-pool only the data space it really needs, sharing the free space with the other thin volumes of this thin-pool (see also the discard directive when mounting file-systems, or the fstrim system command, sketched after the next example).

[root@sysresccd /mnt/R]# lvcreate -n vz -V 20G --thinpool pooldata soleil
  Logical volume "vz" created.
[root@sysresccd /mnt/R]# mkfs.ext4 /dev/mapper/soleil-vz
mke2fs 1.45.0 (6-Mar-2019)
Discarding device blocks: done
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: a2284c87-a0c9-419f-ba19-19cb5df46d4a
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@sysresccd /mnt/R]# mkdir -p var/lib/vz
[root@sysresccd /mnt/R]# mount /dev/mapper/soleil-vz var/lib/vz
[root@sysresccd /mnt/R]# lsblk -i
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                         7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda                           8:0    0   100G  0 disk
|-sda1                        8:1    0     1M  0 part /mnt/R/boot/efi
`-sda2                        8:2    0   100G  0 part
  |-soleil-rootfs           254:0    0     9G  0 lvm  /mnt/R
  |-soleil-swap             254:1    0     1G  0 lvm  [SWAP]
  |-soleil-pooldata_tmeta   254:2    0   300M  0 lvm
  | `-soleil-pooldata-tpool 254:4    0    80G  0 lvm
  |   |-soleil-pooldata     254:5    0    80G  0 lvm
  |   `-soleil-vz           254:6    0    20G  0 lvm  /mnt/R/var/lib/vz
  `-soleil-pooldata_tdata   254:3    0    80G  0 lvm
    `-soleil-pooldata-tpool 254:4    0    80G  0 lvm
      |-soleil-pooldata     254:5    0    80G  0 lvm
      `-soleil-vz           254:6    0    20G  0 lvm  /mnt/R/var/lib/vz
sdb                           8:16   0    32G  0 disk
`-sdb1                        8:17   0    32G  0 part
sr0                          11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd /mnt/R]#
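As a side note, here is a hedged example of giving unused blocks back to the thin-pool once the restored system is running, if you did not mount the volume with the discard option:

# trim unused blocks of /var/lib/vz so the thin-pool can reuse them
fstrim -v /var/lib/vz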

Now we can restore using dar the same way we did above without LVM. The VM backups will go into the thin-volume, the rest of the Proxmox system will be preserved in its Logical Volume from the activity of the VMs and their backups, and the content of the EFI partition will be restored as well.

[root@sysresccd /mnt/R]# cd /mnt/Backup
[root@sysresccd /mnt/Backup]# ./dar_static -x soleil-full-2020-09-16/soleil-full-2020-09-16 -R /mnt/R -X "lost+found" -w
[...]

Once dar has completed, you will have to adapt /mnt/R/etc/fstab, both for the UUIDs if they were used, and for the /dev/sdX entries that may become /dev/mapper/<vgname>-<volume-name> if moving several partitions to LVM volumes or changing VG and LV names. Here, as we split the content of /var/lib/vz into a dedicated thin-volume, we have to add a new line in fstab for this volume to be mounted at system startup time:

[root@sysresccd /mnt/R]# echo "/dev/mapper/soleil-vz /var/lib/vz ext4 defaults 0 2" >> /mnt/R/etc/fstab

The end of the process is the same as above: chrooting and reinstalling grub.

Proxmox Specific

As we did not save nor restore the block devices of the VMs (the thin-pool) but just have their backups restored in /var/lib/vz/dump, we need to remove the VMs referred to in the Proxmox database (which do not exist anymore) and restore them from their backups:

root@soleil:~# for vm in `qm list | sed -rn 's/\s+([0-9]+).*/\1/p'` ; do qm set $vm --protect no ; qm destroy $vm ; done
...
root@soleil:~# qm list
root@soleil:~#

Now, from the Proxmox GUI, you can restore all the VMs and containers from their backups. If using Ceph or another shared and distributed file-system instead of LVM, this task vanishes, as the block storage of the VMs is still present in the distributed storage cluster. How to add the local storage back to such a Ceph cluster is out of the scope of this document.
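As an alternative to the GUI, a hedged sketch using the qmrestore command (the VM id 100 and the dump file name are hypothetical; adapt them to the files present in /var/lib/vz/dump):

root@soleil:~# qmrestore /var/lib/vz/dump/vzdump-qemu-100-2020_09_16-00_00_01.vma.lzo 100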

Restoring a LUKS ciphered disk

When restoring with dar, you may take the opportunity to restore to a ciphered disk, even if the original system was not ciphered. You may also have backed up a ciphered system; either way we end up at the same point: we have to restore the system onto a ciphered disk.

For simplicity we will restore an LVM inside a ciphered volume, but the exercise is pretty similar if you restore an LVM and have some Logical Volumes being LUKS ciphered devices. The advantage of LVM inside LUKS is simplicity; the advantage of LUKS inside LVM is performance, when you do not want to have all volumes ciphered (for example a /var/spool/proxy holding public data, or the content of a public ftp server, is not worth ciphering).

As seen previously, the EFI partition cannot be part of an LVM; neither can it be ciphered, as the kernel must be loaded and running before a ciphered volume can be read. The second consequence is that the kernel and the mandatory initramfs must not reside in a ciphered partition. LUKS can prevent your data from being exposed to a thief; however, if someone has physical access to your computer and it is not running 24/7, LUKS alone cannot prevent them from modifying the kernel and ramdisk image used to boot, introducing a keylogger or another spying tool that will catch the secret key you need to enter at boot time to uncipher your LUKS disk. Detecting and preventing this type of attack is the role of the secure boot process, which we will not describe here today (maybe in a future revision of this document).

So we have to create an EFI partition, an unciphered boot partition, and a partition that will be ciphered and will contain the LVM (root, home and swap space for example). With the same commands we used above, here is the partitioning we should get:

[root@sysresccd ~]# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sda: 67108864 sectors, 32.0 GiB
Model: QEMU HARDDISK
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 23033D82-7166-4282-AEF9-F2CC18453F1C
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 67108830
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI Partition
   2         1050624         1550335   244.0 MiB   8300  Linux Boot
   3         1550336        67106815   31.3 GiB    8300  LUKS Device

Command (? for help): q
[root@sysresccd ~]#

We will format the EFI partition the same way we did above, and format the Linux boot partition with an ext4 file-system as we also did above. What is new here is the LUKS device, which we first have to initialize as a LUKS volume. The volume contains some metadata (ciphered keys, tokens,...) that has to be created first (and only once):

[root@sysresccd ~]# cryptsetup luksFormat /dev/sda3

WARNING!
========
This will overwrite data on /dev/sda3 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase for /dev/sda3:
Verify passphrase:
[root@sysresccd ~]#

Of course, if you forget the provided passphrase, you will lose all data stored in that volume. Note that this passphrase can be changed without having to rebuild or re-cipher the whole volume (see the sketch after the next example). Now we can open the volume, which means making the Linux kernel aware of the master key and able to cipher/uncipher data written to or read from this device:

[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0    32G  0 disk
|-sda1   8:1    0   512M  0 part
|-sda2   8:2    0   244M  0 part
`-sda3   8:3    0  31.3G  0 part
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]# cryptsetup open /dev/sda3 crypted_part
Enter passphrase for /dev/sda3:
[root@sysresccd ~]# lsblk -i
NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0              7:0    0 788.8M  1 loop  /run/archiso/sfs/airootfs
sda                8:0    0    32G  0 disk
|-sda1             8:1    0   512M  0 part
|-sda2             8:2    0   244M  0 part
`-sda3             8:3    0  31.3G  0 part
  `-crypted_part 254:0    0    32G  0 crypt
sr0               11:0    1   841M  0 rom   /run/archiso/bootmnt
[root@sysresccd ~]#
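As noted above, the passphrase can later be changed without re-ciphering the data. A minimal sketch (cryptsetup asks for an existing passphrase, then for the new one):

[root@sysresccd ~]# cryptsetup luksChangeKey /dev/sda3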

The rest is straightforward: we now have a /dev/mapper/crypted_part device we can use as a Physical Volume for LVM:

[root@sysresccd ~]# pvcreate /dev/mapper/crypted_part
  Physical volume "/dev/mapper/crypted_part" successfully created.
[root@sysresccd ~]# vgcreate vgname /dev/mapper/crypted_part
  Volume group "vgname" successfully created
[root@sysresccd ~]# lvcreate -n root -L 10G vgname
  Logical volume "root" created.
[root@sysresccd ~]# lvcreate -n home -L 8G vgname
  Logical volume "home" created.
[root@sysresccd ~]# lvcreate -n swap -L 1G vgname
  Logical volume "swap" created.
[root@sysresccd ~]# lvs
  LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home vgname -wi-a-----  8.00g
  root vgname -wi-a----- 10.00g
  swap vgname -wi-a-----  1.00g
[root@sysresccd ~]# lsblk -i
NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0               7:0    0 788.8M  1 loop  /run/archiso/sfs/airootfs
sda                 8:0    0    32G  0 disk
|-sda1              8:1    0   512M  0 part
|-sda2              8:2    0   244M  0 part
`-sda3              8:3    0  31.3G  0 part
  `-crypted_part  254:0    0    32G  0 crypt
    |-vgname-root 254:1    0    10G  0 lvm
    |-vgname-home 254:2    0     8G  0 lvm
    `-vgname-swap 254:3    0     1G  0 lvm
sr0                11:0    1   841M  0 rom   /run/archiso/bootmnt
[root@sysresccd ~]#

The following steps are almost identical to what we did earlier: formatting and mounting the volumes under /mnt/R, restoring with dar, then adapting the restored configuration as described below.

On a Linux system, /etc/crypttab is read at startup (from the initramfs) to know which volumes should be "opened" (like the cryptsetup open command we ran manually above). This will lead the system to ask for the passphrase to access the ciphered volume.

The /etc/crypttab file is structured per line, each line containing 4 space-separated fields: the name of the mapped device, the underlying block device (by UUID for example), the key file (or "none" to be prompted for a passphrase), and a list of options:

[root@sysresccd /mnt/R/etc]# echo "crypted_part UUID=4d76e357-f136-4f7e-addc-030436f37682 none luks,discard" > /mnt/R/etc/crypttab
[root@sysresccd /mnt/R/etc]#

The rest is exactly the same as before: chroot into /mnt/R, rebuild the initramfs and reinstall grub.

Last, before rebooting, you may want to close all of this properly; there is a pitfall about LVM on LUKS you have to be aware of. To close the LUKS volume, the LVM inside it must be deactivated first, else, as the LUKS device is kept busy by LVM, you won't be able to close it:

root@sysresccd:/# exit                              # exiting the chroot environment
exit
[root@sysresccd /mnt/R]#
[root@sysresccd /mnt/R]# umount /mnt/R/boot/efi
[root@sysresccd /mnt/R]# umount /mnt/R/boot
[root@sysresccd /mnt/R]# umount /mnt/R/home
[root@sysresccd /mnt/R]# swapoff /dev/mapper/vgname-swap    # if we activated this swap volume
[root@sysresccd /mnt/R]# umount /mnt/R/dev
[root@sysresccd /mnt/R]# umount /mnt/R/proc
[root@sysresccd /mnt/R]# umount /mnt/R/sys/firmware/efi/efivars
[root@sysresccd /mnt/R]# umount /mnt/R/sys
[root@sysresccd /mnt/R]# cd /
[root@sysresccd /]# umount /mnt/R
[root@sysresccd ~]# cryptsetup close crypted_part
Device sda3_crypt is still in use.                  # LVM still uses the crypted_part volume
[root@sysresccd ~]# vgchange -a n vgname
  0 logical volume(s) in volume group "vgname" now active
[root@sysresccd ~]# cryptsetup close crypted_part
[root@sysresccd ~]# lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 788.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0    32G  0 disk
|-sda1   8:1    0   512M  0 part
|-sda2   8:2    0   244M  0 part
`-sda3   8:3    0  31.3G  0 part
sr0     11:0    1   841M  0 rom  /run/archiso/bootmnt
[root@sysresccd ~]# shutdown -r now