jasonwryan.com

Miscellaneous ephemera…

Chroot in LVM on LUKS on Raid: Arch Linux


In my previous post describing this setup, I made the point that grub won’t be installed by the installer and that it is necessary to chroot in to install it on both drives. This is the procedure I use to do that—and to perform any other maintenance that requires working from a live environment.

Again, most of this information is on the Arch Wiki chroot page; I am just going to fill in the details around this setup: LVM on LUKS on Raid1.

Once you have booted into your live environment, load the required modules; in this case, raid1, dm-mod and dm-crypt:
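
modprobe raid1 && modprobe dm-mod && modprobe dm-crypt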

Check that udev hasn’t helpfully read the superblocks of your Raid drives and assembled phantom arrays:

cat /proc/mdstat

I find that there are generally a couple there, assigned the names /dev/md126 and /dev/md127. These need to be stopped before assembling the correct arrays:

mdadm --stop /dev/md12[67]
mdadm --assemble /dev/md0 /dev/sd[ab]1
mdadm --assemble /dev/md1 /dev/sd[ab]2

You should now have both your arrays up and running. The next step is to unlock your encrypted device:

cryptsetup luksOpen /dev/md1 cryptdisk

After entering your passphrase, your device will be unlocked. Next, make the logical volumes available, and then check they are correct:

vgchange --available y vgroup
lvscan
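
For this build, lvscan should report the three volumes as ACTIVE, something like this (the sizes here are illustrative):

ACTIVE   '/dev/vgroup/lvroot' [20.00 GiB] inherit
ACTIVE   '/dev/vgroup/lvswap' [2.00 GiB] inherit
ACTIVE   '/dev/vgroup/lvdata' [909.00 GiB] inherit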

At this point, you are ready to mount the devices and chroot:

mkdir /mnt/arch
mount /dev/mapper/vgroup-lvroot /mnt/arch

The next steps are straight from the wiki:

cd /mnt/arch
mount -t proc proc proc/
mount -t sysfs sys sys/
mount -o bind /dev dev/

Mount the other parts of the system that you need to use in your recovery – in this case, I need my /boot partition:

mount /dev/md0 boot/
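
If your recovery also needs files from the data volume, the same approach applies; this assumes lvdata is the volume mounted at /home:

mount /dev/mapper/vgroup-lvdata home/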

Now chroot to the device and define your shell:

chroot . /bin/bash

The wiki has some good advice about customizing your prompt to reinforce the fact that you are in a chroot:

export PS1="(chroot) $PS1"

With everything mounted, it is just a matter of performing your maintenance. To reinstall grub:

# grub
grub> find /grub/stage1
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Repeat for /dev/sdb, and both your drives will be bootable in the event that one fails.
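
The device command is what makes this work: remapping hd0 to /dev/sdb writes an MBR that treats sdb as the first disk, which is exactly what it will be if sda fails. From the same grub shell:

grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit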

With the maintenance accomplished, all that remains is to exit the chroot and unmount the devices cleanly:

exit
umount {proc,sys,dev,boot...}
cd ..
umount arch/
reboot

Simple.

Creative Commons image by MPD01605

LVM on LUKS on RAID1 on Arch Linux


After much procrastination, I finally got around to moving the last of my machines to Arch Linux: my home desktop, which has been running Ubuntu since the end of 2007. I’d like to share a few thoughts about why I finally walked away from Ubuntu, but I’ll save that for another long post.

Anyway, the setup I decided on was reasonably straightforward: two 1TB drives comprising two Raid1 devices, one for a small (150MB) /boot and the other holding three logical volumes in a LUKS crypt. This way, I have the flexibility to grow the volumes as needed, and only require the single passphrase at boot to unlock the enclosed partitions: root (/), swap and /home.
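
The resulting stack, from the raw disks up, looks like this:

/dev/sda1 + /dev/sdb1 → /dev/md0 (Raid1) → /boot
/dev/sda2 + /dev/sdb2 → /dev/md1 (Raid1) → LUKS → LVM → /, swap and /home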

Almost all of the information here has been gleaned gratefully from other sources, principally the Arch Wiki entries on LUKS and LVM and this helpful blog post. This post and the next are notes to remind me how I got here and, in the follow-up, what I need to do to restore this build…

First up, prepare the two hard drives. This took roughly 90 hours for the 1TB drives, so allow some time:

dd if=/dev/urandom of=/dev/sda
dd if=/dev/urandom of=/dev/sdb

Once the drives are scrubbed (read the Arch Wiki on why that is necessary), you can move on to the actual business of installation. I used a 256MB thumb drive with the most recent x86_64 netinstall image burned to it. Boot into the live environment and set up the initial partitions using cfdisk:

cfdisk /dev/sda

I created a 150MB partition for /boot, marked with the boot flag, and then partitioned the rest of the drive. Both partitions should be primary, with FSType linux_raid (the FD option). Write the partition table to a file and then import it to the second drive so both are exactly the same:

sfdisk -d /dev/sda > part-table
sfdisk /dev/sdb < part-table
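
A quick way to confirm that the two partition tables now match:

sfdisk -l /dev/sda
sfdisk -l /dev/sdb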

Now, set up the Raid arrays. Start by loading the required modules and then create the two arrays. The metadata flag is important: if you are using legacy grub, as I did, you must use the 0.90 type or your MBR will get overwritten:

modprobe raid1 && modprobe dm-mod
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --metadata=0.90 /dev/sd[ab]1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ab]2

NOTE: I have split some of the lines of code in this post to ensure they remain readable…

At this point, you can switch to another TTY to watch the disks sync—again, this could take some time, depending on the size of the drives.

watch -n1 cat /proc/mdstat
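
While the arrays are syncing, you will see output along these lines (the numbers here are purely illustrative):

md1 : active raid1 sdb2[1] sda2[0]
      976607296 blocks [2/2] [UU]
      [==>..................]  resync = 12.1% (118455168/976607296) finish=107.4min speed=133120K/sec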

Check that it all went to plan…

mdadm --misc --detail /dev/md[01] | less

Once the drives are synched and pronounced clean, it is time to encrypt the second Raid device, /dev/md1. You can’t encrypt the array that contains /boot, but having it duplicated in a Raid array means that, if one drive goes down, you can still boot from the other.

Load the module for encryption, and then set it up:

modprobe dm-crypt
cryptsetup --cipher=aes-xts-plain --verify-passphrase \
    --key-size=512 luksFormat /dev/md1
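
You can confirm that the header was written with the cipher and key size you asked for:

cryptsetup luksDump /dev/md1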

Now, open the encrypted device to create the logical volumes:

cryptsetup luksOpen /dev/md1 cryptdisk   [1]

The next three steps create, in this order, the physical volume (the container, if you will), the volume group, and then the individual logical volumes contained in the group. Choose simple, memorable names and do not hyphenate them. The {pv,vg,lv}display commands print out the details of the devices once created.

pvcreate /dev/mapper/cryptdisk
pvdisplay

vgcreate vgroup /dev/mapper/cryptdisk
vgdisplay

lvcreate --size 20G --name lvroot vgroup
lvcreate --contiguous y --size 2G --name lvswap vgroup
lvcreate --extents +100%FREE --name lvdata vgroup
lvdisplay

It should be pointed out that the Arch Installer Framework supports LUKS and LVM, so you can accomplish these last two steps from within the installer. I tried both and I found that I had a little more control doing it manually, but YMMV.

At this point, you are ready to enter the installer and complete the install. Run /arch/setup and move through the early setup until you reach the ‘Prepare Hard Drives’ section. Select option 3, ‘Configure block devices, mountpoints and filesystems’. Make sure that the only “raw” device that you configure is /dev/md0, your /boot partition. Otherwise, you are configuring the logical volumes, not the devices they are built on.

After successfully setting up the drives, install the base packages and then, once that is complete, switch TTYs and update your Raid configuration prior to configuring your system. This means that when your initrd is regenerated, it will include the correct Raid information:

mdadm --examine --scan > /mnt/etc/mdadm.conf
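
The resulting file should contain one ARRAY line per array, something like this (the UUIDs here are placeholders):

ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md1 metadata=1.2 UUID=eeeeeeee:ffffffff:11111111:22222222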

When you are done, move to configure your system and make sure to add the relevant details to both your /etc/rc.conf and your /etc/mkinitcpio.conf. In the former:

USELVM="yes"
USEDMRAID="yes"

and in /etc/mkinitcpio.conf:

MODULES="... dm_mod dm_crypt aes_x86_64 raid1 ..."
HOOKS="... udev mdadm_udev encrypt lvm2 filesystems ..."  [2]

That is the straightforward part of the install over. The final step is a little more tricky: installing a bootloader. My experience, after trying a number of times using different approaches, is that this won’t work; the installer doesn’t seem to like the Raid setup, and complains:

"Missing/Invalid root device for /dev/md0"

It’s not all bad, as this is the final hurdle and it is not a particularly high one.

The important step is to ensure that your /boot/grub/menu.lst is set up correctly. Given the names I chose above for my devices, mine looks like this:

# (0) Arch
title  Arch Linux
root   (hd0,0)
kernel /vmlinuz26 root=/dev/mapper/vgroup-lvroot \
cryptdevice=/dev/md1:vgroup ro
initrd /kernel26.img

# (1) Arch Fallback
title  Arch Linux Fallback
root   (hd0,0)
kernel /vmlinuz26 root=/dev/mapper/vgroup-lvroot \
cryptdevice=/dev/md1:vgroup ro
initrd /kernel26-fallback.img

Once it is all set up, simply exit the installer. At this point, you have a perfectly secure, redundant, new Arch Linux install that you can’t boot into. You have two choices: chroot in and manually install grub now, or reboot and do it then. I opted for the latter as it would be more like a recovery mission—and I wanted to make sure that I was capable enough to manage that before committing all of my backed-up data to the machine.

I’ll post the details of the relatively straightforward steps required to chroot into the system and install grub on both hard drives in a couple of days.

Notes
  1. The name cryptdisk is arbitrary; just make it memorable and use it consistently…

  2. I am not sure how necessary it is to have the Raid modules included here, given that they are explicitly called in /etc/rc.conf, but caution is warranted. UPDATED 12/4/12: as per the mkinitcpio article on the Wiki, the mdadm_udev hook is preferred over the plain mdadm hook in your /etc/mkinitcpio.conf.

monsterwm


If you have visited here at all over the last couple of weeks, you would have seen some screenshots from my Flickr stream of a new window manager, monsterwm. Over the break, I have been playing around with it and, despite the fact that the project is only really a few weeks old, it is well worth a look.

Originally a fork of dminiwm, which was in turn based on catwm and dwm [1], monsterwm for me—having used dwm for years—feels very much like a stripped down version of dwm, with a few of the patches (like pertag, for example) built in.

One of the project’s goals, like those of its antecedents, is to keep the SLOC low. By way of comparison, dwm’s long-standing goal has been to keep the SLOC below 2000 [2]. monsterwm’s current count is under 700. One of the ways that monsterwm achieves this is by not incorporating a status bar; as the README says:

Monsterwm does not provide a panel and/or statusbar itself. Instead it adheres to the UNIX philosophy and outputs information about the existent desktop, the number of windows on each, the mode of each desktop, the current desktop and urgent hints whenever needed. The user can use whatever tool or panel suits him best (dzen2, conky, w/e), to process and display that information.
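
As a minimal sketch of that approach, the raw status lines can be piped straight from your .xinitrc into a panel (the dzen2 flags here are illustrative):

# run the wm and feed its status output to a dzen2 bar
monsterwm | dzen2 -ta l -h 14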

So, what is it like to run? Obviously it is fast. It comes with four tiling modes (standard tile, backstack, grid and monocle) and, as mentioned above, the ability to define different layouts per desktop. There are still a couple of features to be implemented, floating mode being one of the more significant, and a couple of minor bugs, as you would expect for a project still in its relative infancy.

Overall, it is a very nice, minimalist window manager. If you don’t use the dynamic tagging feature of dwm, then this would be a window manager to consider switching to. The developer, Ivan “c00kiemon5ter” Kanakarakis, has been extremely responsive to users’ suggestions and requests (the thread on the Arch boards is quite active) and has been admirably clear about his vision for the wm.

You can see my monsterwm configs in my mercurial repository.

Notes
  1. Updated with the correct genealogy…

  2. “dwm is only a single binary, and its source code is intended to never exceed 2000 SLOC.” dwm.suckless.org

Cookie Monster image from San Diego Shooter.