I’ve been toying with the idea of running a variant of illumos on my home server for some time now. I started with Joyent’s SmartOS, but that is more of a specialized install geared toward running a hypervisor in a datacenter. The best options seemed to be OpenIndiana, which I have played with before, and OmniOS CE. I decided to go with OmniOS as I saw it had both bhyve support and lx zone support. I was always a fan of how zones worked back when I used Solaris 10; alongside ZFS, they were some of the “holy crap” features compared to what I had been used to with GNU/Linux. ZFS finally has decent support in most GNU/Linux distributions now, but they never got the great zone feature. When using virtualization on my Gentoo machine, I still usually spin up a server with KVM/QEMU. Zones and the associated utilities make that feel like real caveman stuff.
I am using the omnios-r151030j release as of this writing. According to the documentation, OmniOS CE recommends using bhyve branded zones over KVM for performance. I wanted to migrate
my Tiny Tiny RSS Ubuntu machine over to a zone on the OmniOS server, so I started with the bhyve flavor. I defined the zone as per the example on the
omniosce.org page, but used my ubuntu-18.04-live-server-amd64.iso for the installation file system. I booted the zone with zoneadm and attached to the console from another terminal using zlogin -C ubuntu, and I got a blinking cursor that hung around until I halted the zone. After some research, it seems
maybe this kernel was too new for bhyve, so I downloaded ubuntu-16.04.6-server-amd64.iso and tried that. This time, instead of
just a blinking cursor, the zone immediately halted. After some additional research, I zeroed in on switching the bootrom attr of the zone to BHYVE_RELEASE instead of BHYVE_CSM_RELEASE, which got me the Ubuntu installation menu. I then had to edit the boot menu option to use a serial console for the installation.
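For reference, switching that bootrom is just an attr change from the global zone, and the boot-plus-console dance is what I was already doing. Something like this does it, assuming the attr already exists from the example config (otherwise it needs an add attr block):

# point the zone at the plain UEFI firmware instead of the CSM build
pfexec zonecfg -z ubuntu "select attr name=bootrom; set value=BHYVE_RELEASE; end"
# boot it and grab the serial console from another terminal
pfexec zoneadm -z ubuntu boot
pfexec zlogin -C ubuntu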
The installation went smoothly, and after the zone rebooted, I got the CD installation menu again. I tried changing the bootorder
attr to dc instead of the default cd, but that also did not
seem to make it boot off the virtual hard drive. Finally, I went into the bhyve EFI menu and noticed that the virtual hard disk did not appear. I tried adding it manually, but that also did not succeed.
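For what it’s worth, flipping that bootorder attr was the same sort of one-liner as the bootrom change:

# try disk-then-cdrom instead of the default cdrom-then-disk
pfexec zonecfg -z ubuntu "select attr name=bootorder; set value=dc; end"
pfexec zoneadm -z ubuntu reboot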
After some frustrating hours of trial and error with zonecfg and zoneadm, I found two more pieces that made it work. First, I switched the diskif attr to ahci instead of the default virtio. This made the disk appear in the EFI menu (although I could never figure out how to make it the default boot device, other than removing the cdrom attr and fs from the zone). The second piece, which I learned from a closed zcage GitHub issue that linked to a FreeNAS bug, was copying the grubx64.efi file to a different location on the boot partition.
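The gist of that trick, done from inside the Ubuntu guest, is to put grub at the fallback path the firmware looks for when it has no stored boot entries; the paths below assume the stock Ubuntu EFI layout:

# copy grub to the default removable-media path the firmware falls back to
sudo mkdir -p /boot/efi/EFI/BOOT
sudo cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/BOOT/bootx64.efi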
The last step was to configure grub to use the serial console on boot, which was best documented here. For reference, the zone configuration I ended up with boots the bhyve branded zone straight from the virtual disk.
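It looks roughly like this as a zonecfg export; the zone path, VNIC, zvol, and sizing are placeholders for my actual values, and the attr names are the ones the OmniOS bhyve brand documents:

create -t bhyve
set zonepath=/zones/ubuntu
add net
set physical=ubuntu0
end
add device
set match=/dev/zvol/rdsk/rpool/ubuntu0
end
add attr
set name=bootdisk
set type=string
set value=rpool/ubuntu0
end
add attr
set name=bootrom
set type=string
set value=BHYVE_RELEASE
end
add attr
set name=diskif
set type=string
set value=ahci
end
add attr
set name=vcpus
set type=string
set value=2
end
add attr
set name=ram
set type=string
set value=2G
end

With the cdrom attr and the lofs fs for the ISO removed, this is what finally boots straight off the zvol.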
Here is the relevant section of the /etc/default/grub file on the ubuntu zone for booting with the serial console.
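It boils down to the usual serial console settings (the speed and unit here are the typical defaults and may need adjusting):

# send grub and the kernel to the serial console bhyve exposes
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"

Running update-grub afterwards regenerates grub.cfg with those settings.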
Overall this has been a decent learning experience. I’ve also played around with some lx branded zones; they took some interesting tricks to get running too, but they are much nicer to deal with. They natively support zlogin without needing the -C option, and they don’t need a separate zvol for their disk. One thing I can do with a bhyve branded zone that I have not yet managed to figure out with lx branded zones is limiting vcpus and memory.