Running OmniOS under KVM on on-prem Triton, Part 1
The first VM I wanted to build was OmniOS r151014. While the installed system was able to see a virtio disk, the installer was not, so I did the initial installation on a virtual IDE drive.
JSON spec for the installation VM:
{
  "alias": "omnios",
  "autoboot": false,
  "brand": "kvm",
  "ram": 2048,
  "vcpus": 3,
  "vnc_port": 53241,
  "disks": [
    {
      "boot": true,
      "model": "ide",
      "size": 10240
    }
  ],
  "nics": [
    {
      "nic_tag": "external",
      "model": "virtio",
      "ips": ["dhcp"],
      "primary": true
    }
  ]
}
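Before feeding the payload to vmadm it can be sanity-checked against the schema for the brand:
# Validates the create payload; complains about any bad fields
vmadm validate create -f omnios-kvm.json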
Create the installation VM:
vmadm create -f omnios-kvm.json
UUID=$(vmadm list -Ho uuid alias=omnios)
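A quick check that the VM landed as intended (it should be sitting in the stopped state, since autoboot is off):
# List just this VM with a few interesting fields
vmadm list -o uuid,type,ram,state alias=omnios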
I'm impatient and want the install to be as fast as possible:
zfs set sync=disabled zones/${UUID}-disk0
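Just don't forget to put it back once the install is done; standard is the ZFS default:
# Re-enable synchronous writes after the installer finishes
zfs set sync=standard zones/${UUID}-disk0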
Copy the ISO into the zone root where QEMU will be able to see it.
cp isos/OmniOS_Text_r151014.iso /zones/${UUID}/root/omnios.iso
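If you're paranoid about the copy, illumos ships digest(1); compare the result against the checksum published for the release:
digest -a sha256 /zones/${UUID}/root/omnios.iso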
Get the VNC client ready to connect before starting the VM.
vmadm start ${UUID} cdrom=/omnios.iso,ide order=d ; vmadm console ${UUID}
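If you've forgotten which VNC port you picked, vmadm info reports it once the VM is running:
# Prints the host/port/display the VNC server is listening on
vmadm info ${UUID} vnc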
Connect over VNC, hit escape, boot to the ttya option, then go back to the shell running vmadm console for a more pleasant installation experience.
Hit enter to accept the default keyboard layout.
Once presented with the menu, change the terminal type to "xterm" (option 4), then start the installer and install normally (option 1).
I'm picky about having vmadm console be maximally usable, so rather than immediately rebooting I quit the installer, dropped to a shell, and modified /rpool/boot/grub/menu.lst to look like so:
default 0
timeout 10
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal composite
#
title OmniOS v11 r151014
bootfs rpool/ROOT/omnios
kernel$ /platform/i86pc/kernel/amd64/unix -B console=ttya,ttya-mode="115200,8,n,1,-",$ZFS-BOOTFS
module$ /platform/i86pc/amd64/boot_archive
Then I issued a poweroff command, waited to see the "Press any key to reboot." message, disconnected from the console with Ctrl-], and issued
vmadm stop ${UUID} -f
to shut down the VM.
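Confirm it's really down before touching the disk config:
# Should print "stopped"
vmadm list -Ho state alias=omnios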
Now to switch to a virtio disk controller. OmniOS will get unhappy if the disk changes controllers (metadata about how to find the disk under /dev/ is stored in the pool), but there's a very simple fix: boot the VM from the ISO again, import the pool (to update the metadata), then export it and shut down:
ZVOL=$(vmadm get ${UUID} | json disks.0.path)
echo '{ "update_disks": [ { "path": "PATH", "model": "virtio" } ] }' | sed "s|PATH|${ZVOL}|" | vmadm update ${UUID}
vmadm start ${UUID} cdrom=/omnios.iso,ide order=d ; vmadm console ${UUID}
Again, connect over VNC and choose the ttya boot option, default keyboard layout, adjust terminal type to "xterm", and drop to a shell.
zpool import -R /fixup rpool ; zpool export rpool
poweroff
Force stop the VM, then verify that booting works as expected:
vmadm start ${UUID} ; vmadm console ${UUID}
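Once logged in at the console (as root, with the password set during the install), a couple of quick checks from inside the guest confirm that the pool is happy on its new controller:
# Pool should be ONLINE with no errors
zpool status rpool
# Lists disks without entering format's interactive menu; the rpool disk
# should now show up on the virtio controller
echo | format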
Now you have a VM that you can log into via the console. It's not prepared to be integrated with Triton yet, but I'm saving that for my next post. In the meantime, here's a teaser: https://github.com/omniti-labs/omnios-build/pull/101