Farewell Aruba, Hello OVH - A Migration Story

May 15, 2026, 7:22 p.m.

Aruba Cloud decided to retire VMware and push everyone onto their OpenStack offering. Fine, that is their call to make. What I am less excited about is the thirty-day deadline, and the part where their OpenStack does not provide a FreeBSD image.

The old VMware side actually had a FreeBSD template, BS13-001: FreeBSD 13, minimal, SSH as root, Fail2ban preconfigured, only port 22 open. A release or two behind, sure, but I do not mind old FreeBSD - I know how to run freebsd-update and migrate to pkg base. The problem is that those templates only live on the platform Aruba is killing. On the OpenStack side, there is no FreeBSD at all.

The offer was: stay on VMware and watch the invoice double for "extra support costs", migrate to Linux, or move out to another provider.

The decision was easy. I moved out. The destination is OVH.

When the entire system already lives on ZFS, migrating a server turns into one of those quietly enjoyable evenings where you mostly watch progress bars and sip coffee (or tea).

Snapshot and ship

First, a recursive snapshot of the root pool on the Aruba box:

sudo zfs snapshot -r zroot@ovh
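Before shipping anything, it is worth confirming that -r actually recursed. This check was not part of my original session, just a sanity step I would suggest:

```shell
# Count how many datasets picked up the @ovh snapshot;
# the number should match the dataset count of the pool.
zfs list -H -t snapshot -o name | grep -c '@ovh$'
```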

Then ship it to the new OVH machine. Instead of piping straight into zfs receive, I dumped the stream to a single file on the destination:

sudo zfs send -R zroot@ovh | zstd -T0 -19 | ssh freebsd@51.195.110.55 "cat > aruba.zfs"

There is no deep reason for the file detour. At the time, I liked the idea of having the whole filesystem sitting on disk as a single file, in case something went wrong during the import. Piping straight into zfs receive on the other side works just as well and saves the duplicate disk space. But this is how the story went.
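For completeness, the direct variant would look something like this - a sketch, not what I actually ran, and the receiving dataset name on the remote side is an assumption:

```shell
# Hypothetical no-detour version: decompress and receive on the
# remote side in one pipeline, skipping the intermediate file.
sudo zfs send -R zroot@ovh \
  | zstd -T0 -19 \
  | ssh freebsd@51.195.110.55 "zstd -dc | sudo zfs recv -dFu zroot/aruba"
```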

zstd -T0 -19 does the heavy lifting on compression. All cores, maximum level, because the bottleneck here is the network, not the CPU.

About ten hours later, the whole filesystem was sitting on the other side as a single file:

$ ls -lah aruba.zfs
-rw-r--r--  1 freebsd freebsd  8.1G May 14 19:50 aruba.zfs

Receiving the pool

OVH gave me one disk and one pool, so replacing the running system with the one in aruba.zfs takes a few extra steps. The real trick is a RAM disk and re-root, but we will get to that. First, decompress and receive the stream into a temporary parent dataset:

$ zfs create zroot/aruba
$ cat aruba.zfs | zstd -dc | zfs recv -dFu zroot/aruba

All datasets from Aruba now sit under zroot/aruba, side by side with the live OVH ones:

$ zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
[...]
zroot/aruba/tmp                            2.16M  42.1G  2.16M  /tmp
zroot/aruba/usr                            2.98M  42.1G    88K  /usr
zroot/aruba/usr/home                       2.73M  42.1G  2.73M  /usr/home
zroot/aruba/usr/ports                        88K  42.1G    88K  /usr/ports
zroot/aruba/usr/src                          88K  42.1G    88K  /usr/src
zroot/aruba/var                            6.52M  42.1G    88K  /var
zroot/aruba/var/audit                        88K  42.1G    88K  /var/audit
zroot/aruba/var/crash                        88K  42.1G    88K  /var/crash
zroot/aruba/var/log                        3.89M  42.1G  3.89M  /var/log
zroot/aruba/var/mail                       2.23M  42.1G  2.23M  /var/mail
zroot/aruba/var/tmp                         144K  42.1G   144K  /var/tmp
zroot/home                                 8.06G  42.1G  8.06G  /home
zroot/tmp                                   116K  42.1G   116K  /tmp
zroot/usr                                  1.64M  42.1G   424K  /usr
zroot/usr/obj                               420K  42.1G   420K  /usr/obj
zroot/usr/ports                             420K  42.1G   420K  /usr/ports
zroot/usr/src                               420K  42.1G   420K  /usr/src
zroot/var                                  2.76M  42.1G   424K  /var
zroot/var/audit                             428K  42.1G   428K  /var/audit
zroot/var/crash                             104K  42.1G   104K  /var/crash
zroot/var/log                               424K  42.1G   424K  /var/log
zroot/var/mail                                1M  42.1G     1M  /var/mail
zroot/var/tmp                               424K  42.1G   424K  /var/tmp
[...]

Re-rooting onto RAM

Now the fun part: replace the OVH datasets with the Aruba ones. We cannot do that from the running system - it is sitting on the very datasets we want to move. So I copy a minimal userland onto a RAM disk and re-root into it, then do the surgery from there.

rsync builds the temporary userland. The exclude list is long, but every entry is either runtime state (/dev, /proc, /tmp), bulk content we will not need in RAM (/usr/src, /usr/ports, /usr/obj, man pages, examples), or things that would just bloat the disk (/var/db/freebsd-update, port snapshots, logs):

$ mdconfig -s 4g
md0
$ newfs /dev/md0
/dev/md0: 4096.0MB (8388608 sectors) block size 32768, fragment size 4096
    using 7 cylinder groups of 625.22MB, 20007 blks, 80128 inodes.
    with soft updates
super-block backups (for fsck_ffs -b #) at:
 192, 1280640, 2561088, 3841536, 5121984, 6402432, 7682880
$ mount /dev/md0 /mnt
$ rsync -aHx \
  --exclude /dev \
  --exclude /proc \
  --exclude /tmp \
  --exclude /mnt \
  --exclude /media \
  --exclude /compat \
  --exclude /usr/src \
  --exclude /usr/ports \
  --exclude /usr/obj \
  --exclude /usr/share/man \
  --exclude /usr/share/doc \
  --exclude /usr/share/examples \
  --exclude /usr/tests \
  --exclude /var/cache \
  --exclude /var/crash \
  --exclude /var/db/freebsd-update \
  --exclude /var/db/freebsd-update/files \
  --exclude /var/db/freebsd-update/install.* \
  --exclude /var/db/portsnap \
  --exclude /var/db/pkg/repos \
  --exclude /var/freebsd-update \
  --exclude /var/log \
  --exclude /var/mail \
  --exclude /var/tmp \
  / /mnt
$ ls -lah /mnt/
total 81 KB
-r--r--r--   1 root wheel  5.9K Apr 30 21:10 COPYRIGHT
drwxr-xr-x   2 root wheel   49B May 11 18:11 bin
drwxr-xr-x  15 root wheel   73B May 15 07:15 boot
dr-xr-xr-x  12 root wheel  512B May 15 07:18 dev
-rw-------   1 root wheel  4.0K May 15 07:18 entropy
drwxr-xr-x  31 root wheel  111B May 14 10:14 etc
drwxr-xr-x   2 root wheel    2B May 11 18:10 home
drwxr-xr-x   8 root wheel   11B May 10 13:33 iocage
drwxr-xr-x   4 root wheel   80B May 11 18:11 lib
drwxr-xr-x   3 root wheel    8B Nov 28 04:11 libexec
drwxr-xr-x   2 root wheel    2B May 14 10:10 media
drwxr-xr-x  17 root wheel  512B May 15 07:18 mnt
drwxr-xr-x   2 root wheel    2B Nov 28 04:11 net
dr-xr-xr-x   2 root wheel    2B Nov 28 04:11 proc
drwxr-xr-x   2 root wheel  152B May 11 18:11 rescue
drwxr-x---   2 root wheel    9B May 15 07:09 root
drwxr-xr-x   2 root wheel  150B May 11 18:11 sbin
drwxrwxrwt   5 root wheel    5B May 15 07:18 tmp
drwxr-xr-x  16 root wheel   16B May 15 07:18 usr
drwxr-xr-x  24 root wheel   24B May 15 07:18 var
drwxr-xr-x   2 root wheel    2B Oct 13  2017 zroot
$ mkdir /mnt/dev

Now let's re-root onto the RAM disk. Re-rooting is the FreeBSD equivalent of saying "reboot, but not really" - keep the kernel running and restart userland against a new root. Which is exactly what we want: drop into a minimal system in RAM where nothing is holding the ZFS datasets open, and rename to our heart's content. The easier path would be to boot from a memdisk image instead, but well - who has the time.

$ kenv vfs.root.mountfrom=ufs:/dev/md0
$ reboot -r
$ df /
Filesystem 1K-blocks    Used  Avail Capacity  Mounted on
/dev/md0     4053532 3312712 416540    89%    /

I gave it a 4GB RAM disk and the system still has about 512MB free after the re-root. Before doing any of this, check how much RAM the machine actually has - this one has 8GB, which leaves comfortable room for both the RAM root and the kernel.
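On FreeBSD, a rough check looks like this (the awk conversion to gigabytes is mine, not from the original session):

```shell
# Print physical memory in GB; hw.physmem is reported in bytes.
sysctl -n hw.physmem | awk '{ printf "%.1f GB physical RAM\n", $1 / 1073741824 }'
```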

Swapping the datasets

What is left is the boring part: renaming. The OVH datasets get moved under zroot/ovh so I can keep them around in case I need anything from them - they get cleaned up afterwards:

$ zfs unmount -af
$ df -h
Filesystem           Size    Used   Avail Capacity  Mounted on
/dev/md0             3.9G    3.2G    407M    89%    /
devfs                1.0K      0B    1.0K     0%    /dev
/dev/gpt/efiboot0     32M    651K     31M     2%    /boot/efi
$ zfs create zroot/ovh
$ zfs rename zroot/ROOT zroot/ovh/ROOT
$ zfs rename zroot/tmp zroot/ovh/tmp
$ zfs rename zroot/usr zroot/ovh/usr
$ zfs rename zroot/var zroot/ovh/var

The list of top-level datasets that need renaming can be pulled out with a bit of shell magic:

zfs list -H | grep -v ovh | grep -v aruba | awk '{print $1}' | grep '^[^/]*/[^/]*$'
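The filter itself can be dry-run against a fabricated dataset list before pointing it at the real pool - the sample names below are made up:

```shell
# Only top-level datasets outside the ovh/aruba subtrees should survive.
# Expected survivors here: zroot/ROOT and zroot/tmp.
printf '%s\n' zroot zroot/ROOT zroot/ROOT/default zroot/aruba/var zroot/tmp \
  | grep -v ovh | grep -v aruba | grep '^[^/]*/[^/]*$'
```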

Then mark the OVH side canmount=off so it does not fight us on the next boot:

zfs list -H | awk '{print $1}' | grep ovh | xargs -I@ zfs set canmount=off @

Move the Aruba filesystems back into their proper places under zroot:

zfs list -H | awk '{ print $1 }' | grep aruba | sed 's#zroot/aruba##' | grep '^/[^/]*$' | xargs -I@ zfs rename zroot/aruba@ zroot@

Point the pool at the Aruba boot environment (upgrade-15 is the one that came across):

zpool set bootfs=zroot/ROOT/upgrade-15 zroot

A quick pass over pf.conf, rc.conf and friends, and we are ready to reboot. That is the entire migration story.
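The kind of thing that pass catches, sketched with assumed names: the NIC on the OVH machine will not be called the same as on the Aruba VM, so the ifconfig_* lines in rc.conf have to be repointed. Both interface names below are purely illustrative:

```shell
# List the interfaces the new kernel actually sees...
ifconfig -l
# ...and point rc.conf at the right one (vtnet0 is an assumption).
sysrc ifconfig_vtnet0="DHCP"
# Drop the stale VMware-era entry, if one came across (name assumed).
sysrc -x ifconfig_vmx0
```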

Closing words

In hindsight, I could have skipped the "dump to a file" detour entirely - drop straight into the RAM disk and zfs recv -F directly over the existing pool. But somehow this slightly more roundabout process felt safer at the time.

Farewell Aruba Cloud.

Welcome OVH.