Post by Scott Gasch
Thanks. It's as you say: when I rebooted with the 13.0 kernel and the 12.2
userland, the /var mountpoint (along with /home, /root, /usr/src, etc.)
sits on the zssd pool, which has disappeared. The system comes up, but my
home directory is gone and so is root's. I can log in, but the missing pool
means I can't finish upgrading userland to 13.0.
My first instinct was to restore /home, which is when I found that recreating
the missing pool does not survive a reboot in this state. Apparently,
creating an SSD-based mirror (or stripe, or single-provider pool) with the 12.2
zpool, creating some volumes on it with the 12.2 zfs, and then rebooting the
machine causes the newly created pool to vanish again.
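For illustration, the reproduction amounts to something like this (the device names here are hypothetical):

# zpool create zssd mirror /dev/ada1 /dev/ada2   # recreate the SSD mirror with the 12.2 zpool
# zfs create zssd/home                           # recreate a filesystem on it with the 12.2 zfs
# reboot                                         # after which the new pool is gone again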
I'm considering mounting /var on the HDD zfs pool temporarily and trying
freebsd-update -r 13.0-RELEASE upgrade again so that I can get userland to
13.0, in the hope that, in that configuration, I'll be able to finish the
update and then maybe restore the missing data from backups.
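A minimal sketch of that workaround, assuming the HDD pool is named zhdd (the name is hypothetical):

# zfs create -o mountpoint=/var zhdd/var-tmp   # temporary /var dataset on the HDD pool
# freebsd-update -r 13.0-RELEASE upgrade       # retry the upgrade
# freebsd-update install                       # and finish installing userland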
Thanks for any ideas and suggestions, much appreciated.
Scott
From what you and the others say, what the system is missing seems not to be an "SSD-based" pool, but
just a pool (or pools) other than the one it boots from. And a manual zpool import of a missing pool just works
(a zpool.cache format change?). So import it and proceed with `freebsd-update install` as usual.
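Something along these lines should work (zssd being the pool name from your description):

# zpool import              # list pools that are visible but not imported
# zpool import zssd         # import the missing pool by name
# freebsd-update install    # then continue the upgrade as usual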
Also, it is probably worth noting the change below:
[***@myhost:~/freebsd/git/src]{main}$ git log -U -1 --grep zpool.cache
commit a784185078e566103b7f8abffc7c0a4a1e813eb1
Author: Cy Schubert <***@FreeBSD.org>
Date: Thu Aug 27 14:33:46 2020 +0000
/etc/zfs/zpool.cache is the preferred (and new) location of zpool.cache.
Check for it first. Only use /boot/zfs/zpool.cache if the /etc/zfs
version is not found and good.
Reported by: avg
Suggested by: avg, kevans
Notes:
svn path=/head/; revision=364867
diff --git a/libexec/rc/rc.d/zpool b/libexec/rc/rc.d/zpool
index 01028f8633ea..8aab58080a0a 100755
--- a/libexec/rc/rc.d/zpool
+++ b/libexec/rc/rc.d/zpool
@@ -20,9 +20,9 @@ zpool_start()
{
local cachefile
- for cachefile in /boot/zfs/zpool.cache /etc/zfs/zpool.cache; do
+ for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
if [ -r $cachefile ]; then
- zpool import -c $cachefile -a -N
+ zpool import -c $cachefile -a -N && break
fi
done
}
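If the new cache location is missing or stale, re-setting the cachefile property on an imported pool should rewrite it (a sketch, again using zssd as the pool name):

# zpool import zssd
# zpool set cachefile=/etc/zfs/zpool.cache zssd   # regenerate the cache at the new preferred path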
As I just practised on a VM, the /etc/zfs/zpool.cache installed by the update seems to be a copy
of the prior /boot/zfs/zpool.cache. But they differ slightly; this is what a diff of zdb dumps of both
shows (however, this is after the full update; I missed the step of comparing before the userland update).
$ diff -u zdb.0 zdb.1
--- zdb.0 2021-04-15 20:04:59.296702000 +0200
+++ zdb.1 2021-04-15 20:05:01.421878000 +0200
@@ -2,10 +2,11 @@
version: 5000
name: 'z2'
state: 0
- txg: 4
+ txg: 165
pool_guid: 2328267395261012403
+ errata: 0
hostid: 1467704166
- hostname: ''
+ hostname: 'v8'
com.delphix:has_per_vdev_zaps
vdev_children: 1
vdev_tree:
@@ -47,9 +48,9 @@
version: 5000
name: 'zroot'
state: 0
- txg: 2172637
+ txg: 2175239
pool_guid: 686285197683857337
- hostid: 1467704166
+ errata: 0
hostname: 'v8'
com.delphix:has_per_vdev_zaps
vdev_children: 1
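For reference, dumps like zdb.0 and zdb.1 above can be produced from the two cache files roughly like this (an assumption about how they were generated; -C prints the cached pool configuration, -U points zdb at an alternate cache file):

$ zdb -C -U /boot/zfs/zpool.cache > zdb.0   # old cache location
$ zdb -C -U /etc/zfs/zpool.cache > zdb.1    # new cache location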
Anyway, thank you guys for being early adopters and giving me very useful advance warning about what is going
to happen when I start upgrading my production systems :-).