Mind the encryptionroot: How to save your data when ZFS loses its mind
sambowman.tech
146 points by 6581 a day ago
Nice write up and website. I should snapshot my empty root!
If I'm not wrong, at least some of those sharp edges have been resolved. There was a famous, very-hard-to-reproduce bug that caused problems with ZFS send/receive of encrypted snapshots once in a blue moon; it was hunted down and fixed recently.
Still, ZFS needs better tooling. The user has two keys and an encrypted dataset, doesn't care what the encryption root is, and should be able to decrypt. ZFS should send all the information required to decrypt.
The code for ZFS encryption hasn’t been updated since the original developer left, last I checked.
In my view, in this case, you could say ZFS nearly lost data: it ties dataset settings together within a pool but doesn't send the settings needed to reproduce them when one dataset is replicated. The user is clearly knowledgeable about ZFS and still almost lost data.
Why ZFS freaking out is accepted as "normal" in a dev environment is beyond me. I have used Storage Spaces daily in production and dev environments for nearly 10 years now, and with only marginal use of PowerShell I have been able to restore every array I didn't destroy intentionally. This is the bare minimum I expect out of a redundant array of any type, regardless of its speed or scalability promises.
It's not accepted and it's not normal.
This is a case of the user changing a password setting and realizing they can't use it with old backups after accidentally destroying one dataset. ZFS is intended for servers and sysadmins, so it is not as friendly as some may expect, but it did not lose anything the user did not destroy. The author had to use logic to deduce what he did and walk it back.
> changing a password setting and realizing they can't use it with old backups
That's unfair to the author. The backups were new, post-password change. And neither old nor new password worked on them. The thing that was old was an otherwise empty container dataset.
This is a case where both sides are completely understandable and no one did anything wrong. ZFS didn't lose its mind. It worked as designed and intended. The author didn't know a critical detail about the implementation. It's a series of unfortunate events. The only failure could be lack of better ZFS documentation.
What ZFS did is understandable but wrong. Sending an incremental snapshot needs to send updates to the encryption parameters, even if they're inherited from another dataset.
I'm not sure anybody is wrong or right. But this should be officially documented, with a specific error provided (not "permission denied") and a workflow to fix it that doesn't involve patching the driver.
Would the author (or most people) have read the documentation before doing this action? I doubt it.
Let's all agree that sending incremental data to a dataset whose settings were changed, without any error, is a bug.
The user has their encrypted data and two encryption keys. They should be able to decrypt. They don’t care about internal ZFS password settings.
Also, people generally snapshot their data, which usually lives in child datasets. If you don't care about an empty parent dataset, snapshotting and replicating it on a careful schedule is not something you'd expect to have to do.
OpenZFS has worked fine for me, in mirror mode, for 15 years without anything resembling data loss.
When I had to replace HDDs, the operations were very smooth. I don't mess with ZFS all that often; I rely on the documentation. I must say that IMO the CLI is a breath of fresh air compared to the other options we had in the past (ext3/ext4, ReiserFS, XFS, etc.). Now BTRFS might be easier to work with, I can't tell.
BTW, this bug is well known amongst OpenZFS users. There are quite a few posts about it.
And you can also do so with ZFS. OP hit a weird issue that normal usage will never run into.
One that should not exist, of course, but certainly not a normal one.
> Lesson: Test backups continuously so you get immediate feedback when they break.
This is a very old lesson that should have been learned by now :)
But yeah the rest of the points are interesting.
FWIW I rarely use ZFS native encryption. Practically always I use it on top of cryptsetup (which is a frontend for LUKS) on Linux, and GELI on FreeBSD. It's a practice from the days before ZFS supported encryption, and these days I just keep doing what I know.
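Roughly, that layering looks like this (device and pool names are placeholders; adapt before running):

    # create and unlock a LUKS container on the raw disk
    cryptsetup luksFormat /dev/sdX
    cryptsetup open /dev/sdX cryptdisk

    # build the pool on the unlocked mapper device; compression still
    # happens inside ZFS, i.e. before the data reaches the encrypted layer
    zpool create -o ashift=12 tank /dev/mapper/cryptdisk
    zfs set compression=zstd tank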
ZFS encryption is much more space efficient than dmcrypt+unencrypted ZFS when combined with zstd compression. This is because it can do compress-then-encrypt instead of encrypt-then-(not-really-)compress. It is also much much faster.
Source: I work for a backup company that uses ZFS a lot.
Can you explain this in more detail? It doesn't seem true on a first glance.
If you enable compression on ZFS that runs on top of dmcrypt volume, it will naturally happen before encryption (since dmcrypt is the lower layer). It's also unclear how it could be much faster, since dmcrypt generally is bottlenecked on AES-NI computation (https://blog.cloudflare.com/speeding-up-linux-disk-encryptio...), which ZFS has to do too.
Oh my bad. I misread your comment. You are doing ZFS on top of dmcrypt, not dmcrypt images/volumes on top of ZFS.
Using any file system that supports compression on top of LUKS does compression before encryption
I don't use compression anyway. I don't like the way that the storage pool capacity becomes variable then.
I don't understand. You don't like that some things compress better than others, saving a variable amount of space?
I use native ZFS encryption because it makes it super easy to share encrypted datasets across dual-booted operating systems. AFAIK Linux does not support GELI and FreeBSD does not support LUKS. DragonflyBSD supports LUKS but then no ZFS.
Also, that way I can have Linux and FreeBSD living on the same pool, seamlessly sharing my free space, without losing the ability to use encryption. Doing both LUKS and GELI would require partitioning and giving each OS its own pool.
I really love ZFS native encryption, but this is the big problem with it. I use ZFS raw sends to store my backups incrementally in a cloud I trust, but not enough to have raw access to my files. ZFS has great attributes there, theoretically: I can send delta updates of my filesystems, and the receiver never has the keys to decrypt them.
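For the curious, the basic shape of that workflow is roughly this (host and dataset names made up):

    zfs snapshot tank/data@2024-06-01
    # -w/--raw sends the blocks still encrypted; the receiver never needs the key
    zfs send -w tank/data@2024-06-01 | ssh backuphost zfs receive -u pool/backups/data

    # later, an incremental raw send of just the delta
    zfs snapshot tank/data@2024-07-01
    zfs send -w -i tank/data@2024-06-01 tank/data@2024-07-01 | ssh backuphost zfs receive -u pool/backups/data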
I've used this in practice for many years (since 2020), and aside from encountering exactly this issue (though thankfully I did have a bookmark already in place), it's worked great. I've tested restores from these snapshots fairly regularly (roughly quarterly), and only once had an issue related to a migration: I moved the source from one disk to another. This can have some negative effects on encryptionroots, which I was able to solve... But I really, really wish that ZFS tooling had better answers to it, such as being able to explicitly create and break these associations.
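FWIW, the knobs that do exist today are roughly these (untested sketch, dataset names made up):

    # see which dataset actually holds the wrapping key, and whether it's loaded
    zfs get -r encryptionroot,keystatus pool/backups

    # make a dataset inherit its parent's key (moves its encryptionroot up the tree)
    zfs change-key -i pool/backups/data

    # or give it its own key, making it its own encryptionroot
    zfs change-key -o keyformat=passphrase pool/backups/data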
Yeah I use different methods for that. I considered using zfs send/receive for backups, however there's one big issue with that: every time you need one or two files from the backup you need to restore the whole filesystem. There's no official way to retrieve a single file from a zfs send stream.
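The best you can do is receive the whole stream into a scratch dataset and copy the file out, roughly (paths illustrative):

    # receive the full stream, unmounted, into a throwaway dataset
    zfs receive -u tank/restore < backup-stream.zfs
    zfs set mountpoint=/mnt/restore tank/restore
    zfs mount tank/restore
    cp /mnt/restore/path/to/one-file ~/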
For backup purposes I also greatly prefer file by file encryption because one corruption will only break one file and not the whole backup.
What I do now is encrypt with encfs and store on an S3 Glacier-style service.
I've never had to restore a single file that's older than my local snapshots; the restores where I've needed an old subset have been a substantial enough subset that 4-5x-ing the data size on restore was not really an issue.
I kinda agree with your point on file-by-file encryption, but ZFS's general integrity features are such that I'm not really worried, except about this article's specific failure mode, which is pretty easy to deal with or avoid when you know about it, but is a substantial deficiency.
It's hard to write a completely automated backup test that's also pretty thorough. Yeah, it would have caught "completely unmountable", but there are a lot of other problems that a basic script has little hope of catching.
I do manual backup checks, and so did the author, but those are going to be limited in number.
> I very nearly permanently lost 8.5 TiB of data after performing what should've been a series of simple, routine ZFS operations but resulted in an undecryptable dataset. Time has healed the wound enough that I am no longer filled with anguish just thinking about it, so I will now share my experience in the hope that you may learn from my mistakes.
As a zfs user employing encryption, that read like a horror story. Great read, and thanks for the takeaway.
I've used zfs and btrfs and while I haven't quite lost data, I have also hit some unnerving pitfalls / sharp edges that have confirmed that I should keep at least one copy using just LUKS + ext4. I like the features but I think the more complicated filesystems bring about other kind of risks.
I am not sure if this is the correct place, but pardon me: I was once trying to remove the LUKS encryption key, and I searched on Stack Overflow thinking I was going to figure this out myself...
The first thing on Stack Overflow permanently made the data recoverable, and it was only in the comments that people mentioned this...
All my project data and whatnot got lost because of it, and that taught me the lesson to actually read the whole thing.
I sometimes wonder if using AI would've made any difference or would it have even mattered because I didn't want to use AI and that's why I went to stackoverflow lol... But at the same point, AI makes hallucinations too but it was a good reality check for me as well to always read the whole thing before running commands.
> I sometimes wonder if using AI would've made any difference or would it have even mattered because I didn't want to use AI and that's why I went to stackoverflow lol
AI is trained on stackoverflow and much, much worse support forums. At least SO has the comments below bad advice to warn others, AI will just say "Oops, you're entirely right, I made a mistake and now your data is permanently gone".
Oh yes, I forgot to tell the aftermath. The funny thing is that I actually went to AI after making it unrecoverable, and it said don't worry, it can be fixed, and gave me commands that gave me hope but did nothing, and it never honestly admitted it the way those comments did.
In the end I just asked it to wipe the drive clean so that I could at least use my HDD, which was now in a state of limbo, and it couldn't even do that.
I was just wondering in my comment whether it would have originally given me a different command, but chances are it would have gaslit me rather than give me the right command lol.
Did you mean "unrecoverable"? I first read your comment as "ok, the solution is trivially easy so the article is unnecessary", but the rest of your comment implies the opposite.
My data did become unrecoverable after running the command that was shown first; I didn't scroll or read more about that command, I just ran it, and it made the data unrecoverable.
So yes it got unrecoverable.
And then I just wiped that drive by flashing NixOS onto it and trying that for a while, so maybe there is good in every bad, and I definitely learned to always be cautious about what commands you run.
AI has told me to do things that would have made my system not bootable. You want a human in the loop for these types of things.
Would it not have been easier to just mount the destroyed old pool or recover the dataset from the history ring buffer?
zpool import -D
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-im...
I haven't tried this, but I gather from the blog post that it would have been much simpler as it didn't require any of the encryption stuff.
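Assuming the pool really had been destroyed, I believe it would go roughly like this (pool name hypothetical):

    # list destroyed pools that are still importable
    zpool import -D

    # bring one of them back, read-only first if you're nervous
    zpool import -D -o readonly=on tank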
There wasn't a destroyed pool, it's the harder version of trying to rewind time on the filesystem. It's worth trying once the disks are fully backed up, but it's fussy enough that I can understand why they made it plan B.
Thanks for validating my choice to not use raw send/recv. I know not everyone can avoid it, but it also seemed to be a bit prone to this kind of issue.
This is one hell of a bad day, I am impressed at how they were able to solve this.
This all seems unbelievably more complicated and prone to failure than just doing luks over mdadm. You could just skip this weird, arcane process by imaging the disks, walking them to where they needed to be, then slapping them into the other machine and mounting them as normal.
I do not understand making RAID and encryption so very hard, and then using some NAS-in-a-box distribution, like an admission that you don't have the skills to handle it. A lot of people are using ZFS and "native encryption" on Archlinux (not in this case) when they should just be using mdadm and LUKS on Debian stable. It's like they're overcomplicating things in order to be able to drop trendy brand names around other nerds, then often dramatically denouncing those brand names when everything goes wrong for them.
If you don't have any special needs, and you don't know what you're doing, just do it the simple way. This all just seems horrific. I've got >15 year old mdadm+luks arrays that have none of their original disks, are 5x their original disk size, have survived plenty of failures, and aren't in their original machines. It's not hard, and dealing with them is not constantly evolving.
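For reference, the whole thing is roughly four commands (device names are placeholders):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    cryptsetup luksFormat /dev/md0
    cryptsetup open /dev/md0 cryptarray
    mkfs.ext4 /dev/mapper/cryptarray
    # growing later is mdadm --grow, then cryptsetup resize, then resize2fs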
Reading this gives me childhood anxiety from when I compressed my dad's PC with a BBS-pirated copy of Stacker so I would have more space for pirated Sierra games; it errored out before finishing, and everything was inaccessible. I spent from dusk to dawn trying to figure out how to fix it (before the internet, but I was pretty good at DOS) and I still don't know how I managed it. I thought I was doomed. Ran like a dream afterwards and he never found out.
There are very real reasons to use ZFS instead of the oldschool Linux block device sandwich. mdadm+luks+lvm still do not quite provide the same set of features that ZFS alone does even without encryption. Namely in-line compression, and data checksumming, not to mention free snapshots.
ZFS is quite mature, the feature discussed in the article is not. As others have pointed out this could have been avoided by running ZFS on top of luks and would have hardly sacrificed any functionality.
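Concretely, those features boil down to a handful of commands (dataset names illustrative):

    zfs set compression=zstd tank/data      # in-line compression
    zpool scrub tank                        # re-verify every block against its checksum
    zpool status -v tank                    # report any corruption the scrub found
    zfs snapshot tank/data@before-upgrade   # instant, initially zero-cost snapshot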
> mdadm+luks+lvm still do not quite provide the same set of features that ZFS alone does even without encryption. Namely in-line compression, and data checksumming, not to mention free snapshots.
Sure, but LUKS+ZFS provides all that too, and also encrypts everything (ZFS encryption, surprisingly, does not encrypt metadata).
As this article demonstrates, encryption really is an afterthought with ZFS. Just as ZFS rethought from first principles what storage requires and ended up making some great decisions, someone needs to rethink from first principles what secure storage requires.
> Namely in-line compression, and data checksumming, not to mention free snapshots.
You get these for free with btrfs
It's a little weird to denounce the "block device sandwich" and then say that they should have used... a variation of the block device sandwich.
> There are very real reasons to use ZFS
I feel like, for the types of person GP is talking about, they likely don't really need to use ZFS, and luks+md+lvm would be just fine for them.
Like the GP, I have such a setup that's been in operation for 15-20 years now, with none of the original disks, probably 4 or 5 full disk swaps, starting out as a 4x 500GB array, which is now a 5x 8TB array. It's worked perfectly fine, and the only times I've come close to losing data is when I have done something truly stupid (that is, directly and intentionally ignored the advice of many online tutorials)... and even then, I still have all my data.
Honestly the only thing missing that I wish I had was data checksumming, and even then... eh.
Run enough disks long enough and you'll find one that starts returning garbage while telling the OS everything is ok.
First time I had it happen was on a hardware raid device and a company lost 2 and a half days worth of data as any backups from when it started had bad data.
The next time I had it happen is using ZFS and we saw a flood of checksum errors and replaced the disk. Even after that SMART thought it was perfectly fine and you could send commands to it, you just got garbage back.
How do you know you’ve lost no data? Do you checksum all your files? Bits gonna rot.
> I do not understand making RAID and encryption so very hard,
I don't use ZFS-native encryption, so I won't speak to that, but in what way is RAID hard? You just `zpool create` with the topology and devices and it works. In fact,
> If you don't have any special needs, and you don't know what you're doing, just do it the simple way. This all just seems horrific. I've got >15 year old mdadm+luks arrays that have none of their original disks, are 5x their original disk size, have survived plenty of failures, and aren't in their original machines. It's not hard, and dealing with them is not constantly evolving.
I would write almost this exact thing, but with ZFS. It's simple, it's easy, it just keeps going through disk replacements and migrations.
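Day to day it really is about this much (device names hypothetical):

    zpool create tank mirror /dev/sda /dev/sdb
    # swap a failing disk; ZFS resilvers only the blocks actually in use
    zpool replace tank /dev/sda /dev/sdc
    zpool status tank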
I like these writeups. People like to overly complicate their lives, but why? Does this give them wings?
Encryption keys for backups make me nervous. We use restic (password required), and also rsync (no password) for high-priority drives.