When I started my career in IT as a data center technician / low-level systems admin, I saw more than once that during a RAID-5 disk replacement, while the array was rebuilding, another drive gave up the ghost. And guess what happens when you lose two drives concurrently in a RAID-5? *makes explosion sound* So yeah, the same logic applies here.

Answer from DeputyCartman on reddit.com
🌐
TrueNAS Community
truenas.com › forums › community discussion › community forum › off-topic
What's the stigma behind using RAIDZ1? | TrueNAS Community
July 9, 2023 - Mirrors have a much better risk profile as the number of drives goes up. Take, for example, a 6-drive RAIDZ1 vs. 6-drive striped mirrors. In a 6-drive RAIDZ1 you will lose your pool as soon as a second drive goes bad, while 6-drive striped mirrors can survive up to 3 drive failures as long as no two failed drives are in the same mirror vdev.
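A quick way to sanity-check that claim is to enumerate the two-drive failure combinations. The following is a minimal Python sketch (not from the thread), assuming the striped-mirror pool is laid out as three 2-way mirror vdevs pairing drives 0+1, 2+3 and 4+5:

    # Minimal sketch: which 2-drive failures are fatal for a 6-drive RAIDZ1
    # vs. three 2-way mirror vdevs (assumed layout)?
    from itertools import combinations

    drives = range(6)
    mirror_vdevs = [{0, 1}, {2, 3}, {4, 5}]  # hypothetical drive pairing

    total = 0          # every 2-drive failure kills a RAIDZ1 (only 1 parity)
    fatal_mirrors = 0  # mirrors only die if both halves of one vdev fail

    for failed in combinations(drives, 2):
        total += 1
        if any(vdev <= set(failed) for vdev in mirror_vdevs):
            fatal_mirrors += 1

    print(f"2-drive failure combos: {total}")              # 15
    print(f"fatal for 6-drive RAIDZ1: {total}")            # all 15
    print(f"fatal for striped mirrors: {fatal_mirrors}")   # only 3

With that layout, only 3 of the 15 possible second failures destroy the mirror pool, while every one of them destroys the RAIDZ1, and the mirrors can ride out up to three failures if each lands in a different vdev.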
🌐
Reddit
reddit.com › r/zfs › why it makes sense to use raidz2 instead of raidz
r/zfs on Reddit: Why it makes sense to use RAIDZ2 instead of RAIDZ
Except no. No no no. It's the reason why I honestly would never touch RAID5, but RAIDZ is fine. There's a massive difference between RAID5/6 and RAIDZ1/Z2. A URE during a RAID5 rebuild (or a RAID6 rebuild that has already lost a second disk) typically means the whole array is gone. A URE during a RAIDZ1 or RAIDZ2 rebuild means you lose a file or two. Those are different universes in terms of impact. And it's also why you have backups.
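The scale of the URE concern is easy to estimate with back-of-the-envelope arithmetic. The sketch below is illustrative only; the URE rate, drive size and array width are assumptions, not figures from the thread:

    # Rough estimate: probability of hitting at least one URE while reading
    # all surviving disks in full during a rebuild/resilver.
    ure_rate_bits = 1e-14      # assumed spec: 1 URE per 10^14 bits read
    disk_tb = 12               # assumed drive size in TB
    surviving_disks = 5        # e.g. rebuilding a 6-wide single-parity array

    bits_to_read = surviving_disks * disk_tb * 1e12 * 8
    p_at_least_one = 1 - (1 - ure_rate_bits) ** bits_to_read
    print(f"P(>=1 URE during rebuild) ~ {p_at_least_one:.0%}")   # roughly 99%

Whatever the exact number, the quote's point is the difference in consequence: classic RAID5 typically treats that read error as fatal to the rebuild, while RAIDZ reports it as damage to specific files and finishes the resilver.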
Videos
🌐
YouTube
ElectronicsWizardry: Taking a look at RAIDZ expansion - YouTube
December 2, 2024
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
The problem with RAIDZ or why you probably won't get the storage efficiency you think you will get | Proxmox Support Forum
February 26, 2024 - The problem with RAIDZ1 is that it has the bare minimum of fault tolerance and is one drive failure away from operating without a safety net. Any "advantages" it has are effectively overshadowed.
🌐
TrueNAS Community
truenas.com › forums › truenas core › general discussion
The problem with RAIDZ | TrueNAS Community
December 13, 2023 - 25% storage efficiency (expected ...) and shows what happens if the volblocksize is bigger than the file size we write. Storage efficiency is very bad; we get I/O amplification and fragmentation....
🌐
Server Fault
serverfault.com › questions › 532272 › what-are-the-performance-implications-of-running-vms-on-a-zfs-host
linux - What are the performance implications of running VMs on a ZFS host? - Server Fault

Since ZFS works at the block level, the size of the files makes no difference. ZFS requires more memory and CPU, but it is not inherently significantly slower as a filesystem. You do need to be aware, though, that RAIDZ is not equivalent in speed to RAID5. RAID10 is fine where speed is a priority.

Answer from JamesRyan on serverfault.com
🌐
Server Fault
serverfault.com › questions › 634197 › zfs-is-raidz-1-really-that-bad
truenas - ZFS - Is RAIDZ-1 really that bad? - Server Fault

Before we go into specifics, consider your use case. Are you storing photos, MP3s and DVD rips? If so, you might not care whether you permanently lose a single block from the array. On the other hand, if it's important data, this might be a disaster.

The statement that RAIDZ-1 is "not good enough for real-world failures" comes from the fact that you are likely to hit a latent media error on one of your surviving disks when reconstruction time comes. The same logic applies to RAID5.

ZFS mitigates this failure to some extent. If a RAID5 device can't be reconstructed, you are pretty much out of luck; copy your (remaining) data off and rebuild from scratch. ZFS, on the other hand, will reconstruct everything except the bad chunk and let the administrator "clear" the errors. You'll lose a file, or a portion of a file, but you won't lose the entire array. And, of course, ZFS's block checksums mean that you will be reliably informed that there's an error. Otherwise, I believe it's possible (although unlikely) that multiple errors will result in a rebuild apparently succeeding but giving you back bad data.

Since ZFS is a "Rampant Layering Violation," it also knows which areas don't have data on them and can skip them during the rebuild. So if your array is half empty, you're half as likely to hit a rebuild error.

You can reduce the likelihood of these kinds of rebuild errors on any RAID level by doing regular "zpool scrubs" or "mdadm checks" of your array. There are similar commands/processes for other RAID systems; e.g., LSI/Dell PERC RAID cards call this "patrol read." These read everything, which may help the disk drives find failing sectors and reassign them before they become permanent. If they are permanent, the RAID system (ZFS/md/RAID card/whatever) can rebuild the data from parity.

Even if you use RAIDZ2 or RAID6, regular scrubs are important.

One final note - RAID of any sort is not a substitute for backups; it won't protect you against accidental deletion, ransomware, etc. Regular ZFS snapshots can, however, be part of a backup strategy.

Answer from Dan Pritts on serverfault.com
🌐
Reddit
reddit.com › r/proxmox › is raid-z2 really this bad?
r/Proxmox on Reddit: Is Raid-Z2 really this bad?
Check out this parity cost table (switch to RAIDZ2 in the bottom tabs): https://docs.google.com/a/delphix.com/spreadsheets/d/1tf4qx1aMJp8Lo_R6gpT689wTjHv6CGVElrPqTA0w_ZY/edit?pli=1#gid=2126998674 From this article: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz For RAIDZ2 with 8 disks, notice that if your volblocksize is set to the default of 8KiB (2x 4096-byte sectors), the overhead of parity and padding hits the maximum of 200%, i.e. to store 16TiB you need 48TiB of raw disk space. This is very bad. If instead you set volblocksize to 4 sectors (16KiB), overhead drops to only 50%, so 16TiB requires 24TiB to store.
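Those figures can be reproduced with the allocation rule described in the linked Delphix article. A minimal sketch, assuming 4KiB sectors (ashift=12) and an 8-wide RAIDZ2, where each block gets parity sectors per stripe and the whole allocation is padded to a multiple of (parity + 1) sectors:

    # Sketch of RAIDZ allocation arithmetic (assumed: ashift=12 -> 4KiB sectors).
    from math import ceil

    def raidz_alloc_sectors(data_sectors, width, parity):
        """Sectors allocated for one block on a RAIDZ vdev (simplified)."""
        parity_sectors = parity * ceil(data_sectors / (width - parity))
        total = data_sectors + parity_sectors
        pad_to = parity + 1                    # allocations are rounded up to a
        return ceil(total / pad_to) * pad_to   # multiple of (parity + 1) sectors

    sector = 4096
    for volblocksize in (8 * 1024, 16 * 1024):
        data = volblocksize // sector
        alloc = raidz_alloc_sectors(data, width=8, parity=2)
        overhead = (alloc - data) / data
        print(f"volblocksize={volblocksize // 1024}KiB: {data} data sectors -> "
              f"{alloc} allocated, overhead {overhead:.0%}")

This prints 200% overhead for an 8KiB volblocksize and 50% for 16KiB, matching the spreadsheet for that particular geometry; real pools add metadata and compression effects on top.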
🌐
Reddit
reddit.com › r/zfs › raidz1 vs mirror on nvme
r/zfs on Reddit: RAIDZ1 vs mirror on NVMe
Is RAIDZ1 going to somehow tank performance as opposed to mirroring or is ZFS fast enough to saturate the disks in either config?

You shouldn't expect to get the maximum rated speed out of those NVMe disks with ZFS either way. At SATA SSD speeds, the extra housekeeping ZFS does isn't a scale problem, even with itty bitty (x86) CPUs. At NVMe speeds, it starts being a factor. Beyond that... yes, you will see performance differences between RAIDz1 and mirrors, even on NVMe. Particularly when you're running VMs, which usually means small-blocksize operations. That's just not what striped parity RAID is good at.

Keep in mind that the real killer stat here isn't throughput, anyway, it's latency. When you want to write a 64K record to mirrors, it goes in 64K chunks, on each disk in the vdev. When you want to write the same record to a 3-disk RAIDz1, it gets split up into two data chunks and one parity chunk. That's actually not too bad on a 3-disk RAIDz1, because you're still talking about 32K chunks, and you don't have any padding needed since 32K is evenly divisible by either 4K or 8K sector sizes. Still, now you're doing 32K ops instead of 64K ops, and you need to complete those ops on three separate devices instead of two before the op is complete. You're also tying up two of those three devices with every 64K read, where the mirror would only need to query one, and, again, you're waiting for both devices to return data before the read is complete, where you were only waiting on one device with the mirror.

It's entirely possible that the NVMe will be fast enough to satisfy you either way. But there will absolutely be a difference in performance.
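The chunk arithmetic in that answer works out as follows. A minimal sketch, assuming a 64KiB record, 4KiB sectors, a 2-way mirror and a 3-disk RAIDZ1 (the layouts the quote uses):

    # How one 64K record lands on a mirror vs. a 3-disk RAIDZ1
    # (assumed: single-record write, 4KiB sectors, no compression).
    record_kib = 64

    # 2-way mirror: the whole record is written to every disk in the vdev.
    print(f"mirror: {record_kib}K written to each of 2 disks")

    # 3-disk RAIDZ1: the record is split across (width - parity) data disks,
    # plus one equally sized parity chunk.
    width, parity = 3, 1
    chunk_kib = record_kib // (width - parity)   # 32K per data disk
    print(f"raidz1: {chunk_kib}K data on each of {width - parity} disks "
          f"+ {chunk_kib}K parity on 1 disk")

The latency argument follows directly: the RAIDZ1 write is not complete until all three smaller ops finish, and a read must wait on two devices, whereas the mirror can serve a read from a single disk.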
🌐
Reddit
reddit.com › r/proxmox › cons to using zfs pool as vm storage?
r/Proxmox on Reddit: Cons to using ZFS pool as VM storage?
There shouldn't be a problem. Are you only going to have one Proxmox node or are you planning on adding others? What is your expansion plan if you wanted to add more storage? Standard raid isn't better IMO, I'd go with ZFS every time unless the platform doesn't support it.
🌐
Reddit
reddit.com › r/sysadmin › zfs mirror vs raidz2 for active vm storage
r/sysadmin on Reddit: ZFS Mirror vs RAIDZ2 for active VM Storage
SLOG helps you when writes are bursty, smoothing out the load so that the performance required from the disks is your average IOPS, not the peak IOPS. If that is your access pattern, that will work, but if the VMs generate a continuous stream of write operations or there isn't enough time between reads to actually get a few writes done, then at some point you will hit a wall.

RAIDZ is good for VM storage if the VMs are similar and have low write load, so it is popular with VPS providers: deduplication means that all the servers running the same distribution share the disk space and, more important, the in-memory cache, so this allows you to overcommit a lot. The downside of ZFS is indirection: you no longer have a linear relationship between the position a VM thinks the data is at and the place it actually is at, and this layer of indirection causes additional IOPS to be generated internally.

Which approach is better depends on your actual usage pattern: are these proper VMs that see block devices and run their own file system, or are these containers that use the ZFS on the host? Will these be lots of VMs that are largely duplicates, so deduplication gains more than it loses? Are typical IOPS large or small? What is the ratio of reads to writes?

In a lot of setups, a classic RAID card with support for separate queues in Virtual Functions and battery-backed memory can be a good choice: the VM has low-overhead access to the disks with PCIe passthrough devices, and even dependent writes (like journal updates) can be reordered because the data is considered committed once it has reached the cache, and since everything is based on a few linear mappings, there is no additional administrative overhead for the VM host.
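The "average IOPS, not peak IOPS" point lends itself to a toy model. The numbers below are invented purely to illustrate the wall the poster describes; they are not measurements:

    # Toy model: a log device absorbs bursts, but the backing disks must
    # still keep up with the *average* write rate or a backlog builds up.
    def backlog_after(per_second_writes, disk_iops):
        backlog = 0
        for arriving in per_second_writes:
            backlog = max(0, backlog + arriving - disk_iops)
        return backlog

    bursty = [5000, 0, 0, 0, 5000, 0, 0, 0]   # avg 1250 IOPS, peak 5000
    steady = [3000] * 8                       # avg 3000 IOPS, no idle gaps
    for name, pattern in (("bursty", bursty), ("steady", steady)):
        print(f"{name}: backlog after 8s on 2000-IOPS disks = "
              f"{backlog_after(pattern, disk_iops=2000)} ops")

With 2000-IOPS disks, the bursty pattern drains back to zero between spikes, while the steady 3000-IOPS stream leaves an ever-growing backlog, which is the wall described above.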
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox backup server › proxmox backup: installation and configuration
[SOLVED] - Hardware Raid or ZFS | Proxmox Support Forum
January 2, 2025 - My hypothesis: ZFS is hindered by its cache and all the associated overhead for a continuous stream of files that are referenced only once during a job. I copied a .chunks directory over the network to confirm the benchmark results. To rule out the impact of encryption, I used rsync. The PBS server was configured with the rsync daemon and the PVE host pulled the .chunks directory from PBS, writing it to the VM datastore (a pair of mirrored PCIe 4.0 NVMe drives).
🌐
Reddit
reddit.com › r/proxmox › zfs raid vs raidz
r/Proxmox on Reddit: ZFS Raid vs RAIDZ
For VMs I would vote to create the equivalent of a RAID10 setup for the 4 drives. That's two sets of mirrors as part of the same pool. It should give you better performance since you're not doing parity calculations. That said, RAIDZ1 will give you more space, and maybe enough performance? Depends on what you value more.
🌐
RAIDZ Calculator
raidz-calculator.com › raidz-types-reference.aspx
RAIDZ Types Reference
RAIDZ levels reference covers various aspects and tradeoffs of the different RAIDZ levels.
🌐
TrueNAS Community
forums.truenas.com › resources
The problem with RAIDZ - Resources - TrueNAS Community Forums
April 16, 2024 - This resource was originally created by user Jamberry on the TrueNAS Community Forums Archive. The problem with RAIDZ, or why you probably won't get the storage efficiency you think you will get. Work in progress, probably contains ...
🌐
Reddit
reddit.com › r/datahoarder › you should use mirror vdevs, not raidz.
r/DataHoarder on Reddit: You should use mirror vdevs, not RAIDZ.
May 7, 2014 - RAID is not a backup, and if you persist in thinking of it as "what keeps my data safe" rather than "what keeps my data more available, more performant, and in larger-capacity volumes", you're going to lose that data. Sooner rather than later. Whether you have mirrors, a single RAIDZ vdev, multiple RAIDZ vdevs, hot spares, or anything else, a single bad disk controller can destroy your pool in seconds.
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
ZFS RAIDZ Pool tied with VM disks acts strange | Proxmox Support Forum
January 29, 2025 - (d)RAIDZ1/2/3 is also often disappointing for running VMs on, as people expect hardware RAID5/6 performance, but due to the padding, checksums and additional features of ZFS it delivers far fewer IOPS.
🌐
JRS Systems
jrs-s.net › 2015 › 02 › 06 › zfs-you-should-use-mirror-vdevs-not-raidz
ZFS: You should use mirror vdevs, not RAIDZ. – JRS Systems: the blog
Keep in mind that if any single vdev fails, the entire pool fails with it. There is no fault tolerance at the pool level, only at the individual vdev level! So if you create a pool with single disk vdevs, any failure will bring the whole pool down. It may be tempting to go for that big storage number and use RAIDZ1…
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
[TUTORIAL] - FabU: Can I use ZFS RaidZ for my VMs? | Proxmox Support Forum
January 1, 2025 - Let us compare that RaidZ2 with six devices: (3a) the RaidZ2 will give us the performance of a single drive and the usable capacity of four drives. Any two drives may fail. (3b) the two vdevs with triple mirrors give us the IOPS of two drives for writing data plus sixfold read performance! Any two drives of each vdev may fail! (So up to four drives may die, but only in a specific selection.) (4) Capacity: the only downside of (3b) is that the capacity shrinks down to two drives. Recommendation: for VM storage use a mirrored vdev approach.
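The head-to-head in that tutorial can be put into a small table. A minimal sketch using the same rule-of-thumb numbers as the post (six equal drives; a RAIDZ vdev counted as roughly one drive's worth of IOPS, each mirror vdev as one unit of write IOPS and one unit per member for reads):

    # Rule-of-thumb comparison of six drives as RAIDZ2 vs. 2x 3-way mirrors.
    # Values mirror the post's reasoning; real performance depends on workload.
    layouts = {
        #  name:                    (usable drives, write x, read x, any-N-may-fail)
        "RAIDZ2, 6 wide":           (4, 1, 1, 2),
        "2 vdevs of 3-way mirrors": (2, 2, 6, 2),
    }
    for name, (cap, w, r, n) in layouts.items():
        print(f"{name:26} capacity ~{cap} drives, write ~{w}x, "
              f"read ~{r}x, any {n} drives may fail")

As the post notes, the mirror layout can actually lose up to four drives, but only if no vdev loses all three of its members.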
🌐
Calomel
calomel.org › zfs_raid_speed_capacity.html
ZFS Raidz Performance, Capacity and Integrity Comparison @ Calomel.org
Because parity needs to be calculated, RAID 5 is slower than RAID 0, but RAID 5 is much safer. RAID 5 requires at least three hard disks, of which one (1) full disk of space is used for parity. RAID 6 or RAIDZ2 distributes parity along with the data and can lose two physical drives instead of just ...
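The capacity rule behind those statements is simply that parity consumes the equivalent of p drives. A minimal sketch, assuming N equal drives and ignoring the padding and allocation overhead discussed in the other results on this page:

    # Rule-of-thumb usable capacity for parity RAID / RAIDZ levels
    # (assumed: equal drives; padding, metadata and slop space ignored).
    def usable_drives(n_drives, parity):
        if n_drives < parity + 2:          # conventional minimum: parity + 2 disks
            raise ValueError("not enough drives for this parity level")
        return n_drives - parity

    for parity, name in ((1, "RAID 5 / RAIDZ1"), (2, "RAID 6 / RAIDZ2"), (3, "RAIDZ3")):
        print(f"{name}: 6 drives -> ~{usable_drives(6, parity)} drives usable, "
              f"survives any {parity} drive failure(s)")

So with six drives, single parity leaves roughly five drives of raw space, double parity four, and triple parity three, before ZFS-specific overhead is taken into account.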
🌐
Reddit
reddit.com › r/zfs › which is better raidz1 with a hotspare, or raidz2?
r/zfs on Reddit: Which is better Raidz1 with a hotspare, or Raidz2?
RaidZ2. You don’t want to be vulnerable during resilvering. RaidZ2 can tolerate 2 drive failures, whereas RaidZ1 can only tolerate 1. Thus, if 1 drive fails on a RaidZ1, your data is vulnerable while your spare resilvers. With RaidZ2 you’ll have to purchase and swap out a replacement drive, but at least your data is not going to be vulnerable during resilvering.