When I started my career in IT as a data center technician / low-level systems admin, more than once I saw a disk replacement during RAID-5 maintenance where, while the array was rebuilding, another drive gave up the ghost. And guess what happens when you lose two drives concurrently in a RAID-5? *makes explosion sound* So yeah, the same logic applies here.

Answer from DeputyCartman on reddit.com
TrueNAS Community
What's the stigma behind using RAIDZ1? | TrueNAS Community
July 9, 2023 - Mirrors have a way better risk profile as the number of drives goes up. Take, for example, a 6-drive RAIDZ1 vs 6-drive striped mirrors. In a 6-drive RAIDZ1, you will lose your pool as soon as a second drive goes bad; a 6-drive striped mirror can suffer up to 3 drive failures so long as the drives ...
Proxmox
RaidZ1 performance ZFS on host vs VM | Proxmox Support Forum
February 11, 2024 - If you don't care about compression, encryption, snapshots, or data integrity, then use old file systems. Use #atop to see CPU and disk usage. Maybe it will show something interesting. ... If your VM writes in blocks of 12K then the raidz1 could write 16K in parallel to the drives (assuming ZFS is that ...
YouTube: ElectronicsWizardry, "Taking a look at RAIDZ expansion" (December 2, 2024)
Reddit
r/zfs on Reddit: Why it makes sense to use RAIDZ2 instead of RAIDZ

Except no. No no no. It's the reason why I honestly would never touch raid5, but raidz is fine. There's a massive difference between raid5/6 and raidz1/z2: a URE during a raid 5 or 6 rebuild means your array is gone, while a URE during a raidz1 or z2 rebuild means you lose a file or two. Those are different universes in terms of impact. And it's also why you have backups.

Server Fault
truenas - ZFS - Is RAIDZ-1 really that bad? - Server Fault

Before we go into specifics, consider your use case. Are you storing photos, MP3s, and DVD rips? If so, you might not care whether you permanently lose a single block from the array. On the other hand, if it's important data, this might be a disaster.

The statement that RAIDZ-1 is "not good enough for real world failures" is because you are likely to have a latent media error on one of your surviving disks when reconstruction time comes. The same logic applies to RAID5.
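To put a rough number on "likely", here is a back-of-the-envelope sketch of the chance of hitting at least one unrecoverable read error (URE) while reading every surviving disk end to end. The 10^-14 URE rate, the 4 TB drive size, and the 6-disk array are illustrative assumptions, not figures from this answer:

```python
# Back-of-the-envelope: P(>= 1 URE) during a full RAID5/RAIDZ1 rebuild.
# Assumes the common consumer spec of one URE per 1e14 bits read, and that
# every bit on every surviving disk must be read; real drives often do better.
drive_bytes = 4e12        # hypothetical 4 TB drives
surviving_disks = 5       # e.g. a 6-disk array after one failure
ure_rate = 1e-14          # UREs per bit read (typical spec-sheet figure)

bits_read = drive_bytes * 8 * surviving_disks
p_clean = (1 - ure_rate) ** bits_read   # whole rebuild reads cleanly
p_ure = 1 - p_clean
print(f"P(at least one URE during rebuild) ~ {p_ure:.0%}")
```

On classic RAID5 that single URE can abort the whole rebuild; on RAIDZ1 it costs a file or part of one, as described below.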

ZFS mitigates this failure to some extent. If a RAID5 device can't be reconstructed, you are pretty much out of luck; copy your (remaining) data off and rebuild from scratch. With ZFS, on the other hand, it will reconstruct all but the bad chunk, and let the administrator "clear" the errors. You'll lose a file/portion of a file, but you won't lose the entire array. And, of course, ZFS's parity checking means that you will be reliably informed that there's an error. Otherwise, I believe it's possible (although unlikely) that multiple errors will result in a rebuild apparently succeeding, but giving you back bad data.

Since ZFS is a "Rampant Layering Violation," it also knows which areas don't have data on them, and can skip them in the rebuild. So if your array is half empty you're half as likely to have a rebuild error.

You can reduce the likelihood of these kinds of rebuild errors on any RAID level by doing regular "zpool scrubs" or "mdadm checks" of your array. There are similar commands/processes for other RAID systems; e.g., LSI/Dell PERC RAID cards call this "patrol read." These go read everything, which may help the disk drives find failing sectors and reassign them before they become permanent. If they are permanent, the RAID system (ZFS/md/RAID card/whatever) can rebuild the data from parity.

Even if you use RAIDZ2 or RAID6, regular scrubs are important.

One final note - RAID of any sort is not a substitute for backups - it won't protect you against accidental deletion, ransomware, etc. Although regular ZFS snapshots can be part of a backup strategy.

Answer from Dan Pritts on serverfault.com
45Drives
RAID and RAIDZ
It is one of the most popular RAID ... it the whole pool could be lost. RAIDZ1 has a benefit over RAID 5 as it has solved the write-hole phenomenon that usually plagues parity and striping RAID levels....
Proxmox
3 x 4TB Samsung SSD in ZFS raidz1 => poor performance | Proxmox Support Forum
April 20, 2021 - Is raidz1 that bad? How and where to configure ZFS correctly. Click to expand... Perhaps you should provide some more details of your setup. Then it will be easier for people to help. For instance how are you using the RAIDZ1 pool? Is it used for the Proxmox host, the VM images as data storage ...
RAIDZ Calculator
RAIDZ Types Reference
RAIDZ levels reference covers various aspects and tradeoffs of the different RAIDZ levels.
Reddit
r/zfs on Reddit: Which is better Raidz1 with a hotspare, or Raidz2?

RaidZ2. You don’t want to be vulnerable during resilvering. RaidZ2 can tolerate 2 drive failures, whereas RaidZ1 can only tolerate 1. Thus, if 1 drive fails on a RaidZ1, your data is vulnerable while your spare resilvers. With RaidZ2 you’ll have to purchase and swap out a replacement drive, but at least your data is not going to be vulnerable during resilvering.

Reddit
r/zfs on Reddit: Do I want RAIDz1?

Be aware that if another drive fails while doing a rebuild, it is almost certain that all data will be lost. General recommendation is to do RAIDZ2 or greater, but it all depends on whether there is a full backup elsewhere and general risk aversion to data loss.

Server Fault
zfs - Is a large RAID-Z array just as bad as a large RAID-5 array? - Server Fault

Even given what one of the other answers here laid out, namely that ZFS only works with actual used blocks and not empty space, yes, it is still dangerous to make a large RAIDZ1 vdev. Most pools end up at least 30-50% utilized, and many go right up to the recommended maximum of 80% (some go past it; I highly recommend you do not do that at all, for performance reasons), so the fact that ZFS deals only with used blocks is not a huge win. Also, some of the other answers make it sound like a bad read is what causes the problem. This is not so. Bit rot inside a block is usually not what's going to screw you here; it's another disk flat out going bad while the resilver from the first failed disk is still going on that'll kill you. And on 3 TB disks in a large raidz1 it can take days, even weeks, to resilver onto a new disk, so your chance of that happening is not insignificant.
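The "second disk dies mid-resilver" risk in the paragraph above can be sketched the same way; the 3% annualized failure rate, the week-long resilver, and the 8-disk vdev are illustrative assumptions:

```python
# Back-of-the-envelope: P(another disk fails while the resilver runs).
# Assumes independent failures at a constant annualized failure rate (AFR),
# which is optimistic: drives bought as a batch often wear out together.
afr = 0.03             # hypothetical 3% annualized failure rate per drive
resilver_days = 7      # hypothetical week-long resilver on large disks
surviving_disks = 7    # e.g. an 8-disk raidz1 after one failure

p_disk_survives = 1 - afr * resilver_days / 365
p_second_failure = 1 - p_disk_survives ** surviving_disks
print(f"P(second failure during resilver) ~ {p_second_failure:.1%}")
```

A fraction of a percent may look small, but it is per incident, it compounds with the URE risk, and it grows with resilver time and vdev width.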

My personal recommendation to customers is to never use RAIDZ1 (RAID5 equivalent) at all with > 750 GB disks, ever, just to avoid a lot of potential unpleasantness. I've been OK with them breaking this rule because of other reasons (the system has a backup somewhere else, the data isn't that important, etc), but usually I do my best to push for RAIDZ2 as a minimum option with large disks.

Also, for a number of reasons, I usually recommend not going more than 8-12 disks in a raidz2 stripe or 11-15 disks in a raidz3 stripe. You should be on the low end of those ranges with 3 TB disks, and could maybe be OK on the high end of those ranges with 1 TB disks. Reducing the odds that more disks fail while a resilver is going on is only one of those reasons, but a big one.

If you're looking for some sane rules of thumb (edit 04/10/15 - I wrote these rules with only spinning disks in mind; because they're also logical [why would you do less than 3 disks in a raidz1?] they make some sense even for SSD pools, but all-SSD pools were not a thing in my head when I wrote these down):

  • Do not use raidz1 at all on > 750 GB disks.
  • Do not use less than 3 or more than 7 disks on a raidz1.
  • If thinking of using 3-disk raidz1 vdevs, seriously consider 3-way mirror vdevs instead.
  • Do not use less than 6 or more than 12 disks on a raidz2.
  • Do not use less than 7 or more than 15 disks on a raidz3.
  • Always remember that unlike traditional RAID arrays, where the number of disks increases IOPS, in ZFS it is the number of vdevs, so going with shorter-stripe vdevs improves pool IOPS potential.
Answer from Nex7 on serverfault.com
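The last rule of thumb above is easy to make concrete: ZFS pool random IOPS scale with the number of vdevs, not the number of disks, so the same 12 disks give very different results depending on layout. The 150 random IOPS per spinning disk is an illustrative assumption:

```python
# Back-of-the-envelope: each vdev delivers roughly one disk's worth of
# random IOPS, so pool IOPS ~ number of vdevs, regardless of disk count.
disk_iops = 150   # hypothetical random IOPS for a single spinning disk

layouts = {"1 x 12-disk raidz2": 1,   # one wide vdev
           "2 x 6-disk raidz2": 2,    # two narrower vdevs
           "6 x 2-disk mirror": 6}    # striped mirrors
for name, vdevs in layouts.items():
    print(f"{name}: ~{vdevs * disk_iops} random IOPS")
```

Same spindle count, roughly a sixfold spread in random IOPS potential between the widest raidz2 layout and striped mirrors.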
Proxmox
Avoid IO Delay: RAIDz1 vs RAID5 | Proxmox Support Forum
December 21, 2024 - I am currently running a RAIDz1 with 8 classic 3.5 HDDs and 128GB RAM on my Proxmox server. I only run LXCs. These are located on a separate NVMe. ZFS datasets are integrated in various LXCs via mount points. Depending on the data activity on the ZFS datasets, there are always quite high IO...
Level1Techs
Best practice for raidz on differnt sized disks in 2023? - Storage - Level1Techs Forums
May 5, 2023 - If one has three 8TB and four 16TB disks, is it best to divide those in two different vdevs? When googling that’s what most people say, but since zfs is a bit of a moving target I thought it would be better to ask the people who are up to date. Edit: Changed terminology to match what I was ...
Proxmox
[TUTORIAL] - FabU: Can I use ZFS RaidZ for my VMs? | Proxmox Support Forum
January 1, 2025 - Assumption: you use at least four identical devices for that. Mirrors, RaidZ, RaidZ2 are possible - theoretically. Technically correct answer: yes, it works. But the right answer is: no, do not do that! The recommendation is very clear: use “striped mirrors”. This results in something ...
TrueNAS Community
The problem with RAIDZ - Resources - TrueNAS Community Forums
April 16, 2024 - This resource was originally created by user: Jamberry on the TrueNAS Community Forums Archive. Please DM this account or comment in this thread to claim it. The problem with RAIDZ or why you probably won’t get the storage efficiency you think you will get.​ Work in progress, probably contains ...
Reddit
r/Proxmox on Reddit: Boot drive: ZFS Raid1 or ZFS Raidz1?

RAID1, also called "mirroring": data is written identically to all disks. This mode requires at least 2 disks of the same size, and the resulting capacity is that of a single disk. RAIDZ-1, a variation on RAID-5 with single parity: requires at least 3 disks. Which to choose is going to depend on your hardware, budget, and fault tolerances.

FreeBSD
ZFS - Performance: RAIDz1 vs mirroring | The FreeBSD Forums
December 16, 2020 - In my home PC, one of my two HDD that I have in (btrfs) RAID 0 failed. So, I am shopping for the replacement. I am planning to install FreeBSD (zfs) and set up RAIDZ (buying 3 SSDs). Now, I am reading everywhere that mirroring 2 disks (1 vdev per disk) is faster than RAIDz1 with 3 disks. And it...
Proxmox
[SOLVED] - Hardware Raid or ZFS | Proxmox Support Forum
January 2, 2025 - Tomorrow I have 4 x DC600M 3.84TB disks arriving. The whole reason I am getting these is due to very slow restores when using HDD. The question now is do I use hardware raid or zfs. I have a hardware raid card with BBU and also LSI HBAs so I can go with either. Also in terms of raid level, I...
Proxmox
ZFS usage incorrect for a VM in RAIDZ1 | Proxmox Support Forum
August 9, 2023 - Hi, I'm currently using a zfs pool as RAIDZ1. The ZFS pool has the thin provision argument set. I created a hard disk on this zfs pool, with discard=1. In the VM, df -h tells me there is a 295G disk, with 188G used. In the pool in proxmox, the summary says 294Gb used, but the vm disk (raw...
Reddit
r/zfs on Reddit: RAIDz1 Queston

"Most people on this reddit are data driven folks and can weigh the real against the hype." I think you're taking a narrow view of "data driven" on this. It costs me $X to buy an extra hard drive and run raidz2 instead of raidz1. If I experienced data loss due to raidz1, it would take Y hours of my time to recover from backups. The value I place on Y hours of my time is significantly larger than $X, so the exact failure probability doesn't matter to me. I happily pay $X extra as an insurance policy.
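The insurance argument above reduces to a breakeven probability. All figures here are hypothetical stand-ins for the commenter's $X and Y:

```python
# Back-of-the-envelope: when is the extra raidz2 drive worth it?
extra_drive_cost = 150.0   # $X (hypothetical): one more drive for raidz2
recovery_hours = 40.0      # Y (hypothetical): time to restore from backups
hourly_value = 50.0        # hypothetical value of one hour of your time

loss_if_pool_dies = recovery_hours * hourly_value
breakeven_p = extra_drive_cost / loss_if_pool_dies
print(f"cost of losing the raidz1 pool: ${loss_if_pool_dies:.0f}")
print(f"raidz2 pays off if lifetime loss probability exceeds {breakeven_p:.1%}")
```

With these made-up numbers the extra drive pays for itself whenever the chance of ever losing the pool exceeds 7.5%, which is why the commenter doesn't bother estimating the probability precisely.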