When I started my career in IT as a data center technician / low-level systems admin, I more than once saw a disk replacement during RAID-5 maintenance where, while the array was rebuilding, another drive gave up the ghost. And guess what happens when you lose two drives concurrently in a RAID-5? *makes explosion sound* So yeah, the same logic applies here.

Answer from DeputyCartman on reddit.com
TrueNAS Community
What's the stigma behind using RAIDZ1? | TrueNAS Community
July 9, 2023 - Mirrors have a far better risk profile as the number of drives goes up. Take, for example, a 6-drive RAIDZ1 vs. a 6-drive pool of striped mirrors. In a 6-drive RAIDZ1 you will lose your pool as soon as a second drive goes bad, while a 6-drive pool of striped mirrors can survive up to 3 drive failures so long as the drives ...
Reddit
r/zfs on Reddit: Why it makes sense to use RAIDZ2 instead of RAIDZ

Except no. No no no. This is the reason I honestly would never touch RAID5, but RAIDZ is fine. There is a massive difference between RAID5/6 and RAIDZ1/Z2: a URE during a RAID5 or RAID6 rebuild means your array is gone, while a URE during a RAIDZ1 or RAIDZ2 rebuild means you lose a file or two. Those are different universes in terms of impact. It's also why you have backups.
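For anyone who hasn't seen this behavior: when a RAIDZ rebuild hits unrecoverable blocks, ZFS names the damaged files rather than failing the whole array. A sketch of the recovery workflow ("tank" is a placeholder pool name):

```shell
# Show pool health plus the list of files with permanent errors:
zpool status -v tank

# Restore the listed files from backup (this is where those backups
# come in), then acknowledge the recorded errors:
zpool clear tank
```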

Videos
YouTube — ElectronicsWizardry, "Taking a look at RAIDZ expansion" (December 2, 2024)
45Drives
RAID and RAIDZ
It is one of the most popular RAID ... it the whole pool could be lost. RAIDZ1 has a benefit over RAID 5 as it has solved the write-hole phenomenon that usually plagues parity and striping RAID levels....
Proxmox
RaidZ1 performance ZFS on host vs VM | Proxmox Support Forum
February 11, 2024 - If you don't care about compression, encryption, snapshots, or data integrity, then use old file systems. Use #atop to see CPU and disk usage. Maybe it will show something interesting. ... If your VM writes in blocks of 12K, then the raidz1 could write 16K in parallel to the drives (assuming ZFS is that ...
Server Fault
truenas - ZFS - Is RAIDZ-1 really that bad? - Server Fault

Before we go into specifics, consider your use case. Are you storing photos, MP3s, and DVD rips? If so, you might not care whether you permanently lose a single block from the array. On the other hand, if it's important data, this might be a disaster.

The statement that RAIDZ-1 is "not good enough for real world failures" is because you are likely to have a latent media error on one of your surviving disks when reconstruction time comes. The same logic applies to RAID5.
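The "likely to have a latent media error" claim can be put in rough numbers. A back-of-envelope sketch, assuming independent errors at the commonly quoted spec rate of one URE per 10^14 bits (real drives often do better, and many are specced at 10^-15):

```python
import math

def p_at_least_one_ure(tb_read, ure_rate_per_bit=1e-14):
    """Probability of hitting at least one unrecoverable read error
    while reading tb_read terabytes, assuming independent errors at
    the quoted spec rate (Poisson approximation)."""
    bits = tb_read * 1e12 * 8
    return 1 - math.exp(-ure_rate_per_bit * bits)

# Rebuilding a 6 x 4 TB RAIDZ1/RAID5 means reading ~20 TB of surviving data:
print(f"{p_at_least_one_ure(20):.0%}")         # ~80% at the 1e-14 spec
print(f"{p_at_least_one_ure(20, 1e-15):.0%}")  # ~15% at the 1e-15 spec
```

This is exactly why the impact of that one bad read matters so much: on RAID5 it ends the rebuild, on RAIDZ it costs a file.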

ZFS mitigates this failure to some extent. If a RAID5 device can't be reconstructed, you are pretty much out of luck; copy your (remaining) data off and rebuild from scratch. ZFS, on the other hand, will reconstruct all but the bad chunk and let the administrator "clear" the errors. You'll lose a file or a portion of a file, but you won't lose the entire array. And, of course, ZFS's parity checking means that you will be reliably informed that there's an error. Otherwise, I believe it's possible (although unlikely) that multiple errors will result in a rebuild apparently succeeding, but giving you back bad data.

Since ZFS is a "Rampant Layering Violation," it also knows which areas don't have data on them, and can skip them in the rebuild. So if your array is half empty you're half as likely to have a rebuild error.

You can reduce the likelihood of these kinds of rebuild errors on any RAID level by doing regular "zpool scrub" or "mdadm check" runs on your array. There are similar commands/processes for other RAID systems; e.g., LSI/Dell PERC RAID cards call this "patrol read." These go read everything, which may help the disk drives find failing sectors and reassign them before they become permanent. If they are permanent, the RAID system (ZFS/md/RAID card/whatever) can rebuild the data from parity.

Even if you use RAIDZ2 or RAID6, regular scrubs are important.
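For example, a monthly scrub can be scheduled from root's crontab (a sketch; "tank" is a placeholder pool name, the binary path varies by OS, and many distros already ship a systemd timer or periodic script for this):

```shell
# crontab entry: scrub the pool at 03:00 on the 1st of each month
0 3 1 * * /sbin/zpool scrub tank

# mdadm equivalent: kick off a full check of /dev/md0
0 3 1 * * echo check > /sys/block/md0/md/sync_action
```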

One final note - RAID of any sort is not a substitute for backups - it won't protect you against accidental deletion, ransomware, etc., although regular ZFS snapshots can be part of a backup strategy.
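To illustrate that last point, snapshots are cheap to take and easy to script ("tank/data" and the file paths are placeholders; note that snapshots live on the same pool, so they complement rather than replace off-machine backups):

```shell
# Take a read-only, point-in-time snapshot:
zfs snapshot tank/data@before-upgrade

# List snapshots, pull individual files back, or roll back wholesale:
zfs list -t snapshot tank/data
cp /tank/data/.zfs/snapshot/before-upgrade/important.doc /tank/data/
zfs rollback tank/data@before-upgrade
```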

Answer from Dan Pritts on serverfault.com
RAIDZ Calculator
RAIDZ Types Reference
RAIDZ levels reference covers various aspects and tradeoffs of the different RAIDZ levels.
Reddit
r/zfs on Reddit: Which is better Raidz1 with a hotspare, or Raidz2?

RaidZ2. You don’t want to be vulnerable during resilvering. RaidZ2 can tolerate 2 drive failures, whereas RaidZ1 can only tolerate 1. Thus, if 1 drive fails on a RaidZ1, your data is vulnerable while your spare resilvers. With RaidZ2 you’ll have to purchase and swap out a replacement drive, but at least your data is not going to be vulnerable during resilvering.
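For reference, the two layouts differ only in the vdev keyword at creation time, so RAIDZ2 costs nothing in complexity (device names below are placeholders):

```shell
# 6-disk RAIDZ2: any two disks can fail without losing the pool
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# When a disk does fail, swap it and resilver:
zpool replace tank da2 da6

# The RAIDZ1-plus-hot-spare alternative instead attaches the spare up front:
# zpool create tank raidz1 da0 da1 da2 da3 da4 spare da5
```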

Reddit
r/zfs on Reddit: Do I want RAIDz1?

Be aware that if another drive fails while doing a rebuild, it is almost certain that all data will be lost. General recommendation is to do RAIDZ2 or greater, but it all depends on whether there is a full backup elsewhere and general risk aversion to data loss.

Proxmox
3 x 4TB Samsung SSD in ZFS raidz1 => poor performance | Proxmox Support Forum
April 20, 2021 - Is raidz1 that bad? How and where to configure ZFS correctly. Click to expand... Perhaps you should provide some more details of your setup. Then it will be easier for people to help. For instance how are you using the RAIDZ1 pool? Is it used for the Proxmox host, the VM images as data storage ...
Proxmox
Avoid IO Delay: RAIDz1 vs RAID5 | Proxmox Support Forum
December 21, 2024 - I am currently running a RAIDz1 with 8 classic 3.5 HDDs and 128GB RAM on my Proxmox server. I only run LXCs. These are located on a separate NVMe. ZFS datasets are integrated in various LXCs via mount points. Depending on the data activity on the ZFS datasets, there are always quite high IO...
Server Fault
zfs - Is a large RAID-Z array just as bad as a large RAID-5 array? - Server Fault

Even given what one of the other answers here laid out, namely that ZFS only works with actual used blocks and not empty space, yes, it is still dangerous to make a large RAIDZ1 vdev. Most pools end up at least 30-50% utilized, and many go right up to the recommended maximum of 80% (some go past it; I highly recommend you do not do that, for performance reasons), so the fact that ZFS deals only with used blocks is not a huge win. Also, some of the other answers make it sound like a bad read is what causes the problem. This is not so. Bit rot inside a block is usually not what's going to screw you here; it's another disk flat-out going bad while the resilver from the first failed disk is still going on that will kill you. And on 3 TB disks in a large RAIDZ1, resilvering onto a new disk can take days, even weeks, so your chance of that happening is not insignificant.

My personal recommendation to customers is to never use RAIDZ1 (RAID5 equivalent) at all with > 750 GB disks, ever, just to avoid a lot of potential unpleasantness. I've been OK with them breaking this rule because of other reasons (the system has a backup somewhere else, the data isn't that important, etc), but usually I do my best to push for RAIDZ2 as a minimum option with large disks.

Also, for a number of reasons, I usually recommend not going more than 8-12 disks in a raidz2 stripe or 11-15 disks in a raidz3 stripe. You should be at the low end of those ranges with 3 TB disks, and could maybe be OK at the high end of those ranges on 1 TB disks. Reducing the chance that more disks fail while a resilver is going on is only one of those reasons, but a big one.

If you're looking for some sane rules of thumb (edit 04/10/15 - I wrote these rules with only spinning disks in mind; because they're also logical [why would you use fewer than 3 disks in a raidz1?] they make some sense even for SSD pools, but all-SSD pools were not a thing in my head when I wrote them down):

  • Do not use raidz1 at all on > 750 GB disks.
  • Do not use less than 3 or more than 7 disks on a raidz1.
  • If thinking of using 3-disk raidz1 vdevs, seriously consider 3-way mirror vdevs instead.
  • Do not use less than 6 or more than 12 disks on a raidz2.
  • Do not use less than 7 or more than 15 disks on a raidz3.
  • Always remember that unlike traditional RAID arrays, where the number of disks increases IOPS, in ZFS it is the number of vdevs, so going with shorter stripe vdevs improves a pool's IOPS potential.
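That last bullet, and the capacity price it carries, is easy to see with a toy model (a sketch only: it ignores ZFS metadata, allocation padding, and the 80% utilization guidance, and `layout_summary` is an illustrative name, not a real tool):

```python
def layout_summary(disks, disk_tb, vdev_width, parity):
    """Rough capacity/IOPS model for a pool of identical vdevs.

    parity: 1 for raidz1, 2 for raidz2, 3 for raidz3; a mirror pair
    can be approximated as width 2 with parity 1.
    """
    vdevs = disks // vdev_width
    usable_tb = vdevs * (vdev_width - parity) * disk_tb
    # Random IOPS scale with the number of vdevs, not the number of disks.
    relative_iops = vdevs
    return usable_tb, relative_iops

# 12 x 4 TB disks, three ways:
print(layout_summary(12, 4, 12, 2))  # one wide raidz2      -> (40, 1)
print(layout_summary(12, 4, 6, 2))   # two 6-disk raidz2    -> (32, 2)
print(layout_summary(12, 4, 2, 1))   # six mirror pairs     -> (24, 6)
```

Shorter vdevs give up raw capacity but multiply the pool's IOPS potential, which is the trade-off the rules above are steering around.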
Answer from Nex7 on serverfault.com
Reddit
r/zfs on Reddit: RAIDz1 Queston

"Most people on this reddit are data-driven folks and can weigh the real against the hype." I think you're taking a narrow view of "data driven" on this. It costs me $X to buy an extra hard drive and run RAIDZ2 instead of RAIDZ1. If I experienced data loss due to RAIDZ1, it would take Y hours of my time to recover from backups, and the value I place on Y hours of my time is significantly larger than $X. So the exact failure probability doesn't matter to me; I happily pay $X extra as an insurance policy.

Reddit
r/Proxmox on Reddit: Reasons not to use 2 NVMe SSDs in RAIDZ1 for both Proxmox and VMs/app data in a homelab environment

In short: no. I have been running the exact same setup for years without issues. Two things to consider: back up your containers so they can be restored if your system disks fail, and back up your system's config files for easier recovery should both disks fail.

TrueNAS Community
The problem with RAIDZ | TrueNAS Community
December 13, 2023 - The problem with RAIDZ or why you probably won't get the storage efficiency you think you will get. As a ZFS rookie, I struggled a fair bit to find out what settings I should use for my Proxmox hypervisor. To learn more about ZFS and help other rookies, I wrote down this wall of text. Although...
FreeBSD
ZFS - Performance: RAIDz1 vs mirroring | The FreeBSD Forums
December 16, 2020 - In my home PC, one of the two HDDs that I have in (btrfs) RAID 0 failed. So I am shopping for a replacement. I am planning to install FreeBSD (ZFS) and set up RAIDZ (buying 3 SSDs). Now, I am reading everywhere that mirroring 2 disks (1 vdev per disk) is faster than RAIDZ1 with 3 disks. And it...
TrueNAS Community
The problem with RAIDZ - Resources - TrueNAS Community Forums
March 4, 2024 - This resource was originally created by user: Jamberry on the TrueNAS Community Forums Archive. Please DM this account or comment in this thread to claim it. The problem with RAIDZ or why you probably won’t get the storage efficiency you think you will get.​ Work in progress, probably contains ...
Calomel
ZFS Raidz Performance, Capacity and Integrity Comparison @ Calomel.org
ZFS Raid Speed Capacity and Performance ...
... TB, w=106MB/s, rw=49MB/s, r=589MB/s
3x 4TB, stripe (raid0), 11.3 TB, w=392MB/s, rw=86MB/s, r=474MB/s
3x 4TB, raidz1 (raid5), 7.5 TB, w=225MB/s, rw=56MB/s, r=619MB/s
4x 4TB, 2 striped mirrors, 7.5 TB, w=226MB/s, rw=53MB/s, r=644MB/s ...
JRS Systems
ZFS: You should use mirror vdevs, not RAIDZ. – JRS Systems: the blog
The only disk more heavily loaded ... sound bad, but remember that it’s no more heavily loaded than it would’ve been as a RAIDZ member. Each block resilvered on a RAIDZ vdev requires a block to be read from each surviving RAIDZ member; each block written to a resilvering mirror only requires one block to be read from a surviving vdev member. For a six-disk RAIDZ1 vs a six disk ...
TrueNAS Community
Should I RAID-Z1 or RAID-Z2? | TrueNAS Community
June 2, 2011 - I'm going to be building myself an NAS in a month or two and I'm trying to decide whether I want RAID-Z1 or RAID-Z2. Now, if this was an enterprise-level application I'd just go with Z2 and be done with it but this is for my own, personal use. The zpool is going to be used for file storage...