You can't extend the existing raidz1 vdev by adding another disk but you can add another vdev to the pool to increase the pool's capacity. You'll need more than one additional disk if you want to retain redundancy. For example, you could use two disks to add a mirror vdev or 3+ disks for another raidz1 vdev.
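As a rough sketch of that vdev-add route, assuming a pool named tank and new, empty disks at hypothetical device paths:

```shell
# Assumption: pool "tank" already contains the raidz1 vdev; /dev/sdd and
# /dev/sde are two new, empty disks. This adds a mirror vdev alongside
# the raidz1 vdev; ZFS then stripes new writes across both vdevs.
zpool add tank mirror /dev/sdd /dev/sde

# With three spare disks you could add a second raidz1 vdev instead:
# zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf

zpool status tank   # confirm the new vdev appears in the pool layout
```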

Answer from Mike Fitzpatrick on Stack Exchange
Louwrentius
louwrentius.com › zfs-raidz-expansion-is-awesome-but-has-a-small-caveat.html
ZFS RAIDZ expansion is awesome but has a small caveat
You can't just add a single disk to the existing 3-disk RAIDZ vdev to create a 4-disk RAIDZ vdev because vdevs can't be expanded. The impact of this limitation is that you have to buy all storage upfront even if you don't need the space for years to come. Otherwise, by expanding with additional ...
Ars Technica
arstechnica.com › gadgets › 2021 › 06 › raidz-expansion-code-lands-in-openzfs-master
ZFS fans, rejoice—RAIDz expansion will be a thing very soon - Ars Technica
June 15, 2021 - OpenZFS founding developer Matthew ... the size of a single RAIDz vdev. For example, you can use the new feature to turn a three-disk RAIDz1 into a four-, five-, or six-disk RAIDz1....
GitHub
github.com › openzfs › zfs › discussions › 15232
RAIDZ Expansion feature discussion · openzfs/zfs · Discussion #15232
Raidz1 with larger drives has a tendency to break during resilver. ... With my n00b level of understanding you did a great job of explaining. With that said, I think I lean towards: back up my 2-disk mirrored vdev, then create a brand new vdev with my now 3 disks, and then copy the data back into the new vdev?
Author   openzfs
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
[SOLVED] - Adding more disks to an already exist ZFS RaidZ1 | Proxmox Support Forum
May 4, 2021 - Noted! ... Replace 1 old disk with a new disk which is empty. Use the old disk with the 2 remaining new disks. ... Hi again. NOW: raidz1 with these HDDs: N1, N2, N3. You have 3 new HDDs: X1, X2, X3. Before you start, ...
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
May I successively add a new hdd to an existing pool? | Proxmox Support Forum
June 13, 2024 - If you have a 3-disk raidz1 pool, which is not recommended anymore if your disks are over 2TB, to expand it you would need to add another 3-disk vdev. RAIDZ2 is the standard, since the odds of a 2nd disk failing during a resilver are not zero.
Reddit
reddit.com › r/zfs › zfs multiple vdev pool expansion
r/zfs on Reddit: ZFS multiple vdev pool expansion
April 4, 2025 -

Hi guys! I almost finished my home NAS and now choosing the best topology for the main data pool. For now I have 4 HDDs, 10 TB each. For the moment raidz1 with a single vdev seems the best choice but considering the possibility of future storage expansion and the ability to expand the pool I also consider a 2 vdev raidz1 configuration. If I understand correctly, this gives more iops/write speed. So my questions on the matter are:

  1. If now I build a raidz1 with 2 vdevs 2 disks wide (getting around 17.5 TiB of capacity) and somewhere in the future I buy 2 more drives of the same capacity, will I be able to expand each vdev to width of 3 getting about 36 TiB?

  2. If the answer to the first question is “Yes, my dude”, will this work with adding only one drive to one of the vdevs in the pool so one of them is 3 disks wide and another one is 2? If not, is there another topology that allows something like that? Stripe of vdevs?

I used zfs for some time but only as a simple raidz1, so not much practical knowledge was accumulated. The host system is truenas, if this is important.

Top answer
1 of 3
22

There are basically two ways of growing a ZFS pool.

Add more vdevs

This is what user1133275 is suggesting in their answer. It's done with zpool add (which has basically the same syntax as zpool create does for specifying storage), and it works well for what it does.

ZFS won't rebalance your stored data automatically, but it will start to write any new data to the new vdevs until the new vdev has about the same usage as the existing one(s).

Once you've added a vdev to a pool, you basically cannot remove it without recreating the pool from scratch. (Newer OpenZFS releases can remove top-level mirror and single-disk vdevs with zpool remove, but not raidz vdevs.)

All vdevs in a pool need to be above their respective redundancy thresholds for the pool to be importable. In other words, every vdev needs to be at least DEGRADED for the pool to function.
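The zpool add step described above can be sketched as follows; the device names are hypothetical, and the -n dry run is worth doing first since an added vdev can't easily be taken back:

```shell
# -n prints the resulting pool layout without changing anything.
zpool add -n tank raidz1 /dev/sdd /dev/sde /dev/sdf

# If the printed layout is what you intended, repeat without -n.
# (zpool refuses a vdev with mismatched redundancy unless forced with -f.)
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf
```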

Replace disks with larger ones

This is what you're discussing in your question. It's the normal way of growing a ZFS pool when you have a pool layout that you are happy with.

To replace a device with a new one, the new device needs to be at least as large as the old one.

Operationally, you'd hook up the new disk along with the old, and then zpool replace the old disk with the new one. (This creates a temporary replacing device which becomes a parent to the old and new disk; when the resilver completes, the replacing device is removed from the device tree and it looks like the new device was there all along.) Once the resilver completes, the old disk can be removed from the system.
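A minimal sketch of that replace cycle, assuming a pool named tank, an old disk /dev/sdb, and a larger /dev/sdd already connected:

```shell
# Start the resilver onto the new disk while the old one is still present.
zpool replace tank /dev/sdb /dev/sdd

# Watch progress; a temporary "replacing" device groups the old and new
# disks in the device tree until the resilver finishes.
zpool status -v tank
```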

Once all disks in a vdev are replaced by larger ones, you can expand the pool by running zpool online -e or by having the autoexpand property set to on (though I wouldn't really recommend the latter; pool expansion should be a conscious decision).
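Sketched with the same hypothetical names, once every disk in the vdev has been swapped for a larger one:

```shell
# Expand into the new space explicitly; repeat for each member device.
zpool online -e tank /dev/sdd

# Or opt in to automatic expansion on replacement (a pool-wide property):
zpool set autoexpand=on tank
zpool get autoexpand tank
```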

So which way is better?

That basically depends on your pool. As mentioned, the downside to having multiple vdevs is that they all need to be functional, so by adding vdevs you are actually, in a sense, reducing your safety margin. The upside, though, is that it's much easier to grow the pool piecemeal. Replacing devices in-place is basically the opposite; you don't need to keep as many vdevs functioning, but it isn't as easy to grow a pool piecemeal.

For me, frankly, assuming for a second that you're using rotational hard disks (since this seems like bulk storage), 20 TB is still well within reason for a single vdev pool.

My suggestion in your situation would be to get six drives of the 8 TB variety, and to set those up in a single raidz2 vdev. Doing so gives you a net storage capacity of around 32 TB, thus leaving you with about 35% free initially, and the ability to lose any two drives before any of your data is at significant risk. You could also consider running eight 6 TB drives for a net storage capacity of around 36 TB and starting out at 45% free. (I'd consider 6-8 drives to be slightly on the large end for raidz1, but fine for raidz2.)

Then plan to replace those drives either on a 4-5 year schedule (due to wear) or whenever the pool goes above about 80% full (because ZFS is much, much happier when it has good headroom). If your figures are accurate, you should be replacing those drives due to wear well before your pool starts getting full, while still allowing for a reasonable amount of unexpected growth in storage needs. When you replace the drives, you can decide whether you're happy with the pool size you've got based on then-current usage, or if you want to get larger drives and expand the pool.
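The capacity figures above follow from the usual raidz approximation, usable space ~= (disks - parity) * disk size, ignoring padding and metadata overhead:

```shell
# raidz2 reserves two disks' worth of parity, so usable ~= (n - 2) * size.
echo "6 x 8 TB raidz2: $(( (6 - 2) * 8 )) TB usable"
echo "8 x 6 TB raidz2: $(( (8 - 2) * 6 )) TB usable"
```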

2 of 3
15

In addition to the options in the excellent answer above, there will soon be another option.

The OpenZFS project (ZFS on Linux, ZFS on FreeBSD) is working on a feature to allow the addition of new physical devices to existing RAID-Z vdevs. This will allow, for instance, the expansion of a 6-drive RAID-Z2 vdev into a 7-drive RAID-Z2 vdev. This will happen while the filesystem is online, and will be repeatable once the expansion is complete (e.g., 7-drive vdev → 8-drive vdev).

  • As of September 2020, this feature is still in development: https://github.com/openzfs/zfs/pull/8853

  • As of October 2022, this feature is still in development: https://github.com/openzfs/zfs/pull/12225

  • As of July 2023, this feature is still in development: https://github.com/openzfs/zfs/pull/15022

  • As of November 2023, this feature was merged in main and is scheduled for release in OpenZFS 2.3: https://github.com/openzfs/zfs/pull/15022
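With the feature released, expanding a raidz vdev is done by attaching a disk to the vdev itself; the pool, vdev, and device names below are illustrative:

```shell
# Assumption: OpenZFS 2.3+, a pool "tank" whose first vdev is "raidz1-0",
# and a new empty disk /dev/sdd (the raidz_expansion feature flag must be
# enabled on the pool).
zpool attach tank raidz1-0 /dev/sdd

# The expansion runs while the pool stays online; progress appears in:
zpool status tank
```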

Find elsewhere
Reddit
reddit.com › r/zfs › zfs questions - multiple raidz1 vdev in a single pool reliability
r/zfs on Reddit: ZFS questions - multiple raidz1 vdev in a single pool reliability
April 19, 2020 -

Hello folks,

I'm very new to ZFS, so I'd like to apologize in advance if any of my questions have been answered already, but I couldn't find any post related to my questions.

I've built a zpool with a single raidz1 vdev containing 4x4TB disks. I'm thinking of what'll happen once I need to increase the size of this zpool or if I want to add more disks in the future for better reliability.

Given my setup, if I want to increase the size of my pool as well as having another disk for parity and be able to lose two disks on my pool, would adding another raidz1 vdev of 4x4TB be a good option? Would this give me two parity disks? Can two disks on the same vdev fail in this scenario?

If anyone has better suggestions that would be very much appreciated. The end goal is once I need more space and can afford buying more disks, I want to be able to have two parity disks and more storage capacity.

Thank you!

Top answer
1 of 5
7
if I want to increase the size of my pool as well as having another disk for parity and be able to lose two disks on my pool, would adding another raidz1 vdev of 4x4TB be a good option? Would this give me two parity disks? Can two disks on the same vdev fail in this scenario?

Two 4-wide raidz1 isn't completely bonkers, but it's a bit out on the ragged edge imo. It's possible to survive a second disk failure, but not anything you'd want to bank on. Ignoring, for the moment, the fact that all four of your original disks will be older and all four in the second vdev will be newer, the odds of surviving a second disk failure after one has failed are 4/7, roughly 57%. And the relative ages of the disks are, frankly, a lot to ignore. I don't typically recommend RAIDz1 any wider than three disks.

If you want higher performance, with eight disks your top choices are a pool of four 2-wide mirrors (highest performance, fastest rebuild, 50% storage efficiency) or a pool of two 4-wide RAIDz2 vdevs (moderate performance, a bit less than 50% storage efficiency, extreme survivability).

If you want to max out storage capacity and survivability, but don't need a performance boost from what you're used to now with the four-disk raidz1, you'll want a single 8-wide raidz2: storage efficiency a bit less than 3/4, guaranteed two-disk fault tolerance (assuming no previously corrupt blocks), not uncovered (i.e. no remaining parity/redundancy for corruption repair) until the second failure.
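The 4/7 odds quoted above are simple counting: after the first disk dies, 7 disks remain, and only the 4 in the other (still fully redundant) vdev can fail without losing the pool:

```shell
# 3 surviving disks share a vdev with the failed one (a second failure
# there kills the pool); the other vdev's 4 disks can each fail safely.
echo "odds of surviving a second failure: $(( 100 * 4 / 7 ))%"
```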
2 of 5
3
Mathematically chances are worse with two vdevs of raidz1 than a single raidz2 vdev. The difference is not worth choosing one over the other though. If you have data you cannot lose you keep multiple backups of it as we all know (but fail at now and then). Since you gain ease of capacity expansion with multiple vdevs I would just do that. That is if you absolutely need to have this be part of the same pool. Otherwise you can create a new pool for the second set of 4x4TB drives. I decided I didn't care about having everything in one huge pool and I just manually spread the data over the pools.
Proxmox
forum.proxmox.com › home › forums › proxmox backup server › proxmox backup: installation and configuration
[SOLVED] - ZFS raidz1: Expanding not possible? | Proxmox Support Forum
May 15, 2023 - Usually you would either: 1.) add another vdev (best case would be an identical one but I guess you don't want to buy 8 more U.2 disks...) and stripe it, so you get no downtime 2.) migrate all data to another storage, destroy that pool, create ...
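The "migrate all data" option in that answer is typically done with a recursive snapshot and zfs send/receive; the pool and snapshot names here are hypothetical:

```shell
# Assumption: source pool "tank", scratch pool "backup" with enough space.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F backup/tank

# After verifying the copy, destroy and recreate "tank" with the new
# layout, then send the snapshot back the same way.
```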
Reddit
reddit.com › r/zfs › pooling individual zfs drives/vdevs later into raidz pool?
r/zfs on Reddit: Pooling individual ZFS drives/vdevs later into RAIDZ pool?
January 5, 2024 -

Scenario...(I assume 3+ drives is going to be required for this scenario, not 2)...

I have 3 individual ZFS drives/partitions, which would be three vdev's.

Is it possible to pool the three drives/vdev's into a single pool without destroying the data?

I'm thinking of beginning my ZFS journey one step at a time, and wanted to start with individual ZFS drives/vdevs first. Then later, as I know more, plan more, buy more drives, then create the ZFS pool for full RAIDZ protection.

If not capable now, will the upcoming RAIDZ expansion have the capability of merging drives into a new pool, without data loss?

Or is ZFS storage planning set in stone? And I better know what I want, because there's no way to change the storage?

DiskInternals
diskinternals.com › home › raid recovery › zfs raid with different size drives – setup, limitations, and best practices
ZFS RAID with Different Size Drives – Setup, Limitations, and Best Practices | DiskInternals
February 13, 2025 - One approach to expanding your ZFS storage is to replace smaller drives with larger ones. This process involves: 1. Drive Replacement: Gradually replace each smaller drive in a vdev with a larger drive.
Reddit
reddit.com › r/zfs › is there a way to expand a zfs pool after creation?
r/zfs on Reddit: Is there a way to expand a zfs pool after creation?
April 27, 2021 - If he has a pool of 3 single drive vdevs... don't bother expanding the pool, it will be empty soon anyway... More replies ... No. You can add a new vdev, but it also needs to be redundant. Ideally, if you start out with a three disk RAIDz1, ...
DiskInternals
diskinternals.com › home › raid recovery › zfs raid expansion: how to expand raidz and zfs pools safely
ZFS RAID Expansion: How to Expand RAIDZ and ZFS Pools Safely | DiskInternals
February 20, 2025 - However, in ZFS, the RAIDZ structure is more fixed, and expanding it requires either adding a new vdev (a collection of disks) to the pool or replacing each disk with larger ones, which can be time-consuming and limits flexibility.
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
ZFS VDEV Expansion | Proxmox Support Forum
September 23, 2022 - Yes, there isn't an "extension" of a ZFS vdev, which you get with regular RAID by just adding disks and growing. The only way to grow now is to replace the current disks with bigger ones, or add a new vdev.
TrueNAS Community
truenas.com › forums › archives › freenas (legacy software releases) › freenas help & support › storage
Expanding RaidZ1 with single disk | TrueNAS Community
May 8, 2018 - With traditional mirrors (2 disks per vdev) that's correct. That's why I said that a ZFS mirror isn't limited to 2 disks. Depending on your risk tolerance one might look at 3- or 4-way mirrors.
Reddit
reddit.com › r/proxmox › zfs raid extension
r/Proxmox on Reddit: ZFS Raid extension
January 27, 2025 -

So I am currently building my Ugreen nasync dxp4800 (4 possible bays) and while I got my SSDs ready, I still need some HDDs. However my setup was already quite expensive and I want to start with 2 or 3 HDDs and add the rest later. How easy is it to add a drive later to an existing pool, and is this already possible in Proxmox? Which RAID level would you recommend?