You can break the mirror and remove one of the drives, then use that drive and the third drive to create a raidz1 in a degraded state (one drive missing). Then copy the files from the degraded mirror to the degraded raidz1, wipe the mirror, and use its remaining disk to replace the missing raidz1 member. Answer from mrxsdcuqr7x284k6 on reddit.com
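A minimal sketch of that migration, assuming hypothetical pool names (oldpool, newpool), placeholder device names, and a sparse file standing in for the missing raidz1 member; none of these names come from the original answer:

    # Detach one disk from the two-way mirror, leaving the old pool degraded but intact.
    zpool detach oldpool /dev/sdc

    # Create a sparse file at least as large as a real disk to act as the missing member.
    truncate -s 10T /tmp/fake-disk.img

    # Build the raidz1 from the freed disk, the third disk, and the placeholder file.
    zpool create newpool raidz1 /dev/sdc /dev/sdd /tmp/fake-disk.img

    # Take the placeholder offline so no data lands on it, then delete the file;
    # the new pool is now degraded but writable.
    zpool offline newpool /tmp/fake-disk.img
    rm /tmp/fake-disk.img

    # Copy the data over; one common approach is a recursive snapshot plus send/receive.
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs receive -F newpool

    # Destroy the old mirror and use its remaining disk to heal the raidz1.
    # (If the placeholder path no longer resolves, use the GUID shown by zpool status.)
    zpool destroy oldpool
    zpool replace newpool /tmp/fake-disk.img /dev/sdb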
🌐
GitHub
github.com › openzfs › zfs › discussions › 15232
RAIDZ Expansion feature discussion · openzfs/zfs · Discussion #15232
Raidz1 with larger drives has a tendency to break during resilver. ... With my n00b level of understanding, you did a great job of explaining. With that said, I think I lean towards backing up my 2-disk mirrored vdev, then creating a brand new vdev with my now 3 disks, and then copying the data back into the new vdev?
🌐
FreeBSD Foundation
freebsdfoundation.org › home › blog › openzfs raid-z expansion: a new era in storage flexibility
OpenZFS RAID-Z Expansion: A New Era in Storage Flexibility | FreeBSD Foundation
February 24, 2025 - Better Resource Utilization: Previously, expanding RAID-Z required adding an entirely new vdev, often leading to inefficient use of older pools. Now, storage scales dynamically. Minimal Downtime: Expansion occurs while the system remains operational. This demonstration of RAID-Z expansion uses a recent FreeBSD 15-CURRENT snapshot: bsd_FreeBSD-15.0-CURRENT-amd64-zfs-20250116-054c5ddf587a-274800.raw from 2025/01/16, which includes all the latest OpenZFS 2.3 goodies.
🌐
Louwrentius
louwrentius.com › zfs-raidz-expansion-is-awesome-but-has-a-small-caveat.html
ZFS RAIDZ expansion is awesome but has a small caveat
You can't just add a single disk to the existing 3-disk RAIDZ vdev to create a 4-disk RAIDZ vdev because vdevs can't be expanded. The impact of this limitation is that you have to buy all storage upfront even if you don't need the space for years to come. Otherwise, by expanding with additional ...
🌐
Reddit
reddit.com › r/zfs › zfs multiple vdev pool expansion
r/zfs on Reddit: ZFS multiple vdev pool expansion
April 4, 2025 -

Hi guys! I've almost finished my home NAS and am now choosing the best topology for the main data pool. For now I have 4 HDDs, 10 TB each. For the moment, raidz1 with a single vdev seems the best choice, but considering the possibility of future storage expansion and the ability to expand the pool, I'm also considering a 2-vdev raidz1 configuration. If I understand correctly, this gives more IOPS/write speed. So my questions on the matter are:

  1. If I now build a raidz1 pool with 2 vdevs, each 2 disks wide (getting around 17.5 TiB of capacity), and somewhere in the future I buy 2 more drives of the same capacity, will I be able to expand each vdev to a width of 3, getting about 36 TiB?

  2. If the answer to the first question is “Yes, my dude”, will this also work when adding only one drive to one of the vdevs in the pool, so that one of them is 3 disks wide and the other is 2? If not, is there another topology that allows something like that? A stripe of vdevs?

I've used ZFS for some time, but only as a simple raidz1, so I haven't accumulated much practical knowledge. The host system is TrueNAS, if that's important.

🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
[SOLVED] - Adding more disks to an already exist ZFS RaidZ1 | Proxmox Support Forum
May 4, 2021 - Noted! ... Replace 1 old disk with a new, empty disk. Use the old disk with the 2 remaining new disks. ... Hi again. NOW: raidz1 with these HDDs: N1, N2, N3. You have 3 new HDDs: X1, X2, X3. Before you start, ...
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox backup server › proxmox backup: installation and configuration
[SOLVED] - ZFS raidz1: Expanding not possible? | Proxmox Support Forum
May 15, 2023 - Usually you would either: 1.) add another vdev (best case would be an identical one but I guess you don't want to buy 8 more U.2 disks...) and stripe it, so you get no downtime 2.) migrate all data to another storage, destroy that pool, create ...
🌐
DiskInternals
diskinternals.com › home › raid recovery › zfs raid expansion: how to expand raidz and zfs pools safely
ZFS RAID Expansion: How to Expand RAIDZ and ZFS Pools Safely | DiskInternals
November 11, 2025 - However, in ZFS, the RAIDZ structure is more fixed, and expanding it requires either adding a new vdev (a collection of disks) to the pool or replacing each disk with larger ones, which can be time-consuming and limits flexibility.
🌐
Reddit
reddit.com › r/zfs › pooling individual zfs drives/vdevs later into raidz pool?
r/zfs on Reddit: Pooling individual ZFS drives/vdevs later into RAIDZ pool?
November 17, 2023 -

Scenario... (I assume 3+ drives are going to be required for this scenario, not 2)...

I have 3 individual ZFS drives/partitions, which would be three vdevs.

Is it possible to pool the three drives/vdevs into a single pool without destroying the data?

I'm thinking of beginning my ZFS journey one step at a time, and wanted to start with individual ZFS drives/vdevs first. Then later, as I know more, plan more, and buy more drives, I'd create the ZFS pool for full RAIDZ protection.

If that's not possible now, will the upcoming RAIDZ expansion be able to merge drives into a new pool without data loss?

Or is ZFS storage planning set in stone? And I'd better know what I want, because there's no way to change the storage?

🌐
TrueNAS Community
truenas.com › forums › archives › freenas (legacy software releases) › freenas help & support › storage
Expanding RaidZ1 with single disk | TrueNAS Community
August 10, 2018 - With traditional mirrors (2 disks per vdev) that's correct. That's why I said that a ZFS mirror isn't limited to 2 disks. Depending on your risk tolerance, one might look at 3- or 4-way mirrors.
🌐
TrueNAS Community
forums.truenas.com › truenas general
Expanding ZFS Storage with More Drives - TrueNAS General - TrueNAS Community Forums
February 22, 2025 - I know there are plenty of threads about this subject, but I am curious about one detail. I am running ElectricEel-24.10.2 and I have updated the ZFS storage pool to the latest version. By all accounts I believe my system can support expanding drives based on what I read in the release notes ...
🌐
Reddit
reddit.com › r/proxmox › zfs raid extension
r/Proxmox on Reddit: ZFS Raid extension
December 19, 2024 -

So I am currently building my Ugreen NASync DXP4800 (4 possible bays), and while I have my SSDs ready, I still need some HDDs. However, my setup was already quite expensive, so I want to start with 2 or 3 HDDs and add the rest later. How easy is it to add a drive later to an existing pool, and is this already possible in Proxmox? Which RAID level would you recommend?

🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
May I successively add a new hdd to an existing pool? | Proxmox Support Forum
June 13, 2024 - If you have a 3-disk raidz1 pool, which is not recommended anymore if your disks are over 2TB, to expand it you would need to add another 3-disk vdev. RAIDZ2 is the standard, since the odds of a 2nd disk failing during a resilver are not zero.
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
ZFS VDEV Expansion | Proxmox Support Forum
September 23, 2022 - Yes, there isn't an "extension" of a ZFS vdev like you get with regular RAID by just adding disks and growing. The only way to grow now is to replace the current disks with bigger ones, or add a new vdev.
🌐
FreeBSD
forums.freebsd.org › base system › storage
ZFS - vdev Expansion in zfs 2.3.0 | The FreeBSD Forums
December 4, 2024 - Adding a single drive to an existing vdev can (should) be done by adding a drive of the same size and will roughly expand pool capacity by the size of the drive once zfs housekeeping is complete.
Top answer (1 of 3, score 22)

There are basically two ways of growing a ZFS pool.

Add more vdevs

This is what user1133275 is suggesting in their answer. It's done with zpool add (which has basically the same syntax as zpool create does for specifying storage), and it works well for what it does.
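For instance, a minimal sketch of adding a second raidz1 vdev to a hypothetical pool named tank (the pool and device names are placeholders, not taken from the answer):

    # Add a new three-disk raidz1 vdev alongside the existing one.
    zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf

    # Confirm the pool now stripes across both vdevs.
    zpool status tank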

ZFS won't rebalance your already-stored data automatically, but it will preferentially write new data to the new vdev until it has about the same usage as the existing one(s).

Once you've added a vdev to a pool, you basically cannot remove it without recreating the pool from scratch.

All vdevs in a pool need to be above their respective redundancy thresholds for the pool to be importable. In other words, every vdev needs to be at least DEGRADED for the pool to function.

Replace disks with larger ones

This is what you're discussing in your question. It's the normal way of growing a ZFS pool when you have a pool layout that you are happy with.

To replace a device with a new one, the new device needs to be at least as large as the old one.

Operationally, you'd hook up the new disk along with the old, and then zpool replace the old disk with the new one. (This creates a temporary replacing device which becomes a parent to the old and new disk; when the resilver completes, the replacing device is removed from the device tree and it looks like the new device was there all along.) Once the resilver completes, the old disk can be removed from the system.

Once all disks in a vdev are replaced by larger ones, you can expand the pool by running zpool online -e or by having the autoexpand property set to on (though I wouldn't really recommend the latter; pool expansion should be a conscious decision).
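As a rough sketch of that workflow, assuming a pool named tank whose 4 TB members are being swapped for 8 TB ones (all names here are placeholders):

    # Replace one old disk at a time, letting each resilver finish
    # before starting the next.
    zpool replace tank /dev/sdb /dev/sde
    zpool status tank          # wait for the resilver to complete

    # ...repeat zpool replace for the remaining members of the vdev...

    # Once every disk in the vdev is larger, claim the new space explicitly
    zpool online -e tank /dev/sde /dev/sdf /dev/sdg
    # or have it happen automatically by setting the pool property
    zpool set autoexpand=on tank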

So which way is better?

That basically depends on your pool. As mentioned, the downside to having multiple vdevs is that they all need to be functional, so by adding vdevs you are actually, in a sense, reducing your safety margin. The upside, though, is that it's much easier to grow the pool piecemeal. Replacing devices in-place is basically the opposite; you don't need to keep as many vdevs functioning, but it isn't as easy to grow a pool piecemeal.

For me, frankly, assuming for a second that you're using rotational hard disks (since this seems like bulk storage), 20 TB is still well within reason for a single-vdev pool. My suggestion in your situation would be to get six drives of the 8 TB variety and set those up in a single raidz2 vdev. Doing so gives you a net storage capacity of around 32 TB, leaving you with about 35% free initially, and the ability to lose any two drives before any of your data is at significant risk. You could also consider running eight 6 TB drives for a net storage capacity of around 36 TB and starting out at 45% free. (I'd consider 6-8 drives to be slightly on the large end for raidz1, but fine for raidz2.)

Then plan to replace those drives either on a 4-5 year schedule (due to wear) or whenever the pool goes above about 80% full (because ZFS is much, much happier when it has good headroom). If your figures are accurate, you should be replacing those drives due to wear well before your pool starts getting full, while still allowing for a reasonable amount of unexpected growth in storage needs. When you replace the drives, you can decide whether you're happy with the pool size you've got based on then-current usage, or if you want to get larger drives and expand the pool.
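A minimal sketch of the suggested layout, with a hypothetical pool name and placeholder device names (none of which come from the answer):

    # Six 8 TB disks in one raidz2 vdev: two disks' worth of parity, four of data,
    # so roughly 4 x 8 TB = 32 TB usable before metadata overhead.
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # Check the reported size and free space.
    zpool list tank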

Answer 2 of 3 (score 15)

In addition to the options in the excellent answer above, there will soon be another option.

The OpenZFS project (ZFS on Linux, ZFS on FreeBSD) is working on a feature to allow the addition of new physical devices to existing RAID-Z vdevs. This will allow, for instance, the expansion of a 6-drive RAID-Z2 vdev into a 7-drive RAID-Z2 vdev. This will happen while the filesystem is online, and will be repeatable once the expansion is complete (e.g., 7-drive vdev → 8-drive vdev).

  • As of September 2020, this feature is still in development: https://github.com/openzfs/zfs/pull/8853

  • As of October 2022, this feature is still in development: https://github.com/openzfs/zfs/pull/12225

  • As of July 2023, this feature is still in development: https://github.com/openzfs/zfs/pull/15022

  • As of November 2023, this feature was merged in main and is scheduled for release in OpenZFS 2.3: https://github.com/openzfs/zfs/pull/15022
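With the feature now released in OpenZFS 2.3, the expansion is performed by attaching a single disk to the raidz vdev itself. A minimal sketch, assuming a pool named tank whose raidz vdev is labeled raidz1-0 and a placeholder device name:

    # Requires OpenZFS 2.3+ with the raidz_expansion feature enabled on the pool.
    zpool attach tank raidz1-0 /dev/sde

    # The expansion runs while the pool stays online; progress shows up here.
    zpool status tank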

🌐
Ars Technica
arstechnica.com › gadgets › 2021 › 06 › raidz-expansion-code-lands-in-openzfs-master
ZFS fans, rejoice—RAIDz expansion will be a thing very soon - Ars Technica
June 15, 2021 - OpenZFS founding developer Matthew ... the size of a single RAIDz vdev. For example, you can use the new feature to turn a three-disk RAIDz1 into a four-, five-, or six-disk RAIDz1....
🌐
Reddit
reddit.com › r/zfs › is there a way to expand a zfs pool after creation?
r/zfs on Reddit: Is there a way to expand a zfs pool after creation?
March 10, 2021 - If he has a pool of 3 single-drive vdevs... don't bother expanding the pool, it will be empty soon anyway... No. You can add a new vdev, but it also needs to be redundant. Ideally, if you start out with a three-disk RAIDz1, ...