The first arg to zpool remove is the pool name, not the vdev type. Then, if it isn't recognizing the name of the cache disk because that device is no longer present, you can try using the GUID listed by zpool status -g.

Answer from quartsize on discourse.practicalzfs.com
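
For illustration, a minimal sketch of that workflow; the pool name tank and the vdev GUID shown here are placeholders, not values from the thread:

  zpool status -g tank                      # list the pool's vdevs by GUID
  zpool remove tank 1234567890123456789     # remove the missing cache device by its GUID
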
🌐
Level1Techs
forum.level1techs.com › wikis & how-to guides
ZFS Guide for starters and advanced users. Concepts, pool config, tuning, troubleshooting - Wikis & How-to Guides - Level1Techs Forums
April 16, 2023 - With Wendell featuring ZFS and homeserver in quite a lot of videos on L1Techs, we as a community have regular forum threads dealing with homeserver and storage in all kinds of ways. The intention of this thread is to give an overview on what ZFS is, how to use it, why use it at all and how ...
🌐
Klara Systems
klarasystems.com › home › choosing the right zfs pool layout
Choosing the Right ZFS Pool Layout - Klara Systems
June 13, 2025 - Setting up a ZFS pool involves a number of permanent decisions that will affect the performance, cost, and reliability of your data storage systems, so you really want to understand all the options at your disposal for making the right choices from the beginning.
🌐
Oracle
docs.oracle.com › cd › E23823_01 › html › 819-5461 › zfspools-4.html
Recommended Storage Pool Practices - Oracle Solaris ZFS Administration Guide
Use whole disks to enable disk ... Creating pools on slices adds complexity to disk management and recovery. Use ZFS redundancy so that ZFS can repair data inconsistencies. ... Do not expand LUNs from extremely varied sizes, such as 128 MB to 2 TB, to keep optimal metaslab ...
🌐
High Availability
high-availability.com › docs › ZFS-Tuning-Guide
ZFS Tuning Recommendations | High Availability
The metadata is stored along with the actual data in the pools vdevs, meaning that whenever metadata is required ZFS must first read the metadata from those vdevs, followed by another read to get the actual data; also note that ZFS will have to scan multiple vdevs when searching/reading the ...
🌐
Readthedocs
openzfs.readthedocs.io › en › latest › performance-tuning.html
Performance tuning — openzfs latest documentation
In addition, a dedicated cache ... add poolname cache devicename. The cache device is managed by the Level 2 ARC (L2ARC) which scans entries that are next to be evicted and writes them to the cache device. The data stored in ARC and L2ARC can be controlled via the primarycache and secondarycache ZFS properties ...
🌐
Practical ZFS
discourse.practicalzfs.com › openzfs
ZFS pool setup and disk layout - OpenZFS - Practical ZFS

🌐
Medium
simeontrieu.medium.com › a-gentle-introduction-to-zfs-part-4-optimizing-and-maintaining-zfs-storage-pools-6d2def485cab
A Gentle Introduction to ZFS, Part 4: Optimizing and Maintaining a ZFS Storage Pool for NAS | by Simeon Trieu | Medium
December 26, 2021 - In the previous articles, we’ve built the ZFS kernel module to access the new dRAID vdev, setup and mounted our ZFS storage pool, then shared it with the network through NFS. Now, we will optimize
🌐
OpenZFS
openzfs.github.io › openzfs-docs › Performance and Tuning › Workload Tuning.html
Workload Tuning — OpenZFS documentation
Since many devices misreport their sector sizes and ZFS relies on the block device layer for this information, each platform has developed different workarounds. The platform-specific methods are as follows: ... -o ashift= is convenient, but it is flawed in that the creation of pools containing top level vdevs that have multiple optimal ...
🌐
TrueNAS
truenas.com › resources › fundamentals
Picking a ZFS Pool Layout to Optimize Performance | TrueNAS Community
October 3, 2018 - Part 1 of the blog post is here, covering basics, striped pools, and mirrored vdevs: https://www.ixsystems.com/blog/zfs-pool-performance-1/ Part 2 covers RAID-Z and some example workload scenarios...
🌐
Super User
superuser.com › questions › 1156592 › optimal-zfs-pool-configuration-for-a-home-nas
linux - Optimal ZFS Pool configuration for a home NAS - Super User

Most of your points are correct, so I'll just focus on the rest:

  1. An SLOG device for the ZIL only helps with small synced writes, so it is pretty much mandatory if you want to store virtual machines on the pool and pretty useless in most other home use cases, especially backups and streaming media. Since you can always add and remove it later, you should start without one and add it only if it turns out to be necessary (see the command sketch after this list).

  2. L2ARC can increase your read performance, but it is slower than RAM, needs extra RAM for its own bookkeeping, and only helps if the same data is read repeatedly. Again, bad for streaming a whole movie or music, but good if you host a heavily accessed website or have hundreds of users on file shares. The rule of thumb is: first max out your RAM (depending on your board, most likely 32, 64, 128 or 256 GB), then think about L2ARC.

  3. ZIL and L2ARC on the same device is usually not a good idea, as their needs are directly opposed:

    • The ZIL is written to constantly by small random synced IO (large and sequential IO bypasses it, and async IO of any kind does not use it at all), which means you want an SSD with very low write latency (Intel is the only vendor I've found that specifies this characteristic even for its cheaper consumer SSDs), acceptable write IOPS (nearly all SSDs are sufficient here), and a high TBW rating so the SSD does not die from write exhaustion within a year. For size, < 10 GB is usually enough for small systems. Mirroring is preferred, to prevent data loss if the power and the SSD fail at the same time.
    • L2ARC, on the other hand, needs to be several times larger (> 64 GB is common, depending on RAM), is seldom written to but read often, so you want high read IOPS and acceptable read latency and don't care about TBW that much. Mirroring is a waste of money in most cases, as it is only a cache device and can be lost and recreated without problems.
  4. A single root pool is of course possible, but you save yourself some headaches if you mirror it. As it usually is not hit that much, two slow disks or even USB devices (every mainboard has at least two USB ports as internal headers) are perfectly fine for home use, and you gain another usable disk slot. Especially when running without a UPS, two rpool devices really give you peace of mind.

  5. Your pool size is correct, but it may be an option to go for 12 disks with either 2x Z2 (6 disks each) or 1x Z3 (12 disks). As a rule of thumb, when using Z1/2/3 you should populate all your available disk slots from the start, because while upgrading disk sizes one by one is trivial, adding more disks to an existing vdev later on is impossible.

  6. I don't know about Linux (it should work fine), but have you looked at other illumos-based systems? OmniOS is small, simple and stable and can be customized to your needs (it also includes KVM and LX-branded zones). SmartOS is similar, but focused heavily on zones (containers), so you can run all your services independently of one another and can even run Linux guests in those zones for the few services that are not available on Solaris. There are also Delphix and NexentaStor Community Edition, but I have not tested them.
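
To make points 1 and 2 concrete, here is a rough sketch of adding (and later removing) an SLOG and an L2ARC device; the pool name tank and the device paths are placeholders:

  # add a mirrored SLOG (log vdev) after the fact
  zpool add tank log mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b

  # add a single L2ARC (cache) device
  zpool add tank cache /dev/disk/by-id/ssd-c

  # both can be removed again without touching the data vdevs
  # (for a mirrored log vdev, use the vdev name shown by zpool status, e.g. mirror-1)
  zpool remove tank mirror-1
  zpool remove tank /dev/disk/by-id/ssd-c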


My personal suggestion:

  • Use whatever operating system you are comfortable with (if you like the stability of Solaris, try OmniOS; if you want virtualization, try SmartOS)
  • Use a mirrored rpool on USB disks (USB3 HDDs or USB3 sticks with SLC memory) to free up more slots for data disks
  • Use 6 ports from the mainboard and 6 ports from the HBA card, so you can lose a controller and your system keeps running
  • 2 free ports can be used in the future for SLOG or L2ARC devices depending on your needs
  • Layout (12 is a very nice number; 16 would be the next best because most controllers have 4 or 8 ports) - see the creation sketch after this list:
    • If you need maximum performance: 6x 2-way mirrors, each with one disk on each controller
    • If you need maximum resilience: 4x 3-way mirrors, 1x RAID-Z3 (12 disks) or 2x RAID-Z2 (6 disks each)
    • If you need maximum space: 1x RAID-Z2 (12 disks)
  • Maximize RAM first, then anything else
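
As a rough sketch only (pool name and disk paths are placeholders), the mirrored layout would be created along these lines, pairing one disk from each controller per mirror:

  # 6x 2-way mirrors; only the first three pairs are shown, the rest follow the same pattern
  zpool create tank \
      mirror /dev/disk/by-id/ctrlA-disk1 /dev/disk/by-id/ctrlB-disk1 \
      mirror /dev/disk/by-id/ctrlA-disk2 /dev/disk/by-id/ctrlB-disk2 \
      mirror /dev/disk/by-id/ctrlA-disk3 /dev/disk/by-id/ctrlB-disk3

  # 2x RAID-Z2 alternative (6 disks per vdev)
  zpool create tank \
      raidz2 /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 /dev/disk/by-id/disk3 \
             /dev/disk/by-id/disk4 /dev/disk/by-id/disk5 /dev/disk/by-id/disk6 \
      raidz2 /dev/disk/by-id/disk7 /dev/disk/by-id/disk8 /dev/disk/by-id/disk9 \
             /dev/disk/by-id/disk10 /dev/disk/by-id/disk11 /dev/disk/by-id/disk12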

Regarding your follow-up-questions from comments:

I like the idea of using two mirrored USB3 sticks for the system, but is it bootable?

USB sticks are essentially the same as USB disks, so you can boot from them without problems (except on very old mainboards, but everything from the last ten years should be fine). Some systems like SmartOS or ESXi even advertise it as best practice.

Some on the other hand (like FreeNAS) do not recommend it, because they are not customized for USB sticks and therefore constantly write to the disks and wear out cheaper sticks pretty fast (this is why so many Raspberry Pis fail early - the Linux system thinks it has an indestructible HDD and not some 5 EUR USB stick or SD card that is designed for infrequent writes like from a digital camera).

With SLC sticks (or real SSDs) you do not have these problems. Of course, they are more expensive, about 30 to 40 EUR for 16 GB sticks (MachExtreme MX-ES are about the only worthwhile things in this sector). SSDs can be cheaper (30 EUR for 32 GB), but you would need a USB adapter and they take up more space. You can use them outside of the case for quick backups/swaps or inside for access control (read: if you have children who like shiny toys).

It seems I don't need log/cache disks, should I use all 14 ports for disks?

Depends on your needs and budget. If you use mirrors, you are flexible and can add more of them later. If you use RAID-Zn, I would settle on the final amount before creating the pool, because you cannot easily add more. On the other hand, you might want to keep some ports free for backup (using slot-in caddies for 3.5" drives, for example) or for cache purposes if your needs change. It is up to you whether you value space more than flexibility (and it depends on how many expansion cards your hardware supports).
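
For illustration, growing a mirrored pool with another vdev later on is a one-liner; tank and the device paths are placeholders:

  zpool add tank mirror /dev/disk/by-id/new-disk1 /dev/disk/by-id/new-disk2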

Something like 2x7 RAID-Z2 striped together? If I do this and the controller fails, the pool will fail, but if I replace the controller with an identical one, will it run again?

Yes, and it works even if you use another controller, because everything is done in software. You just need to add enough disks so that each vdev works, and you can bring the pool back online.

In your case, if your 8-port controller fails, you need to reattach at least five (7 - 2) of the seven disks of the vdev that went down, connected in any way you like, for example with a 4-port controller and a single USB-attached disk (not recommended, just to show that the type of connection is basically irrelevant). The 8th disk on that controller is expendable anyway, because its vdev still has its other six disks running.

Usually you would just replace the controller with the same model, because you know that configuration works without problems and the performance is sufficient (with 8 disks per controller, the price of the controller itself is pretty small in relation anyway).
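
As a rough sketch (pool name is a placeholder), if the pool does not come back on its own after the disks are reattached, a manual import and a status check are usually all that is needed:

  zpool import tank        # re-import once enough disks of every vdev are visible again
  zpool status -v tank     # verify the vdevs are ONLINE (or at worst DEGRADED)
  zpool clear tank         # clear transient errors left over from the controller failure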

And if I do this, can I grow one part of the pool without touching the other one?

You can grow the vdevs separately, but only while the pool itself is online (meaning after the controller has been replaced and any errors have been resilvered). Take into consideration that if you grow it "unbalanced", your existing data will not be rebalanced later on when the other vdev is expanded; only new and modified blocks will spread across both (copy-on-write does not reorder data on read). This should be no problem for your performance needs, but I thought I'd mention it for completeness.
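
As a rough sketch (pool name and disk paths are placeholders), growing one RAID-Z vdev by swapping its disks for larger ones, one at a time, looks like this:

  zpool set autoexpand=on tank     # let the pool grow once every disk in the vdev is larger
  zpool replace tank /dev/disk/by-id/old-disk1 /dev/disk/by-id/bigger-disk1
  # wait for the resilver to finish (check zpool status), then repeat for the vdev's remaining disks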

Answer from user121391 on superuser.com
🌐
Ars Technica
arstechnica.com › information-technology › 2020 › 05 › zfs-101-understanding-zfs-storage-and-performance
ZFS 101—Understanding ZFS storage and performance - Ars Technica
May 8, 2020 - In a ZFS pool, all data—including metadata—is stored in blocks. The maximum size of a block is defined for each dataset in the recordsize property. Recordsize is mutable, but changing recordsize won't change the size or layout of any blocks which have already been written to the dataset—only ...
🌐
Reddit
reddit.com › r › freenas › comments › 9kulbi › optimizing_zfs_storage_pool_layout_for_performance
r/freenas - Optimizing ZFS storage pool layout for performance
45.5k members in the freenas community. FreeNAS is now TrueNAS. For more information, use the navigation tabs on this sub and don't forget to join …
🌐
JRS Systems
jrs-s.net › 2018 › 08 › 17 › zfs-tuning-cheat-sheet
ZFS tuning cheat sheet – JRS Systems: the blog
Quick and dirty cheat sheet for ... new ZFS pool. Here are all the settings you’ll want to think about, and the values I think you’ll probably want to use. I am not generally a fan of tuning things unless you need to, but unfortunately a lot of the ZFS defaults aren’t optimal for most ...
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
ZFS Pool Optimization | Proxmox Support Forum
November 29, 2022 - Hi, I've a PVE box setup with two zfs pools : root@pve:~# zpool status -v ONE_Pool pool: ONE_Pool state: ONLINE scan: scrub in progress since Tue Nov 29 11:48:09 2022 194G scanned at 6.91G/s, 2.67M issued at 97.7K/s, 948G total 0B repaired, 0.00% done, no estimated completion time...
🌐
45HomeLab
forum.45homelab.com › general
Pool Layout for 15 Drives - General - 45HomeLab Forum
October 29, 2023 - Okay don’t want this to be a controversial post as I know there are MULTIPLE ways to do things. I’m quite new to ZFS and looking for an optimal way to setup the ZPool on the HL15. Using a ZFS Capacity Calculator (OpenZFS Capacity Calculator) and watching a few ZFS videos on YouTube I think ...
🌐
Reddit
reddit.com › r/homelab › how should i layout my zfs pool?
r/homelab on Reddit: How should i layout my zfs pool?

2-wide stripe, 5-disk raidz2 vdevs. 108 TB capacity, lots of speed, okay upgradability.

🌐
DiskInternals
diskinternals.com › home › raid recovery › zfs raid expansion: how to expand raidz and zfs pools safely
ZFS RAID Expansion: How to Expand RAIDZ and ZFS Pools Safely | DiskInternals
June 3, 2025 - In this article, we’ll walk you through the essential steps and best practices for expanding ZFS pools, ensuring data integrity and system stability. Whether you're adding disks to accommodate greater capacity or optimizing performance, this guide is designed to help you navigate the complexities ...
🌐
Proxmox VE
pve.proxmox.com › wiki › ZFS_on_Linux
ZFS on Linux - Proxmox VE
There are a few factors to take into consideration when choosing the layout of a ZFS pool. The basic building block of a ZFS pool is the virtual device, or vdev. All vdevs in a pool are used equally and the data is striped among them (RAID0).
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
ZFS pool layout and limiting zfs cache size | Proxmox Support Forum
December 2, 2021 - Hi All, I have a server with `192G` RAM. I boot it of off a single SSD. I created a ZFS pool composed off 5 x 800GB enterprise SSDs in a `raidz` configuration. This pool is created just to create VMs/containers only and is used for local storage. Now when I checked the zfs summary I see this...
🌐
Proxmox
forum.proxmox.com › home › forums › proxmox virtual environment › proxmox ve: installation and configuration
[SOLVED] - Maximizing ZFS performance | Proxmox Support Forum
October 17, 2022 - TL;DR: RAM cache pros/cons, and how to? Or SSD cache pros/cons, and how to? Optimal choice? I don’t understand how to get more out of my array, and maybe someone can help me here, or point me in the right direction. I have a RAIDZ2 setup on 2 vdevs 6x4TB. All drives are WD Red 4TB (WD40EFZX)...