First arg to zpool remove is the pool name, not the vdev type. Then if it’s not recognizing the name of the cache disk because it isn’t present, you can try using the GUID listed by zpool status -g. Answer from quartsize on discourse.practicalzfs.com
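A rough illustration of that sequence (the pool name tank and the GUID are placeholders):

  zpool status -g tank                    # list vdev GUIDs instead of device names
  zpool remove tank 1234567890123456789   # remove the missing cache device by its GUID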
🌐
Klara Systems
klarasystems.com › home › choosing the right zfs pool layout
Choosing the Right ZFS Pool Layout - Klara Systems
November 16, 2025 - By walking the entire pool, verifying… ... ZFS is built to scale. With the right VDEV layout, adding VDEVs delivers more IOPS, quicker…
🌐
Level1Techs
forum.level1techs.com › wikis & how-to guides
ZFS Guide for starters and advanced users. Concepts, pool config, tuning, troubleshooting - Wikis & How-to Guides - Level1Techs Forums
April 16, 2023 - With Wendell featuring ZFS and homeserver in quite a lot of videos on L1Techs, we as a community have regular forum threads dealing with homeserver and storage in all kinds of ways. The intention of this thread is to give an overview on what ZFS is, how to use it, why use it at all and how to make the most out of your storage hardware as well as giving advice on using dedicated devices like CACHE or LOG to improve performance.
🌐
Oracle
docs.oracle.com › cd › E23823_01 › html › 819-5461 › zfspools-4.html
Recommended Storage Pool Practices - Oracle Solaris ZFS Administration Guide
Use whole disks to enable disk ... Creating pools on slices adds complexity to disk management and recovery. Use ZFS redundancy so that ZFS can repair data inconsistencies. ... Do not expand LUNs from extremely varied sizes, such as 128 MB to 2 TB, to keep optimal metaslab ...
🌐
Manuals+
manuals.plus › home › truenas › truenas zfs storage pool layout white paper
TrueNAS ZFS Storage Pool Layout White Paper
August 8, 2025 - This white paper delves into the critical aspects of designing an effective ZFS storage pool. It provides a comprehensive overview of various virtual device (vdev) configurations, including striping, mirroring, and RAIDZ levels (RAIDZ1, RAIDZ2, RAIDZ3). Understanding these layouts is crucial for administrators aiming to maximize system performance, storage space efficiency, and fault tolerance based on specific workload requirements. Metrics for quantifying pool performance: IOPS, streaming speeds, space efficiency, and fault tolerance.
🌐
TrueNAS Community
forums.truenas.com › truenas general
Need advice on optimal 10-drive ZFS pool layout for mixed media & personal data - TrueNAS General - TrueNAS Community Forums
August 9, 2025 - Hey all, I’m getting ready to rebuild my main ZFS pool in a 10-bay HL15, and I’m stuck deciding the best layout for my use case. This server is my all-in-one box and handles: Large media library (movies, TV shows, mus…
Top answer

Most of your points are correct, so I'll just focus on the rest:

  1. Using an SLOG device for the ZIL only helps with small synchronous writes, so it is pretty much mandatory if you want to store virtual machines on the pool and pretty useless in most other home use cases, especially backups and streaming media. As you can always add it and remove it later on, you should start without one and add it only if necessary (see the command sketch after this list).

  2. L2ARC can increase your read performance, but it is slower than RAM, needs extra RAM for its headers, and only helps if the same data is read repeatedly. Again, bad for streaming a whole movie or music collection, but good if you host a heavily accessed website or have hundreds of users accessing file shares. The rule of thumb is: first max out your RAM (depending on your board, most likely 32, 64, 128 or 256 GB), then think about L2ARC.

  3. Putting the ZIL (SLOG) and L2ARC on the same device is usually not a good idea, as their needs are directly opposed:

    • ZIL is written to constantly for small random synced IO (large and sequential IO bypasses it, and async IO of any kind does not use it at all), which means you want an SSD with very low write latency (Intel is the only vendor I've found that specifies this characteristic even for the cheaper consumer SSDs), acceptable write IOPS (nearly all SSDs are sufficient here), and a high TBW rating so your SSD does not wear out within a year. For size, < 10 GB is usually enough for small systems. Mirroring is preferred to prevent data loss if power and the SSD fail at the same time.
    • L2ARC, on the other hand, needs to be several times larger (> 64 GB is common, depending on RAM), is seldom written to but read often, so you want high read IOPS, acceptable read latency, and don't care about TBW that much. Mirroring is a waste of money in most cases, as it is only a cache device and can be lost and recreated without problems.
  4. A single root pool is of course possible, but you save yourself some headaches if you mirror it. As it usually isn't hit that much, two slow disks or even USB devices (most mainboards have at least two internal USB headers) are perfectly fine for home use, and you gain another usable disk slot. Especially when running without a UPS, two rpool devices really give you peace of mind.

  5. Your pool size is correct, but it may be an option to go for 12 disks with either 2x Z2 (6 disks each) or 1x Z3 (12 disks). As a rule of thumb, when using Z1/2/3 you should populate all your available disk slots from the start, because while upgrading disk sizes one by one is trivial, adding more disks to an existing RAID-Z vdev later on is not possible.

  6. I don't know about Linux (it should work fine), but have you looked at other illumos-based systems? OmniOS is small, simple and stable and can be customized to your needs (it also includes KVM and LX-branded zones). SmartOS is similar, but focused heavily on zones (containers), so you can run all your services independently of one another and can even run Linux guests in those zones for the few services that are not available on Solaris. There are also Delphix and NexentaStor Community Edition, but I have not tested them.
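A quick sketch of adding and later removing an SLOG or L2ARC device on an existing pool (the pool name tank and the NVMe device names are placeholders):

  zpool add tank log mirror nvme0n1 nvme1n1   # mirrored SLOG for synchronous writes
  zpool add tank cache nvme2n1                # single L2ARC device
  zpool remove tank nvme2n1                   # log and cache vdevs can be removed again at any time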


My personal suggestion:

  • Use whatever operating system you are comfortable with (if you like the stability of Solaris, try OmniOS; if you want virtualization, try SmartOS)
  • Use a mirrored rpool on USB disks (USB3 HDDs or USB3 sticks with SLC memory) to free up more slots for data disks
  • Use 6 ports from the mainboard and 6 ports from the HBA card so you can lose a controller and your system keeps running
  • 2 free ports can be used in the future for SLOG or L2ARC devices depending on your needs
  • Layout (12 is a very nice number, 16 would be the next best because most controllers have 4 or 8 ports; rough zpool create commands are sketched after this list):
    • If you need maximum performance: 6x2 mirrors, each split across both controllers
    • If you need maximum resilience: 4x3 mirrors or 1x RAID Z3(12) or 2x RAID Z2(6)
    • If you need maximum space: 1x RAID Z2(12)
  • Maximize RAM first, then anything else
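To make the layout options above concrete, here are rough zpool create sketches for a 12-disk pool; the pool name tank and the device names da0 ... da11 are placeholders, and on a real system you would use /dev/disk/by-id paths:

  # maximum space: one 12-disk RAID-Z2
  zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
  # balanced: two 6-disk RAID-Z2 vdevs striped together
  zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11
  # maximum performance: six 2-way mirrors, each split across the two controllers
  zpool create -o ashift=12 tank mirror da0 da6 mirror da1 da7 mirror da2 da8 mirror da3 da9 mirror da4 da10 mirror da5 da11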

Regarding your follow-up questions from the comments:

I like the idea of using two mirrored USB3 sticks for the system, but is it bootable?

USB sticks are essentially the same as USB disks, so you can boot from them without problems (except on very old mainboards, but everything from the last ten years should be fine). Some systems like SmartOS or ESXi even advertise it as best practice.

Others (like FreeNAS), on the other hand, do not recommend it, because they are not customized for USB sticks and therefore write to the boot device constantly, wearing out cheaper sticks pretty fast (this is why so many Raspberry Pis fail early: the Linux system thinks it has an indestructible HDD and not some 5 EUR USB stick or SD card that was designed for infrequent writes, like from a digital camera).

With SLC sticks (or real SSDs), you do not have these problems. Of course, they are more expensive, about 30 to 40 EUR for 16 GB sticks (MachExtreme MX-ES are about the only worthwhile things in this sector). SSDs can be cheaper (30 EUR for 32 GB), but you would need a USB adapter and they take up more space. You can use them outside the case for quick backups/swaps or inside for access control (read: if you have children who like shiny toys).

It seems I don't need log/cache disks, should I use all 14 ports for disks?

Depends on your needs and budget. If you use mirrors, you are free to add more vdevs later. If you use RAID-Zn, I would set the final amount before creating the pool, because you cannot easily add more. On the other hand, you might want to keep some ports free for backup (using slot-in caddies for 3.5" drives, for example) or for cache purposes if your needs change. It is up to you whether you value space more than flexibility (and it depends on how many expansion cards your hardware supports).
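If you start with mirrors and later decide to fill a free port pair, adding another vdev is a one-liner; a sketch with placeholder names:

  zpool add tank mirror /dev/disk/by-id/ata-DISK13 /dev/disk/by-id/ata-DISK14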

Something like 2x7 RAID-Z2 striped together? If I do this and a controller fails, the pool will be faulted, but if I replace the controller with an identical one, will it run again?

Yes, and it works even if you use another controller, because everything is done in software. You just need to add enough disks so that each vdev works, and you can bring the pool back online.

In your case, if your 8-port controller fails, you need to reattach five (7 - 2) of those disks to the system in any way (the 8th is expendable anyway, because the other vdev still has its six disks running), for example with a 4-port controller and a single USB disk (not recommended, just to show that the connection type is basically irrelevant).

Usually you just replace the controller with the same model, because you know that configuration works without problems and performance is sufficient (with 8 disks per controller, the price of the controller itself is pretty small in relation anyway).
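A rough recovery sequence after swapping the controller, assuming the pool is named tank (placeholder) and all disks show up again:

  zpool import -d /dev/disk/by-id tank   # only needed if the pool went offline or the box was rebooted
  zpool clear tank                       # clear the errors left over from the outage
  zpool status -v tank                   # vdevs should return to ONLINE once any resilver finishes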

And if I do this, can I grow one part of the pool without touching the other one?

You can grow the vdevs separately, but only as long as the pool itself is online (meaning after replacement of the controller and resilvering of any errors). Take into consideration that if you grow it "unbalanced", your existing data will not be rebalanced later on when the other vdev is expanded; only new and modified blocks are spread across both (copy-on-write does not rewrite data on read). This should be no problem for your performance needs, but I thought I'd mention it for completeness.
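For completeness, growing one RAID-Z vdev in place is done by replacing its disks one by one with larger ones and letting each resilver finish; a sketch with placeholder disk IDs:

  zpool set autoexpand=on tank
  zpool replace tank ata-OLD_4TB_1 ata-NEW_8TB_1   # repeat for every disk in the vdev
  zpool status tank                                # wait for each resilver before the next replace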

🌐
High Availability
high-availability.com › docs › ZFS-Tuning-Guide
ZFS Tuning Recommendations | High Availability
In a shared storage cluster any pools utilising special devices must have those devices in the shared storage not held locally in the cluster nodes themselves. ... Additionally special devices may also be provisioned to store small files under a specified block size dictated by the ZFS property ...
🌐
TrueNAS
truenas.com › resources › fundamentals
Picking a ZFS Pool Layout to Optimize Performance | TrueNAS Community
October 3, 2018 - Part 1 of the blog post is here, covering basics, striped pools, and mirrored vdevs: https://www.ixsystems.com/blog/zfs-pool-performance-1/ Part 2 covers RAID-Z and some example workload scenarios...
🌐
Ixsystems
static.ixsystems.co › uploads › 2020 › 09 › ZFS_Storage_Pool_Layout_White_Paper_2020_WEB.pdf pdf
ZFS STORAGE POOL LAYOUT Storage and Servers Driven by Open Source
ZFS storage · pools are comprised of one or more virtual devices, or vdevs. Each vdev is comprised of one or more storage · providers, typically physical hard disks. All disk-level redundancy is configured at the vdev level. That is, the RAID · layout is set on each vdev as opposed to on ...
🌐
TrueNAS Community
forums.truenas.com › resources
ZFS Storage Pool Layout - Resources - TrueNAS Community Forums
April 16, 2024 - This resource was originally created by user: @Davvo on the TrueNAS Community Forums Archive. https://www.truenas.com/community/resources/zfs-storage-pool-layout.201/download [1] This amazing document, created by iXsystems in February 2022 as a “White Paper”, cleanly explains how to qualify pool performance touching briefly on how ZFS stores data and presents the advantages, performance and disadvantages of each pool layout (striped vdev, mirrored vdev, raidz vdev).
🌐
Unraid Docs
docs.unraid.net › unraid os › advanced configurations › optimize storage › zfs storage
zfs-storage | Unraid Docs
For even better performance, you can optimize within those ranges by choosing configurations where the number of data disks (total disks minus parity disks) is a power of 2 (e.g., 2, 4, 8, 16).
🌐
JRS Systems
jrs-s.net › 2018 › 08 › 17 › zfs-tuning-cheat-sheet
ZFS tuning cheat sheet – JRS Systems: the blog
Or in a separate dataset optimized for that. Excellent guide, I’ve come back to this in my bookmarks many times now. Do you have a best practice for volblocksize, perhaps in regards to proxmox? Sorry, zfs beginner here. So for a regular homelab hosting multiple personal file types (documents, jpg photos, movies) on spinning HDDs, would these tuning tips still apply? Creating a pool: zpool create -o ashift=12 tank mirror sdc sdd
🌐
OpenZFS
openzfs.github.io › openzfs-docs › Performance and Tuning › Workload Tuning.html
Workload Tuning — OpenZFS documentation
June 11, 2025 - Since many devices misreport their sector sizes and ZFS relies on the block device layer for this information, each platform has developed different workarounds. The platform-specific methods are as follows: ... -o ashift= is convenient, but it is flawed in that the creation of pools containing top level vdevs that have multiple optimal ...
🌐
Medium
simeontrieu.medium.com › a-gentle-introduction-to-zfs-part-4-optimizing-and-maintaining-zfs-storage-pools-6d2def485cab
A Gentle Introduction to ZFS, Part 4: Optimizing and Maintaining a ZFS Storage Pool for NAS | by Simeon Trieu | Medium
January 12, 2021 - In the previous articles, we’ve built the ZFS kernel module to access the new dRAID vdev, setup and mounted our ZFS storage pool, then shared it with the network through NFS. Now, we will optimize and maintain our ZFS storage pool, to both increase performance and to create safety from data loss.
🌐
Reddit
reddit.com › r/homeserver › optimal zfs pool configuration for home server with truenas scale?
r/HomeServer on Reddit: Optimal ZFS Pool Configuration for Home Server with TrueNAS Scale?
September 15, 2024 -

Hi everyone,

I'm setting up my first home server using TrueNAS Scale and I would like some feedback on my proposed ZFS pool configuration. Here's what I'm planning:

  1. Data Pool: 8x 4TB drives in RAIDZ2

    • Expected usable capacity: ~24TB (32TB total minus 8TB for parity?)

    • Redundancy: Tolerates up to 2 drive failures?

    • Purpose: Store media, files, and virtual machine data.

  2. Backup Pool (two possible options):

    • Option 1 - ZFS Mirror: 3x 16TB drives

      • Expected usable capacity: 16TB.

      • Redundancy: 1:1 mirror for data protection.

    • Option 2 - RAIDZ1: 3x 16TB drives

      • Expected usable capacity: ~32TB (2 drives usable, 1 for parity).

      • Redundancy: Tolerates 1 drive failure.

Does this setup make sense for maximizing both storage and redundancy, or would you suggest any changes for better performance and data protection, especially regarding the backup pool configuration?

Thanks in advance for your advice!

🌐
YouTube
youtube.com › watch
UNRAID ZFS Pools and Shares Performance Optimizing - YouTube
Unraid ZFS performance over SMB shares can really sail provided you follow certain steps and have the right hardware tuned up. I step you through the process...
Published August 23, 2023
🌐
45HomeLab
forum.45homelab.com › general
Pool Layout for 15 Drives - General - 45HomeLab Forum
October 29, 2023 - Okay don’t want this to be a controversial post as I know there are MULTIPLE ways to do things. I’m quite new to ZFS and looking for an optimal way to setup the ZPool on the HL15. Using a ZFS Capacity Calculator (OpenZ…
🌐
Techno Tim
technotim.com › posts › zfs-arc-tuning-truenas
Optimizing ZFS for Media, Apps, Databases, and Special VDEVs on TrueNAS SCALE | Techno Tim
November 29, 2025 - Over the last few weeks I completely re-tuned my TrueNAS SCALE ZFS layout for maximum performance around ARC, L2ARC, special VDEVs, recordsize, and how each dataset interacts with them. I’ve been chasing this idea of having one large hybrid pool with lots of spinning disks backed by fast ...
🌐
Oracle
docs.oracle.com › cd › E23824_01 › html › E24456 › storage-4.html
Recommended ZFS Storage Pool Practices - Transitioning From Oracle Solaris 10 to Oracle Solaris 11
For better performance, use individual disks or at least LUNs made up of just a few disks. By providing ZFS with more visibility into the LUN setup, ZFS is able to make better I/O scheduling decisions. Mirrored storage pools – Consume more disk space but generally perform better with small ...