The first argument to zpool remove is the pool name, not the vdev type. If zpool isn't recognizing the cache disk by name because the device is no longer present, you can try the GUID listed by zpool status -g instead.
Answer from quartsize on discourse.practicalzfs.com
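As a rough sketch of what that looks like, assuming a pool named tank and a cache SSD that was added as sdc (both names are placeholders, not taken from the original question):

    zpool remove cache sdc    # wrong: "cache" is treated as the pool name, so ZFS reports no such pool
    zpool remove tank sdc     # correct: pool name first, then the cache device to remove

If the cache device has failed or been pulled and ZFS can no longer resolve it by name, list the vdevs by GUID and remove it that way:

    zpool status -g tank
    zpool remove tank 1234567890123456789    # substitute the GUID shown for the cache device; this value is made up

The GUID in the last command is whatever zpool status -g prints next to the missing cache device in your own output.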