Sizing L2ARC effectively requires balancing cache capacity, system RAM, and workload characteristics. The key is understanding that L2ARC uses system RAM to index its contents, which directly reduces the memory available to the Adaptive Replacement Cache (ARC).

  • RAM Overhead Formula: every block cached in L2ARC consumes roughly 70 bytes of ARC header metadata in RAM. This is calculated as:
    (L2ARC size in KB) / (typical recordsize in KB) × 70 bytes = ARC header size in RAM.
    For example, a 400 GB L2ARC with a 64 KB recordsize requires ~440 MB of RAM for indexing; the same device cached at a 4 KB volblocksize (a typical VM workload) would need ~7 GB. A worked shell calculation appears after the Pro Tip below.

  • Performance Trade-off: L2ARC is only beneficial if your workload has high random read access to a hot working set that doesn’t fit in the ARC. If your ARC is already under pressure, adding L2ARC can hurt performance, because its header metadata consumes ARC memory that would otherwise hold hot data.

  • Guidelines:

    • Minimum RAM: Avoid L2ARC with less than 32 GB of RAM. For meaningful benefit, aim for 64 GB or more.

    • Size Ratio: A common rule of thumb is an L2ARC:ARC ratio of 4:1 to 5:1 (i.e., an L2ARC four to five times the ARC size), but even that is only safe with sufficient RAM.

    • Use Case: L2ARC shines for random read-heavy workloads (e.g., databases, VMs), not streaming or sequential I/O.

    • Device Choice: Use fast, enterprise-grade SSDs (preferably NVMe) for L2ARC. Avoid consumer drives due to write endurance and performance issues.

  • Best Practice: Start small (e.g., 120–240 GB), monitor arcstats for hit rates and memory usage, then scale incrementally. Never add L2ARC just because hardware is available — ensure it matches your actual workload.

💡 Pro Tip: Use cat /proc/spl/kstat/zfs/arcstats | grep l2_hdr_size (Linux) or sysctl kstat.zfs.misc.arcstats.l2_hdr_size (FreeBSD) to monitor actual RAM usage by L2ARC headers.
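
To make the header formula concrete, here is a minimal shell sketch using the example values from the first bullet (400 GB L2ARC, 64 KB recordsize; substitute your own pool's numbers). The kstat path is the standard OpenZFS-on-Linux location:

    # Expected ARC header overhead: (L2ARC size in KB / recordsize in KB) * 70 bytes
    l2arc_kb=$((400 * 1024 * 1024))     # 400 GB L2ARC, expressed in KB
    recordsize_kb=64                    # typical recordsize of the cached datasets
    echo "$(( l2arc_kb / recordsize_kb * 70 / 1024 / 1024 )) MB expected"   # ~437 MB

    # Compare against the live header size reported by the kernel:
    awk '/^l2_hdr_size/ { printf "%.0f MB actual\n", $3 / 1048576 }' /proc/spl/kstat/zfs/arcstats

If "actual" is far below "expected", the L2ARC simply has not filled yet; if it approaches "expected" on a RAM-constrained box, the headers are crowding out cached data.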

First, I really suggest you reconsider your layout for pools #2 and #3: a 3-way mirror is not going to give you low latency, nor high bandwidth. Rather than an expensive 1 TB NVMe disk for L2ARC (which, by the way, is unbalanced relative to the small 32 GB ARC), I would use more 7200 RPM disks in a RAID10 fashion, or even cheaper but reliable SSDs (e.g. Samsung 850 Pro/Evo or Crucial MX500).

At the very least, you could put all disks into a single RAID10 pool (with SSD L2ARC) and segment that single pool by means of multiple datasets.

That said, you can specify how ARC/L2ARC should be used on a dataset-by-dataset basis via the primarycache and secondarycache properties:

  • zfs set primarycache=none <dataset1> ; zfs set secondarycache=none <dataset1> disables all ARC/L2ARC caching for the dataset. You can also issue zfs set logbias=throughput <dataset1> to favor throughput over latency during write operations;
  • zfs set primarycache=metadata <dataset2> enables metadata-only caching for the second dataset. Please note that the L2ARC is fed from the ARC; this means that if the ARC is caching metadata only, the same will be true of the L2ARC;
  • leave the default ARC/L2ARC settings for the third dataset.

Finally, you can set your ZFS instance to use more than the default 50% of your RAM for ARC (look for zfs_arc_max in the module parameters man page; a sketch of the usual settings follows below).

Answer from shodanshok on serverfault.com
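
For reference, the per-dataset settings above can be verified with zfs get primarycache,secondarycache,logbias <dataset>, and the zfs_arc_max tunable is set as follows on Linux. This is a minimal sketch assuming a 48 GiB cap (51539607552 = 48 × 1024³ bytes; the paths are the standard OpenZFS-on-Linux locations):

    # Runtime change (takes effect immediately, lost at reboot):
    echo 51539607552 > /sys/module/zfs/parameters/zfs_arc_max

    # Persistent change, via /etc/modprobe.d/zfs.conf:
    options zfs zfs_arc_max=51539607552

On FreeBSD the equivalent knob is vfs.zfs.arc_max (see the Klara article further down).
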
Proxmox
ZFS L2ARC sizing and memory requirements | Proxmox Support Forum
September 17, 2015 - According to this ZFS on Linux issue, one guy with 4GB of ARC space has problems managing a 125GB L2ARC. https://github.com/zfsonlinux/zfs/issues/1420 There is of course plenty of info on the web; advice on the ARC:L2ARC ratio ranges from 1:5 to 1:40, and ...
Reddit
r/zfs on Reddit: Optimal size for SSD L2ARC
October 12, 2017 -

Hi,

Is there a rule of thumb for choosing the size of an SSD-backed L2ARC for an HDD-based RAID1 zpool (2x2TB)?

Thanks!

Top answer (1 of 5, score 4)

The "optimal" size will depend on whether you do a lot of random-access to the data or just stream it in large chunks, the relative cost of hard disk, SSD, and RAM, and how much money you are prepared to throw at the problem for diminishing returns. 2TB is small enough that it's plausible to go all-SSD and not need an L2ARC at all.

FreeNAS claims in https://www.ixsystems.com/documentation/freenas/11.2-U5/zfsprimer.html that "L2ARC should not be added to a system with less than 32 GiB of RAM, and the size of an L2ARC should not exceed ten times the amount of RAM." I'm not sure I agree with either of those "should not"s, but it's a useful starting point.

ZFS on Linux 0.8 also adds the "special allocation class", which allows you to create a hybrid pool where bulky files go onto HDD and everything else goes onto SSD. My limited initial testing suggests that this is much more effective at boosting performance than an L2ARC.
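
For reference, building the hybrid pool described here looks roughly like the sketch below; pool and device names are hypothetical, and OpenZFS 0.8 or later is required:

    # Add a mirrored special vdev; metadata (and optionally small blocks) land on SSD:
    zpool add tank special mirror /dev/sdc /dev/sdd
    # Route data blocks of 32K or smaller to the special vdev for one dataset:
    zfs set special_small_blocks=32K tank/data
    # Note: unlike a cache device, a special vdev is pool-critical; if it is lost,
    # the pool is lost, which is why it is mirrored here.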

Answer 2 of 5 (score 3)

You should not use L2ARC until the memory in your system is maxed out. Then, once it is maxed out if your working set of hot, random read data is still bigger than memory, but small enough to fit on an SSD... then consider L2ARC.

The problem w/ L2ARC is that each block cached uses up a fixed amount of memory. The more blocks, the more memory. So L2ARC can actually make things worse because SSDs are slower than memory and you're using up memory to have the L2ARC.

You almost certainly don't want L2ARC.

And honestly, w/ a pool that small... just get a pair of 2T SSDs instead.

TrueNAS Community
How large L2Arc for 64GB | TrueNAS Community
August 8, 2016 - I was able to Google this for you. Read below the relevant section from the ZFS Primer "As a general rule of thumb, an L2ARC should not be added to a system with less than 64 GB of RAM and the size of an L2ARC should not exceed 5x the amount of RAM. In some cases, it may be more efficient to ...
Klara Systems
OpenZFS: All About the Cache Vdev or L2ARC - Klara Systems
November 16, 2025 - With default values, ... from the tail (eviction) end of the ARC. If you increase l2arc_write_max to 16MiB, this doubles the size of the feed area as well, to 32MiB....
Reddit
r/zfs on Reddit: L2ARC Scoping -- How much ARC does L2ARC eat on average?
March 11, 2015 -

Sorry if this is a repeated question, but I couldn't find much with a search.

I like to think this question is pretty straightforward, and I'm not looking for an exact answer... just an "about" answer.

How much ARC is eaten by the L2ARC mapping when using an L2ARC device? I've heard it's around 400 bytes of ARC per block of L2ARC, but is that true (again, not looking for an exact figure, just an approximation)?

If 400 bytes of ARC per block of L2ARC is true, then my calculations say that if I use 128 kilobyte blocks, I would eat about 3.125 megabytes of ARC per 1GB of L2ARC. Likewise, using 64 kilobyte blocks, I would eat about 6.25 megabytes of ARC per 1GB of L2ARC. Lastly, using 16 kilobyte blocks, I would eat about 25 megabytes of ARC per 1GB of L2ARC.

My system currently has 12 5TB 7200RPM drives, and is about to be rebuilt as 2 6-drive raidz2 vdevs together in a pool (and I might double that to 24 drives in a single pool with 4 vdevs). I have 96GB of RAM and dual Xeon L5640s in the system... so it has decent hardware. I expect ~40TB of usable storage (I know it will actually be around 36TB, but that's fine), and I will likely limit my ARC to around 72-84GB (leaving some for the system, since I'm running ZFSonLinux and I've had less than ideal results with ZFSonLinux releasing RAM back when the system needs it, plus I will be running 2-3 Linux containers and Crashplan for backups).

My dataset is mostly WORM (Write Once, Read Many) type data, with 80% being large video files. I won't be doing dedupe, but will stick with either lz4 or gzip-6 compression. I want to use an Intel DC S3610 SSD for my L2ARC, but I'm not sure if 400GB is too large or not. I figure I will end up with 64 kilobyte blocks (or maybe even 128 kilobyte), so that would be like 2.5GB of lost ARC for a 400GB L2ARC. Does this seem right? If so, would I see much benefit going up to an 800GB L2ARC (single SSD drive)? I can comfortably give up 5GB of ARC for an 800GB L2ARC, but I'd rather not give up 10GB+ of ARC. If I go with an 800GB L2ARC drive, I'd probably drop to an Intel S3500 instead of the S3610.

Thoughts, advice? I'm also considering a 200GB Intel DC S3610 for a SLOG device (most access to the storage is via NFS).

Top answer (1 of 3, score 3)

You can calculate this. The formula is:

(L2ARC size in kilobytes) / (typical recordsize -- or volblocksize -- in kilobytes) * 70 bytes = ARC header size in RAM.

So let's take one of our modern ZS4-4 systems with four 1600GB L2ARC SSDs and plug in some values, assuming a 4k VM workload over iSCSI. 6400GB is 6,400,000,000,000 bytes, more or less:

6,400,000,000,000 / 4096 * 70 bytes = 109,375,000,000

That's around 100 gigabytes of RAM, just to store L2ARC headers on a ZS4-4. The important part, of course, is knowing what your typical recordsize/volblocksize is in order to determine header sizing.

I usually use 4k for "near-worst-case". In reality, most people use 8k, 16k, or 32k or even larger.

EDIT: Fixed my numbers; I was right about the conclusion (~100GB L2ARC headers), but several orders of magnitude off in my example numbers.
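
A one-line sanity check of the corrected arithmetic, using the same 6400 GB / 4 KiB figures:

    echo $(( 6400 * 1000**3 / 4096 * 70 ))   # 109375000000 bytes, i.e. ~102 GiB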

Answer 2 of 3 (score 2)

I'm going to pretend that your workload is mostly playing videos, since that is what you say 80% of your data is.

In that case, I can't imagine L2ARC helping any. Mostly, you're not going to be playing the same video over and over. And honestly, your 12x vdev can probably do this right now no problem. Your 4x 6x raidz2 would handle it fine too, as would a 24x raidz3 vdev I bet.

And for SLOG sizing, you want something like the async commit interval (~5s by default IIRC) × write speed worth of space. So if you're on a gigabit network (and that is where most of your writes come from), 100MB/s × 10s is only ~1G of SLOG. Maybe you do some disk-to-disk copies sometimes, so go 16x because why not, and you're still only talking 16G of SSD for SLOG. Double it and you're still only talking 32G. But you'd want it mirrored, and you'd want to make sure it has the features to survive a power failure (supercaps or whatever).

I tried both L2ARC and SLOG on one of my pools, but it just didn't have any meaningful impact so I pulled them out. For sure, some workloads will benefit greatly or even require it, but mine was not one of those use cases.

With 24 disks, have you considered a single raidz3 vdev instead of 4x 6-disk raidz2 vdevs? Maybe you know you'll need the I/O performance, but my 2x 12-disk raidz3 vdevs perform fine for all the video playing my server does. Enough that my next expansion will probably be 12x 8T SMR disks in a new "cold storage" pool, then I'll convert my 2x 12x 4T raidz3 into a 24x 4T raidz3.
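
For reference, attaching the kind of mirrored SLOG discussed in this answer is a one-liner; pool and partition names below are hypothetical, and the SSDs should have power-loss protection:

    # A small mirrored log vdev; ~16-32 GB is ample for a gigabit workload:
    zpool add tank log mirror /dev/nvme0n1p2 /dev/nvme1n1p2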

TrueNAS Community
L2ARC Size in 2025? - TrueNAS General - TrueNAS Community Forums
March 18, 2025 - I have my motherboard maxed at 128GB DDR5 ECC RAM. I have one main storage pool of twelve 24TB hard drives in RAIDZ3. This pool serves anything from Plex to Adobe photo and video storage. I have seen the 5x rule and others say 3x to 8x, but some say the rules have changed, and some articles are years old.
TrueNAS Community
Formula for size of L2ARC needed | TrueNAS Community
January 21, 2014 - Per 5GB of L2ARC you're going to want about 1GB of ARC, though that is merely a loose guideline that throws together a bunch of assumptions such as a pool with a fairly random distribution of blocksizes, and not having puttered with the ARC metadata limit. You do not want to pass 10GB of L2ARC ...
GitHub
My l2arc is smaller than I think it should be. · openzfs/zfs · Discussion #13342
The main 4 10TB HDDs are set up in a striped mirror, and I have a 2TB NVMe drive as my l2arc cache device. ...
    ARC size (current):                      3.4 %     1.7 GiB
    Target size (adaptive):                  4.1 %     2.0 GiB
    Min size (hard limit):                   4.1 %     2.0 GiB
    Max size (high water):                   24:1      48.0 GiB
    Most Frequently Used (MFU) cache size:   87.7 %    1.0 GiB
    Most Recently Used (MRU) cache size:     12.3 %    143.7 MiB
    Metadata cache size (hard limit):        75.0 %    36.0 GiB
    Metadata cache size (current):           1.8 %     674.4 MiB
    Dnode cache size (hard limit):           10.0 %    3.6 GiB
    Dnode cache size (current):              1.5 %     55.2 MiB
SNIA
Best Practices for OpenZFS L2ARC in the Era of NVMe
ARC / L2ARC Architecture: ARC/L2 "blocks" are variable size (= volblock size for zvol data blocks, = record size for dataset data blocks, = indirect block size for metadata blocks). Smaller volblock/record sizes yield more metadata blocks (overhead) in the system ...
Brendan Gregg
ZFS L2ARC
I've been busy with its development ... chance to post about it. This post will show a quick example and answer some basic questions. The "ARC" is the ZFS main memory cache (in DRAM), which can be accessed with sub microsecond latency. An ARC read miss would normally read from disk, at millisecond latency (especially random reads). The L2ARC sits in-between, extending the main memory cache using fast storage devices, such as flash memory based SSDs (solid state disks). Some example sizes to put this ...
H|ard|Forum
ZFS L2ARC sizing | [H]ard|Forum
June 4, 2014 - Basically, unless the ratio is outrageous (like 400GB of L2ARC and 8GB ARC) you should be fine. ... I think I read somewhere _Gea saying that 256GB is too much memory. ... Let's say that your average block size is 32KB; then you would need around 8MB per GB, or 8GB per TB.
Reddit
r/freenas on Reddit: How to set maximum L2ARC size for SSD?
October 6, 2020 -

I'm thinking about adding L2ARC to my newly-built FreeNAS setup and, as such, I'm looking to purchase an NVMe SSD.

However, after consulting the documentation, it is apparent that having an oversized L2ARC can be detrimental to system performance.

Below are some examples of advice from FreeNAS documentation that I'm referring to:

As for capacity, 5x to 20x larger than RAM size is a good guideline.

Source: https://www.ixsystems.com/blog/hardware-guide/

...the size of an L2ARC should not exceed five times the amount of RAM.

Source: https://www.ixsystems.com/documentation/freenas/9.10/zfsprimer.html

My current system has 32GB of RAM. Assuming that I heed the advice of having L2ARC be 5x the size of RAM, this would mean I should have an L2ARC of 160GB. However, most NVMe SSD lineups these days have a minimum capacity of 250GB.

So my question is: If I purchase a 250GB SSD for my L2ARC, is there a way for me to specify to the system to only use 160GB (or any other lower value)? Is it as easy as partitioning a 160GB section of my SSD to use for L2ARC?

Bear in mind, the above values I've provided are hypothetical. If it is trivial to partition a smaller section of my SSD to be for my L2ARC, then I probably will end up purchasing a much larger SSD than 250GB. This way, I can increase my L2ARC capacity (without buying another SSD) if I decide to increase the amount of RAM in my system in the future.
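
To the partitioning question above: yes, a cache vdev can be backed by a partition. A minimal sketch on FreeBSD/FreeNAS, using this thread's hypothetical 160 GB figure and an assumed device name of nvd0:

    gpart create -s gpt nvd0                  # new GPT partition table on the SSD
    gpart add -t freebsd-zfs -s 160G nvd0     # 160 GB partition, becomes nvd0p1
    zpool add tank cache /dev/nvd0p1          # attach the partition as L2ARC

Because L2ARC contents are disposable, the cache device needs no redundancy, and it can later be detached with zpool remove and re-added at a larger size.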

TrueNAS
L2ARC | TrueNAS Documentation Hub
December 10, 2025 - Cache drives are always striped, not mirrored. To increase an existing L2ARC size, stripe another cache device with it.
GitHub
L2ARC only using ~200GB of a 1TB device · openzfs/zfs · Discussion #14782
Now my understanding is that with my record size (128K) the L2ARC indexes (which take up space in ARC) should only be about 550MB of ARC using the formula 1TB / 128kB × 70 byte = 550MB.
Oracle
Cache: L2ARC Size - Oracle® ZFS Storage Appliance Analytics Guide
When troubleshooting L2ARC warmup. If the size is small, check that the workload applied should be populating the L2ARC using the statistic Cache ARC evicted bytes broken down by L2ARC state, and use the Protocol breakdowns such as by size and by offset to confirm that the workload is of random I/O.
iXsystems
24. ZFS Primer — FreeNAS®11.2-U3 User Guide Table of Contents
In some cases, it may be more efficient ... monitor its effectiveness using tools such as arcstat. To increase the size of an existing L2ARC, stripe another cache device with it....
FreeBSD
Suggested ZIL and L2ARC sizes | The FreeBSD Forums
January 19, 2013 - L2 ARC Size: (Adaptive) 163.34 GiB; Header Size: 0.42% (704.17 MiB). In my case, it is about 1GB of RAM per 200GB of L2ARC.
Klara Systems
ZFS Performance Tuning in the Real World: ARC, L2ARC, and SLOG
November 16, 2025 - If ARC is delivering strong hit ratios, adding an L2ARC or altering tunables may provide little benefit. ARC exposes parameters through vfs.zfs.arc_* sysctls (FreeBSD) or /sys/module/zfs/parameters (Linux). Important tunables include: arc_max and arc_min: define upper and lower bounds for ARC size. Adjusting arc_max can be useful in environments where ZFS should not grow to ...
Level1Techs
Right-Sizing ZFS Drives for Boot, SLOG, Special, L2ARC - Linux - Level1Techs Forums
March 15, 2024 - My first question is: is this the right place to ask about ZFS? I intend to install Proxmox as my Bare-Metal OS into a box that will be my only server. …but ZFS is a subject onto itself. My real question is: Can yo…