If your ZFS pool serves large video files, the best improvements usually come from shaping the media dataset for large sequential reads instead of globally tweaking the whole operating system. This guide shows the safe changes that matter, how to apply them without risking the box, and how to make those changes actually take effect on existing media files.
What this guide is optimizing for
A media library is not a database, a VM store, or a torrent scratch area. The common pattern is
large, mostly sequential reads; occasional large writes when new content is added; and metadata work
during library scans. That is why the highest-value changes are dataset-level properties such as
recordsize, atime, and compression, together with keeping enough free
pool space.
The key safety rule is simple: apply these settings to a dedicated media dataset, not to your root dataset and not to unrelated workloads.
1. Put media on its own dataset first
References: FreeBSD Handbook: ZFS · OpenZFS Workload Tuning
This is the foundation. The FreeBSD Handbook explicitly recommends separate datasets for different kinds of data because datasets inherit properties and can be tuned independently. That is exactly what you want here: one dataset tuned for media streaming, not a one-size-fits-all setting applied across the pool.
# Example: replace tank and /srv/media with your real pool and path
zfs create -o mountpoint=/srv/media tank/media

If you already have a media dataset, keep it. The main point is to make sure you are tuning a dedicated media dataset and not mixing media, jails, VM images, databases, and downloads into the same property set.
2. Apply the safe baseline settings that actually help streaming
References: OpenZFS Workload Tuning · OpenZFS zfsprops(7): atime · FreeBSD Handbook: dataset properties and compression
For a dedicated media dataset, this is the safest high-probability baseline:
zfs set compression=lz4 tank/media
zfs set atime=off tank/media
zfs set recordsize=1M tank/media

Why these three?
compression=lz4 is worth enabling even for media libraries. The
FreeBSD Handbook says compression often improves throughput because fewer blocks need to be read or
written. OpenZFS also recommends LZ4 broadly because it is fast and often reduces I/O without
becoming a bottleneck.
atime=off prevents ZFS from generating write traffic just because a
file was read. OpenZFS calls this out directly: turning off atime updates minimizes I/O used to
update access timestamps. On a media dataset, that is usually exactly what you want.
recordsize=1M is the big media-specific change. OpenZFS’s workload
tuning guide explicitly recommends recordsize=1M on datasets that are subject to
sequential workloads. Large video files are a textbook example. Larger records reduce metadata
overhead and favor larger I/O during sequential reads.
Important: this will not speed up old files by itself
Changing recordsize only affects files written after the change. OpenZFS is explicit
about this. If you want your existing media files to benefit, you must rewrite or recreate them
under the new dataset settings.
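If you want to see the effect on a single file before committing to a full migration, a copy-and-replace rewrite is enough. This is a minimal sketch; the function name is mine and the path is a placeholder inside the tuned dataset:

```shell
#!/bin/sh
# Sketch: rewrite one file in place so it is re-stored at the dataset's
# current recordsize. The argument is a placeholder path.
rewrite_one() {
    cp -p "$1" "$1.rewrite.tmp"   # the copy is written with the new recordsize
    mv "$1.rewrite.tmp" "$1"      # replace within the same dataset
}
# Live usage:
#   rewrite_one /srv/media/some-file.mkv
```

Because the temporary copy lives in the same dataset, the final mv is a cheap rename and the file never leaves the filesystem.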
3. Make the changes actually take effect on an existing library
References: OpenZFS Workload Tuning: Dataset recordsize
This is the step many guides skip. If your existing media library was written with default
settings, you will not get the full benefit of recordsize=1M until those files are
rewritten. The safest way is to create a new dataset with the tuned properties, copy the library
into it, verify it, and then switch your media service over.
# 1) Take a rollback point
zfs snapshot -r tank/media@before-streaming-tune
# 2) Create a new tuned dataset
zfs create -o mountpoint=/srv/media.new tank/media_stream
zfs set compression=lz4 tank/media_stream
zfs set atime=off tank/media_stream
zfs set recordsize=1M tank/media_stream
# 3) Copy the files into the new dataset
# rsync is a safe default for most media libraries
rsync -aH --info=progress2 /srv/media/ /srv/media.new/

After the copy finishes, stop your media service, point it at the new mountpoint, and verify that playback works before you delete anything. A simple cutover looks like this:
# stop your media service first
zfs set mountpoint=/srv/media.old tank/media
zfs set mountpoint=/srv/media tank/media_stream
# start your media service again and rescan if needed

This gives you the noticeable part of the improvement: your actual media files now live in a dataset written with the right record size and low-overhead read behavior.
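Before destroying the old dataset, a quick recursive comparison gives extra confidence that the copy is complete. A minimal sketch; the function name is mine, and the paths assume the cutover above:

```shell
#!/bin/sh
# Sketch: confirm two directory trees match before deleting anything.
# No output from diff -rq means no differences were found.
compare_trees() {
    diff -rq "$1" "$2"
}
# Live usage, with the paths from the cutover above:
#   compare_trees /srv/media.old /srv/media
```

diff -rq exits non-zero when the trees differ, so it also works as a gate in a script.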
4. Keep enough free space in the pool
References: OpenZFS Workload Tuning: Free Space
If your pool is packed too tightly, your tuning work will be fighting the allocator. OpenZFS recommends keeping pool free space above 10% to avoid allocator behavior that gets much more CPU intensive and reduces IOPS as the pool gets crowded.
zpool list -o name,size,alloc,free,cap
If CAP is already above 90%, fix that first. Delete stale content, move cold files
elsewhere, or add capacity. On a near-full pool, streaming performance often feels worse no matter
what dataset properties you set.
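The capacity check above is easy to automate. A minimal sketch, with a function name and threshold of my choosing; the pool name in the usage comment is a placeholder:

```shell
#!/bin/sh
# Sketch: warn when pool capacity crosses a threshold.
warn_if_full() {
    cap=$1     # capacity percent, integer without the % sign
    limit=$2   # threshold percent
    if [ "$cap" -ge "$limit" ]; then
        echo "WARNING: pool at ${cap}% (limit ${limit}%)"
    fi
}
# Live usage (feed it the CAP column, stripped of the % sign):
#   warn_if_full "$(zpool list -H -o cap tank | tr -d '%')" 90
warn_if_full 92 90
```

Wired into cron or periodic(8), this catches a filling pool before streaming performance degrades.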
5. Two optional settings that can help in the right environment
References: OpenZFS zfsprops(7): xattr=sa · OpenZFS zfsprops(7): logbias
xattr=sa for metadata-heavy media libraries
If your media stack or SMB workflow uses a lot of extended attributes, xattr=sa can
reduce disk I/O by storing those attributes more efficiently. This is more likely to help library
scans and metadata-heavy operations than raw streaming throughput.
zfs set xattr=sa tank/media

Use this on a dedicated media dataset, and prefer it when you know your workflow actually uses xattrs. It is not the first knob to touch, but it can be worthwhile.
logbias=throughput for bulk media ingest
This one affects write behavior, not reads. If your media dataset mainly sees large file copies, imports, and other throughput-oriented writes, you can tell ZFS to optimize synchronous operations for throughput rather than low-latency log-device behavior.
zfs set logbias=throughput tank/media

This is optional. It makes the most sense on a pure media dataset, not on anything that also hosts databases, VM images, or latency-sensitive sync-write workloads.
What not to do
Do not set sync=disabled on a production dataset just because you
saw a benchmark. OpenZFS warns that it is dangerous because it ignores the synchronous write
guarantees applications may rely on.
Do not enable dedup for a media library unless you have already proven it is worth the memory cost. The FreeBSD Handbook warns that deduplication requires a large amount of memory, and explicitly says good practice is to enable compression first because compression provides most of the benefit without the same cost.
Do not blindly copy old loader.conf and sysctl.conf recipes
that cap ARC, change kmem, disable prefetch, or tweak TXG timing globally. The current FreeBSD
Handbook already documents vfs.zfs.txg.timeout with a default of 5 seconds, and it
describes prefetch disabling as something that may help random-read workloads. Media streaming is a
large sequential-read workload, so that is not where I would start.
6. Verify the result instead of assuming it worked
After the cutover, verify both the dataset properties and the actual user-visible outcome.
zfs get atime,compression,recordsize,xattr,logbias,sync tank/media
zpool iostat -v 1

During real playback, look for these signs of improvement:
- less write noise during reads, because atime updates are gone
- smoother sequential throughput from the media dataset during large playback sessions
- better compression ratio on compressible files and less I/O overhead overall
- faster or less choppy library scans if xattr=sa helps your metadata workload
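For a rough before/after number, reading one large file back with dd is more representative than a synthetic benchmark. A minimal sketch; the function name is mine, the path is a placeholder, and note that ARC caching means the first read of a file is the honest one:

```shell
#!/bin/sh
# Sketch: indicative sequential-read throughput for a single file.
# dd prints a bytes-per-second summary on stderr when it finishes.
read_check() {
    dd if="$1" of=/dev/null bs=1048576   # 1 MiB reads, matching recordsize=1M
}
# Live usage:
#   read_check /srv/media/some-file.mkv
```

Run it against the same file before and after the migration to see whether the rewrite changed anything for that file.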
The most noticeable improvement usually appears after the library has been copied into the new
dataset. Setting the properties is only half the job. Rewriting the files is what makes the
recordsize change real.
Frequently Asked Questions
These are the practical questions people usually have when tuning ZFS on FreeBSD for large media files.
Should I tune loader.conf first?
No. For media streaming, dataset properties are the safer and more relevant starting point. Boot-time tunables should only come later and only for a specific measured problem.
Why is recordsize=1M the main recommendation here?
Because OpenZFS explicitly recommends recordsize=1M for sequential workloads, and
large media files are usually read sequentially.
Will changing recordsize fix my current files immediately?
No. The property only affects newly written files. Existing media must be rewritten into a dataset that already has the new setting.
Is compression=lz4 worth enabling for videos?
Yes. Even when the actual media files are not very compressible, LZ4 is fast enough that the I/O reduction usually makes it a good default on a media dataset.
Should I disable prefetch for streaming?
Not as a baseline move. The FreeBSD Handbook describes prefetch disabling as something that can help random-read workloads. Media streaming is usually the opposite.
Should I use dedup on a movie library?
Usually no. Media libraries rarely benefit enough to justify the memory cost, and the FreeBSD Handbook recommends compression first.
What is the easiest way to get a noticeable improvement?
Create a dedicated media dataset, set compression=lz4, atime=off, and
recordsize=1M, then copy the library into that dataset so the new record size actually
applies to the files.
What is the biggest failure-risk setting people copy from the internet?
sync=disabled. It can benchmark well, but OpenZFS is clear that it is dangerous because
it breaks the guarantees that synchronous-write workloads expect.