38 changes: 29 additions & 9 deletions docs/general/administration/storage.md
@@ -3,11 +3,9 @@ uid: server-storage
title: Storage
---

## Storage
Jellyfin is designed to directly read media from a filesystem. A network storage device using SMB or NFS must be directly mounted to the OS. The Jellyfin Server database should also be stored locally and not on a network storage device for acceptable performance.

Jellyfin is designed to directly read media from the filesystem. A network storage device that is using samba or NFS must be directly mounted to the OS. The Jellyfin database should also be stored locally and not on a network storage device.
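
As an illustration only (the server address, share names, and mount points below are hypothetical), a network share can be mounted directly on the host like this:

```sh
# Mount an SMB/CIFS share (requires cifs-utils):
sudo mount -t cifs //192.168.1.10/media /mnt/media -o username=jellyfin,vers=3.0,ro
# Or mount an NFS export:
sudo mount -t nfs 192.168.1.10:/export/media /mnt/media -o ro
```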

### NFS
## NFS

If you encounter performance issues where files take a long time to start playing while using NFSv3, you may be running into an issue with .NET file locking when the NFSv3 lock service is not enabled.

@@ -18,18 +16,40 @@
To solve this, you have the following options:
- Enable the lock service.
- Use NFSv4, which has built-in lock support (see the mount example below).
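
For example, a hedged sketch of the NFSv4 option (the export path and mount point are hypothetical):

```sh
# NFSv4 handles locking within the protocol, so no separate lock service is needed.
sudo mount -t nfs4 192.168.1.10:/export/media /mnt/media
# Or force the version explicitly with the generic nfs type:
sudo mount -t nfs -o vers=4.2 192.168.1.10:/export/media /mnt/media
```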

## Docker or VMs
## Docker and VMs

A database for a moderate-sized library can grow anywhere from 10 to 100 GB.

The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media when transcoded at the same bitrate. A single 50 GB Blu-Ray remux by itself can take up to ~60 GB or as little as ~15 GB. If the transcoding folder is held on the same storage as the database, this must be taken into consideration to prevent running out of storage and thus corrupting your database.
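
When running in Docker, one way to keep the transcoding folder off the database storage is to bind-mount the container's cache path to a different volume than its config path. This is a sketch only; the host paths are hypothetical, and `/config` and `/cache` are the container paths commonly used with the official `jellyfin/jellyfin` image:

```sh
# Keeping /cache on separate (or expendable) storage means a full transcode
# cache cannot exhaust the volume holding the database under /config.
docker run -d \
  --name jellyfin \
  -v /srv/jellyfin/config:/config \
  -v /mnt/scratch/jellyfin-cache:/cache \
  -v /mnt/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin
```
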
Member

Woah. It's new info to me that a full transcoding folder would cause DB corruption 😓

Suggested change
The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media when transcoded at the same bitrate. A single 50 GB Blu-Ray remux by itself can take up to ~60 GB or as little as ~15 GB. If the transcoding folder is held on the same storage as the database, this must be taken into consideration to prevent running out of storage and thus corrupting your database.
The [transcoding](/docs/general/post-install/transcoding) folder typically requires about the same amount of space as the original media when transcoded at an equivalent bitrate. For example, a single 50 GB Blu-ray remux might consume as much as ~60 GB or as little as ~15 GB after transcoding. If the transcoding folder shares the same storage as the database, this should be accounted for to avoid any problems.

Contributor Author

Only a risk if it shares the same storage medium with your database and you're not using a scratch disk or something like a Fusion-io flash array (although that's semi-oldskool these days I must say).

Hope you're well! :- )

Member

I have had issues with that before, because my transcoding cache and my database are on the same ZFS Pool.
All it caused was Jellyfin not starting up properly.
No data lost though.

I'm doing good, yeah, thanks for asking.
How are you?
By the way, you can also join our documentation chat on Matrix or Discord if you want 👍
https://jellyfin.org/contact/

Member

Hi there @HammyHavoc
Could you please elaborate?
Thank you and kind regards.


For storage, a moderate size library database can grow anywhere from 10 to 100 GB. The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media if it's being transcoded at the same bitrate. A single 50GB Blu-Ray Remux by itself can take up to approximately 60GB or as little as 15GB, depending on the quality selected. If the transcoding folder is held on the same storage as the database, this must be taken into consideration.
Member

Why did you remove this?
I still think it is valuable information to take into consideration before migrating data to "the cloud".
Please elaborate.

Contributor Author

Because it's relative. How big is "moderate"? Why does it differ between 10GB and 100GB? It's a very poor guesstimate without any actual logic or examples cited. Happy to add it back in if someone has stats. Otherwise, it is like saying "people have between 5GB and 80GB of photographs of their grandmother on their phone".

Member
@JPVenson JPVenson Sep 25, 2025

That's a bit of the point. We cannot tell you. We usually tell people (and I think it's also somewhere else in the docs), as a rule of thumb:
(Number of users) x (Biggest file) x 1.2
as the size for the transcode folder.

Contributor Author

Ah, I thought you were remarking about the removal of "For storage, a moderate size library database can grow anywhere from 10 to 100 GB."

Contributor Author
@HammyHavoc HammyHavoc Sep 25, 2025

I think we both have our wires crossed. I'm somewhat distracted at this present moment; final radiotherapy session for in-law.

Please do feel free to make suggestions and I'll be happy to wave them through for the sake of getting the documentation updated.

## Cloud Storage Providers

## Cloud
[rclone](https://rclone.org/downloads/) is a popular choice for integrating cloud storage with a Jellyfin Server. rclone is supported on most operating systems. To combine local and cloud filesystems, rclone can be paired with another program such as [mergerfs](https://github.com/trapexit/mergerfs).

A popular choice for cloud storage has been the program [rclone](https://rclone.org/downloads/). It is supported on most Operating Systems. To facilitate combining local and cloud filesystems, rclone can be paired with another program such as [mergerfs](https://github.com/trapexit/mergerfs). For cloud storage, it is recommended to disable image extraction as this requires downloading the entire file to perform this task.
When using cloud storage, it is recommended to disable image extraction as it requires downloading the entire file.
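
As a hedged example (the remote name `gdrive` and the mount point are hypothetical, and the remote must first be set up with `rclone config`), a cloud remote can be mounted like this:

```sh
# Mount a cloud remote for read-only media playback; these flags are common
# starting points rather than a definitive configuration.
rclone mount gdrive:media /mnt/cloud-media \
  --read-only \
  --vfs-cache-mode full \
  --daemon
```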

### MergerFS
## MergerFS

MergerFS isn't suited to every use case; [see here](https://github.com/trapexit/mergerfs#what-should-mergerfs-not-be-used-for) for details.

- Recommended rclone mount [config](https://forum.rclone.org/t/my-best-rclone-config-mount-for-plex/7441).

To modify and examine your mergerfs mount, here's a quick [guide](https://zackreed.me/mergerfs-neat-tricks).
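
A minimal sketch of pooling a local directory with a cloud mount (the branch paths and mount point are hypothetical):

```sh
# Pool a local directory and an rclone mount into a single branch for Jellyfin.
# category.create=ff directs newly created files to the first listed branch.
mergerfs -o defaults,allow_other,category.create=ff \
  /mnt/local-media:/mnt/cloud-media /mnt/media
```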

## Filesystem Considerations

For certain filesystems, optimizations are highly recommended for acceptable performance.

### ZFS

Whilst development is being done on further database providers, the current implementation of Jellyfin Server uses SQLite for its database. ZFS uses a default record size of `128 K`, which is sub-optimal for the SQLite database.

Ideally, use a record size of `4 K` or `8 K` on the dataset that contains your Jellyfin Server SQLite database. This is straightforward to configure when running Jellyfin Server in a Docker container, as bind mounts are easy to change and a separate dataset can be assigned to each path as appropriate.

The record size for your media file dataset(s) must not be `4 K` or `8 K`, otherwise you will likely encounter performance issues as your library scales.

For ZFS datasets containing large media files (i.e., not the dataset containing the Jellyfin Server SQLite database), a record size of `1 M` is likely appropriate for optimal performance.
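
For illustration (the pool and dataset names are hypothetical), the record sizes discussed above can be applied like this:

```sh
# recordsize only affects blocks written after the change.
zfs set recordsize=8K tank/jellyfin-data   # dataset holding the SQLite database
zfs set recordsize=1M tank/media           # dataset(s) holding large media files
zfs get recordsize tank/jellyfin-data tank/media
```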

Note that changing the record size on an existing ZFS dataset does not rewrite the existing data within it, so only newly-written data will see any difference in performance. As such, it is recommended to rewrite your data into the dataset to take advantage of the new record size; otherwise, the configuration change will not yield the expected improvement.
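
One hedged way to rewrite the data (dataset names and paths are hypothetical; stop Jellyfin Server before copying):

```sh
# Create a new dataset with the desired record size and copy the data into it,
# so every block is rewritten at the new size; then swap it into place.
zfs create -o recordsize=8K tank/jellyfin-data-new
rsync -a /tank/jellyfin-data/ /tank/jellyfin-data-new/
```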

As ZFS snapshots can use a lot of storage over time without a sensible `destroy` schedule, there may be a temptation to keep your data on a mechanical drive instead of an SSD. Do not use ZFS-formatted mechanical drives to store your Jellyfin Server data (everything except your media files), as this will result in poor performance. An SSD is strongly recommended.