
Conversation

@HammyHavoc
Contributor

Thought info on ZFS was sorely lacking in the docs; it's something I've been experimenting with since there isn't much information available on the topic. :- )

Added a warning about ZFS and mechanical drives.
Clarified why the warning is important.
Improved readability and clarity throughout, including for non-native English speakers.
Member

@BotBlake left a comment


Hi there!
Sorry for the long wait.

I have marked some minor issues in the review.
Otherwise this PR looks good to me 👍

Additionally, we seem to be having some issues with the CI.
I will look into that and report back ASAP.

Kind regards.


The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media when transcoded at the same bitrate. A single 50 GB Blu-Ray remux by itself can take up to ~60 GB or as little as ~15 GB. If the transcoding folder is held on the same storage as the database, this must be taken into consideration to prevent running out of storage and thus corrupting your database.

For storage, a moderate size library database can grow anywhere from 10 to 100 GB. The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media if it's being transcoded at the same bitrate. A single 50GB Blu-Ray Remux by itself can take up to approximately 60GB or as little as 15GB, depending on the quality selected. If the transcoding folder is held on the same storage as the database, this must be taken into consideration.
Member


Why did you remove this?
I still think it is valuable information to take into consideration before migrating data to "the cloud".
Please elaborate.

Contributor Author


Because it's relative. How big is "moderate"? Why does it differ between 10 GB and 100 GB? It's a very poor guesstimate without any actual logic or examples cited. Happy to add it back in if someone has stats. Otherwise, it is like saying "people have between 5 GB and 80 GB of photographs of their grandmother on their phone".

Member

@JPVenson Sep 25, 2025


That's a bit of the point: we cannot tell you. We usually tell people (and I think it's also somewhere else in the docs) to use, as a rule of thumb:
(number of users) × (biggest file) × 1.2
as the size for the transcode folder.
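
For example, a minimal sketch of that rule of thumb (the numbers here are purely illustrative):

```python
# Rule of thumb from above: (number of users) x (biggest file) x 1.2
# All numbers are illustrative -- substitute your own.

concurrent_users = 5      # hypothetical number of simultaneous transcodes
biggest_file_gb = 50      # e.g. a 50 GB Blu-Ray remux
headroom = 1.2            # 20% safety margin

transcode_folder_gb = concurrent_users * biggest_file_gb * headroom
print(f"Reserve roughly {transcode_folder_gb:.0f} GB for the transcode folder")
# -> Reserve roughly 300 GB for the transcode folder
```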

Contributor Author


Ah, I thought you were remarking about the removal of "For storage, a moderate size library database can grow anywhere from 10 to 100 GB".

Contributor Author

@HammyHavoc Sep 25, 2025


I think we both have our wires crossed. I'm somewhat distracted at this present moment; final radiotherapy session for an in-law.

Please do feel free to make suggestions and I'll be happy to wave them through for the sake of getting the documentation updated.


N.B.: Changing the record size on an existing ZFS dataset does not rewrite the data already in it, so performance will only change for newly-written data. To take advantage of the new record size, rewrite your existing data into the dataset; otherwise the configuration change will not yield the expected performance improvement.
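
A minimal sketch of one way to do that rewrite, assuming you have enough free space and a recent backup or snapshot; the dataset path is purely illustrative:

```python
import os
import shutil
import tempfile

# Hypothetical directory inside the ZFS dataset whose record size was
# changed, e.g. after `zfs set recordsize=16K pool/jellyfin`.
DATASET_DIR = "/pool/jellyfin"

def rewrite_file(path: str) -> None:
    """Copy a file to a temp file on the same dataset, then atomically
    replace the original; the copy is written with the new record size."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path))
    os.close(fd)
    try:
        shutil.copy2(path, tmp_path)  # copies data and metadata (mtime, mode)
        os.replace(tmp_path, path)    # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp_path)
        raise

for root, _dirs, files in os.walk(DATASET_DIR):
    for name in files:
        rewrite_file(os.path.join(root, name))
```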

As ZFS snapshots can use a lot of storage over time without a sensible `destroy` schedule, there may be a temptation to keep your data on a mechanical drive instead of an SSD. Do NOT use ZFS-formatted mechanical drives to store your Jellyfin Server data (everything except your media files), or you will have terrible performance. An SSD is absolutely necessary.
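
And for the snapshot side, a minimal sketch of what a `destroy` schedule could look like (the dataset name and retention period are illustrative, and `zfs destroy` is irreversible, so keep the dry run on until you have verified the output; dedicated tools such as sanoid also exist for this):

```python
import subprocess
import time

DATASET = "pool/jellyfin"  # hypothetical dataset holding Jellyfin server data
MAX_AGE_DAYS = 30          # illustrative retention period
DRY_RUN = True             # set to False only after checking the output

# `zfs list -Hp` prints tab-separated, script-friendly output;
# with -p, the `creation` property is a Unix timestamp.
snapshots = subprocess.run(
    ["zfs", "list", "-Hp", "-t", "snapshot", "-o", "name,creation", "-r", DATASET],
    check=True, capture_output=True, text=True,
).stdout

cutoff = time.time() - MAX_AGE_DAYS * 86400
for line in snapshots.splitlines():
    name, creation = line.split("\t")
    if int(creation) < cutoff:
        if DRY_RUN:
            print(f"would destroy {name}")
        else:
            subprocess.run(["zfs", "destroy", name], check=True)
```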
Member


Why does the same section have info about both SSD vs HDD and snapshots?

Contributor Author


Because it is general insight about using ZFS with Jellyfin to prevent people going astray. Suggestions welcome.

Member


At least split the "don't use HDDs" and "snapshots take up a lot of space" points into two sections.

Member


Also, we already recommend against using HDDs for Jellyfin elsewhere in the docs.

Contributor Author


Actionable suggestions welcome.

@jellyfin-bot

Cloudflare Pages deployment

Latest commit 40916c8547531c1040420763caee3f3899e66a1f
Status ✅ Deployed!
Preview URL https://b5aa2c13.jellyfin-org.pages.dev
Type 🔀 Preview

Member

@BotBlake left a comment


Hi there!
Only one suggestion for change.
Otherwise this looks good to me!

Thanks for the changes and kind regards.


A database for a moderate-sized library can grow anywhere from 10 to 100 GB.

The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media when transcoded at the same bitrate. A single 50 GB Blu-Ray remux by itself can take up to ~60 GB or as little as ~15 GB. If the transcoding folder is held on the same storage as the database, this must be taken into consideration to prevent running out of storage and thus corrupting your database.
Member


Woah. It's new info to me that a full transcoding folder would cause DB corruption 😓

Suggested change
The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media when transcoded at the same bitrate. A single 50 GB Blu-Ray remux by itself can take up to ~60 GB or as little as ~15 GB. If the transcoding folder is held on the same storage as the database, this must be taken into consideration to prevent running out of storage and thus corrupting your database.
The [transcoding](/docs/general/post-install/transcoding) folder typically requires about the same amount of space as the original media when transcoded at an equivalent bitrate. For example, a single 50 GB Blu-ray remux might consume as much as ~60 GB or as little as ~15 GB after transcoding. If the transcoding folder shares the same storage as the database, this should be accounted for to avoid any problems.

Contributor Author


Only a risk if it shares the same storage medium as your database and you're not using a scratch disk or something like a Fusion-io flash array (although that's semi-old-school these days, I must say).

Hope you're well! :- )

Member


I have had issues with that before, because my transcoding cache and my database are on the same ZFS pool.
All it caused was Jellyfin not starting up properly.
No data lost though.

I'm doing good, yeah, thanks for asking.
How are you?
By the way, you can also join our documentation chat on Matrix or Discord if you want 👍
https://jellyfin.org/contact/

Member


Hi there @HammyHavoc
Could you please elaborate?
Thank you and kind regards.

Member

@BotBlake left a comment


Pressed the wrong button on the previous review 😆

