Updated storage.md with information about ZFS
#1513
base: master
Conversation
Thought this was sorely lacking; it's something I've been experimenting with, since there isn't much information available on the topic.
Added warning about ZFS and mechanical drives.
Improved readability.
Clarified the reason as to why it is important.
Improved readability for non-English speaking users.
Improved readability. :- )
Improved clarity.
Improved clarity.
BotBlake
left a comment
Hi there!
Sorry for the long wait.
I have marked some minor issues in the review.
Otherwise this PR looks good to me 👍
Additionally, we seem to be having some issues with the CI.
I will look into that and report back asap.
Kind regards.
The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media when transcoded at the same bitrate. A single 50 GB Blu-Ray remux by itself can take up to ~60 GB or as little as ~15 GB. If the transcoding folder is held on the same storage as the database, this must be taken into consideration to prevent running out of storage and thus corrupting your database.
For storage, a moderate size library database can grow anywhere from 10 to 100 GB. The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media if it's being transcoded at the same bitrate. A single 50GB Blu-Ray Remux by itself can take up to approximately 60GB or as little as 15GB, depending on the quality selected. If the transcoding folder is held on the same storage as the database, this must be taken into consideration.
Why did you remove this?
I still think it is valuable information to take into consideration before migrating data to "the cloud".
Please elaborate.
Because it's relative. How big is "moderate"? Why does it range from 10 GB to 100 GB? It's a very poor guesstimate without any actual logic or examples cited. Happy to add it back in if someone has stats. Otherwise, it is like saying "people have between 5 GB and 80 GB of photographs of their grandmother on their phone".
That's a bit of the point: we cannot tell you. We usually give people (and I think it's also somewhere else in the docs) a rule of thumb of
(Number of users) x (Biggest file) x 1.2
as the size for the transcode folder.
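To put the rule of thumb into numbers, here is a quick back-of-the-envelope sketch (the user count and file size below are purely illustrative, not values from the docs):

```python
# Rule-of-thumb sizing for the transcode folder (illustrative numbers only).
users = 5              # expected number of simultaneous users
biggest_file_gb = 50   # size of the largest file in the library, in GB

transcode_gb = users * biggest_file_gb * 1.2
print(f"Reserve roughly {transcode_gb:.0f} GB for the transcode folder")  # -> 300 GB
```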
Ah, I thought you were remarking about the removal of "For storage, a moderate size library database can grow anywhere from 10 to 100 GB."
I think we both have our wires crossed. I'm somewhat distracted at this present moment; final radiotherapy session for in-law.
Please do feel free to make suggestions and I'll be happy to wave them through for the sake of getting the documentation updated.
N.b.: Changing the record size on an existing ZFS dataset will not change the existing data within it, meaning performance will not be any different for anything but newly-written changes into the dataset. As such, it is recommended to rewrite your data into the dataset to take advantage of the change in record size, otherwise the configuration change will not yield the expected change in performance.
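For what it's worth, a minimal sketch of what "rewriting your data" could look like, assuming the dataset is called `tank/jellyfin`, is mounted at `/tank/jellyfin`, and a 1M record size is wanted (all three are assumptions for illustration, not values from the docs):

```python
# Illustrative sketch: bump the record size on a ZFS dataset, then rewrite
# existing files so their blocks are re-allocated with the new record size.
# Dataset name, mount point, and the 1M record size are assumptions.
import os
import shutil
import subprocess

DATASET = "tank/jellyfin"   # assumed dataset holding Jellyfin's data
MOUNT = "/tank/jellyfin"    # assumed mount point of that dataset

# Changing the property only affects blocks written from this point on.
subprocess.run(["zfs", "set", "recordsize=1M", DATASET], check=True)

# Rewrite each existing file (copy to a temp file, then replace the original)
# so it picks up the new record size. Stop Jellyfin before doing this.
for root, _dirs, files in os.walk(MOUNT):
    for name in files:
        path = os.path.join(root, name)
        tmp = path + ".rewrite"
        shutil.copy2(path, tmp)
        os.replace(tmp, path)
```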
As ZFS snapshots can use a lot of storage over time without a sensible `destroy` schedule, there may be a temptation to keep your data on a mechanical drive instead of an SSD. Do NOT use ZFS-formatted mechanical drives to store your Jellyfin Server data (everything except your media files), or you will have terrible performance. SSD is absolutely necessary.
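As a side note, an illustrative sketch of what a "sensible `destroy` schedule" might look like, pruning snapshots older than 30 days (dataset name and retention window are assumptions; existing tools such as sanoid automate this properly):

```python
# Illustrative sketch: destroy snapshots older than a retention window.
import subprocess
import time

DATASET = "tank/jellyfin"            # assumed dataset name
RETENTION_SECONDS = 30 * 24 * 3600   # keep roughly the last 30 days

# List snapshots with their creation time as a Unix timestamp (-p = parseable).
listing = subprocess.run(
    ["zfs", "list", "-H", "-p", "-t", "snapshot", "-r", DATASET,
     "-o", "name,creation"],
    capture_output=True, text=True, check=True,
).stdout

now = time.time()
for line in listing.splitlines():
    name, creation = line.split("\t")
    if now - int(creation) > RETENTION_SECONDS:
        subprocess.run(["zfs", "destroy", name], check=True)
```

Something like this could run from cron, for example.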
Why does the same section have info about both SSD vs HDD and snapshots?
Because it is general insight about using ZFS with Jellyfin to prevent people going astray. Suggestions welcome.
At least split "don't use HDDs" and "snapshots take up a lot of space" into two sections.
Also, we already recommend against using HDDs for Jellyfin elsewhere in the docs.
Actionable suggestions welcome.
Co-authored-by: BotBlake <[email protected]>
BotBlake
left a comment
Hi there!
Only one suggestion for change.
Otherwise this looks good to me!
Thanks for the changes and kind regards.
A database for a moderate-sized library can grow anywhere from 10 to 100 GB.

The [transcoding](/docs/general/post-install/transcoding) folder needs roughly the same size as the original media when transcoded at the same bitrate. A single 50 GB Blu-Ray remux by itself can take up to ~60 GB or as little as ~15 GB. If the transcoding folder is held on the same storage as the database, this must be taken into consideration to prevent running out of storage and thus corrupting your database.
Woah. It's new info to me that a full transcoding folder would cause DB corruption 😓
Suggested change:

The [transcoding](/docs/general/post-install/transcoding) folder typically requires about the same amount of space as the original media when transcoded at an equivalent bitrate. For example, a single 50 GB Blu-ray remux might consume as much as ~60 GB or as little as ~15 GB after transcoding. If the transcoding folder shares the same storage as the database, this should be accounted for to avoid any problems.
It's only a risk if it shares the same storage medium as your database and you're not using a scratch disk or something like a Fusion-io flash array (although that's semi-oldskool these days, I must say).
Hope you're well! :- )
I have had issues with that before, because my transcoding cache and my database are on the same ZFS Pool.
All it caused was Jellyfin not starting up properly.
No data lost though.
I'm doing good, yeah, thanks for asking.
How are you?
By the way, you can also join our documentation chat on Matrix or Discord if you want 👍
https://jellyfin.org/contact/
Hi there @HammyHavoc
Could you please elaborate?
Thank you and kind regards.
BotBlake
left a comment
Pressed the wrong button on the previous review 😆
Thought info on ZFS was sorely lacking; it's something I've been experimenting with, since there isn't much information available on the topic. :- )