Conversation

blob42 commented Nov 27, 2025

Adds a new command-line argument --total-ram to limit the amount of
system RAM that ComfyUI considers available, allowing users to simulate
lower memory environments. This enables more predictable behavior when
testing or running on systems with limited resources.

Rationale:

I run Comfy inside a Docker container. Using mem_limit doesn't hide
total system RAM from Comfy, so OOM can occur easily. Cache pressure
limits cause frequent out-of-memory errors. Adding this flag allows
precise control over visible memory.
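The behavior described above can be sketched roughly as follows. This is a minimal illustration of clamping a detected-RAM value behind a CLI flag; the function name `get_total_memory`, the GiB unit, and the default-to-autodetect behavior are assumptions for illustration, not ComfyUI's actual code:

```python
import argparse

# Hypothetical stand-in for what psutil.virtual_memory().total would report
# from inside a container (the host's RAM, not the docker mem_limit).
DETECTED_TOTAL_RAM = 64 * 1024**3  # 64 GiB

parser = argparse.ArgumentParser()
# Mirrors the --total-ram flag proposed in this PR; the exact unit is an assumption.
parser.add_argument("--total-ram", type=float, default=0,
                    help="Cap the RAM Comfy considers available (GiB; 0 = autodetect)")

def get_total_memory(args):
    """Return the RAM budget in bytes, honoring the --total-ram override."""
    if args.total_ram > 0:
        return int(args.total_ram * 1024**3)
    return DETECTED_TOTAL_RAM

args = parser.parse_args(["--total-ram", "8"])
print(get_total_memory(args))  # 8 GiB expressed in bytes
```

With the override in place, every consumer that budgets against the returned total is automatically confined, which is the guarantee the PR description is after.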

@blob42 blob42 force-pushed the feat-set-visible-ram branch 2 times, most recently from af33ef3 to 34b0671 Compare November 27, 2025 15:26
@blob42 blob42 changed the title add --total-ram option for controlling visible system RAM in Comfy add --total-ram flag for controlling visible system RAM Nov 27, 2025
@blob42 blob42 force-pushed the feat-set-visible-ram branch from 34b0671 to e90e9b7 Compare November 27, 2025 15:28
@blob42 blob42 force-pushed the feat-set-visible-ram branch from e90e9b7 to 6f4d889 Compare November 27, 2025 15:30
@blob42 blob42 marked this pull request as ready for review November 27, 2025 15:31
@blob42 blob42 requested a review from Kosinkadink as a code owner November 27, 2025 15:31
rattus128 (Contributor) commented Nov 27, 2025

If you are referring to the --ram-cache feature: it doesn't use this value in the code you changed for its logic, so it won't help, if that's what you mean by "cache pressure limits".

If you are using --ram-cache and devving for it, I strongly suggest grabbing:

#10779

Solving the docker problem is still on the todo list, but I would grab this phase 2, as it's a lot better behaved in general.

Regarding your change: it doesn't affect a huge amount, but it does potentially affect the pinning logic, and in the scenario you describe it could cause the pinner to think it has more RAM than it does.
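The pinning hazard described here can be illustrated with back-of-envelope numbers. The 50% pin fraction and the sizes below are made up for illustration; this is not ComfyUI's actual pinning heuristic:

```python
# Illustrative arithmetic only; ComfyUI's real pinning heuristic is not shown here.
host_ram = 64 * 1024**3        # what the container sees as total system RAM
container_limit = 8 * 1024**3  # docker mem_limit actually enforced by the cgroup
pin_fraction = 0.5             # hypothetical fraction of "total RAM" the pinner budgets

pin_budget = int(host_ram * pin_fraction)  # 32 GiB: far above the real limit
print(pin_budget > container_limit)        # True: pinning can trip the cgroup OOM killer
```

If the pinner sizes its budget from host-visible RAM rather than the enforced container limit, pinned allocations can exceed the cgroup ceiling, which matches the OOM behavior blob42 reports.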

If this is ultimately helping you, there is a bit of a question as to how and why; if you actually have an ENOMEM crash log from Comfy, that would help.

Can you try with and without --disable-pinned-memory for your use case?

blob42 (Author) commented Nov 30, 2025

@rattus128 Solving issue #10779 would indeed be helpful.

Regarding my changes: I wanted a way to guarantee that Comfy sees a defined amount of RAM. It avoids having to hunt down cryptic flags to switch off, like the RAM pinning you mentioned. It also guarantees that any RAM-optimization features will work within a confined visible amount of RAM.

I agree, though, that my change is a naive approach and does not influence the libraries; the problem should really be solved at the system/docker level. I will investigate other solutions and document this issue.

In the meantime, since I started using this flag I have stopped getting any OOM-related issues.

rattus128 (Contributor) commented Nov 30, 2025

> @rattus128 Solving issue #10779 would indeed be helpful.
>
> Regarding my changes: I wanted a way to guarantee that Comfy sees a defined amount of RAM. It avoids having to hunt down cryptic flags to switch off, like the RAM pinning you mentioned. It also guarantees that any RAM-optimization features will work within a confined visible amount of RAM.
>
> I agree, though, that my change is a naive approach and does not influence the libraries; the problem should really be solved at the system/docker level. I will investigate other solutions and document this issue.
>
> In the meantime, since I started using this flag I have stopped getting any OOM-related issues.

Ok, that confirms you need an accurate RAM spec for the sake of pinning. Disabling pinned memory is not the right answer for your use case; it was just to confirm the detailed root cause of your problem.

That said, this is still theoretically hard to reproduce, as comfy buffers the entire model in RAM with a deep copy (so it's all prefaulted) regardless of this flag or what the cache does. Do you have a big swap area in play on the docker host that is being heavily used? What are your host RAM, VRAM, swap config, and docker RAM limit, and what big models is your workflow using?

We could unify your flag with the RAM cache implementation, given you have found a second reason to do this.

When you use your flag, do you give it the entire docker container RAM allocation, or do you give it a fair bit less to stabilize?
