Conversation
ademeure commented Oct 2, 2024

This keeps residual3 for all layers, and everything else for only up to N layers, with relatively little added complexity...

This means if you set "-ac 16", it will only recompute 50% of the activations for a 32-layer model, and 75% for a 64-layer model. The number of layers in the model must be a multiple of this value. It can be combined with "-r 1" and "-r 2": e.g. "-ac 16 -r 2" is faster than "-ac 1 -r 0" and uses less memory than "-ac 16 -r 0".
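
For reference, here is a minimal sketch of the bookkeeping this implies, assuming the per-layer activations live in a circular buffer of N slots (the slot layout and names below are illustrative, not the actual llm.c code):

```c
// Minimal sketch, assuming L layers, "-ac N", and a circular buffer of N
// activation slots; residual3 is kept for every layer regardless.
#include <stdio.h>

int main(void) {
    int L = 32;   // transformer layers in the model
    int N = 16;   // "-ac 16": full per-layer activations kept for N layers
    if (L % N != 0) { printf("number of layers must be a multiple of N\n"); return 1; }

    // During the forward pass, layer l writes its activations into slot l % N,
    // so by the end only layers L-N .. L-1 are still resident. During the
    // backward pass, every earlier layer re-runs its forward from the stored
    // residual3 of the previous layer before its gradients are computed.
    int recomputed = L - N;
    printf("recompute %d of %d layers = %.0f%% of activations\n",
           recomputed, L, 100.0 * recomputed / L);   // 50% for L=32, N=16
    return 0;
}
```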

This is not the absolute minimum-memory strategy: with e.g. "-ac 4", we still store every residual3, even though we could store only 1 in 4. Doing so would be more complicated, because it would also require temporarily storing an extra 3 or 4 residuals somewhere (the ones being recomputed). More importantly, keeping every residual3 lets us avoid recomputing one of the 2 big matmuls, so it's a very attractive performance vs memory trade-off.
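
To make the "extra residuals" concrete, here is a hypothetical sketch of what a 1-in-N residual3 scheme would have to do (not implemented in this PR; numbers and names are illustrative):

```c
// Hypothetical 1-in-N residual3 scheme: only layers 0, N, 2N, ... keep their
// residual3, so rebuilding a later layer means re-running forward from the
// nearest stored checkpoint and buffering the intermediate residuals.
#include <stdio.h>

int main(void) {
    int N = 4;                  // e.g. "-ac 4" taken to its minimum-memory extreme
    int l = 7;                  // layer we want to backprop through
    int ckpt = (l / N) * N;     // nearest layer at or below l with residual3 stored
    printf("rebuild layer %d: re-run layers %d..%d, holding %d temporary residuals\n",
           l, ckpt + 1, l, l - ckpt);   // layers 5..7, 3 extra residuals
    return 0;
}
```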

However, for Llama3 405B with a context length of 128K, that's 4GiB of residual3 per layer, or 504GiB across all 126 layers, which obviously doesn't fit on a single GPU, not even on a GH200... so we will probably need that extra complexity sooner rather than later.
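
As a back-of-envelope check of those numbers (assuming batch size 1, a 128K sequence, 16384 channels, 126 layers, and bf16 activations; these shapes come from the published Llama3 405B config rather than from this PR):

```c
// residual3 is one (B, T, C) tensor per layer, 2 bytes per bf16 element.
#include <stdio.h>

int main(void) {
    double GiB = 1024.0 * 1024.0 * 1024.0;
    long long B = 1, T = 128LL * 1024, C = 16384, L = 126;
    long long per_layer = B * T * C * 2;
    printf("residual3 per layer: %.1f GiB\n", per_layer / GiB);               // 4.0 GiB
    printf("residual3, all %lld layers: %.1f GiB\n", L, L * per_layer / GiB); // 504.0 GiB
    return 0;
}
```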
