
Conversation

@rchardx (Collaborator) commented Dec 5, 2025

Description

Updates the gradient clipping implementation to correctly handle parameters offloaded to CPU, bypassing CUDA-specific optimizations when necessary to prevent runtime errors. Refactors the FSDP engine's weight broadcasting logic to properly materialize and batch DTensors in offloaded scenarios. Additionally, introduces a new test suite to verify gradient normalization and clipping behavior across different device configurations.
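The core idea behind the gradient-clipping change can be illustrated with a short sketch. This is not the code in areal/utils/fsdp/grad.py; the function below is a hypothetical, device-aware variant of clip_grad_norm_ that computes each per-gradient norm on whatever device the gradient lives on (CPU or GPU) instead of assuming a fused CUDA path.

```python
import torch


def clip_grad_norm_(parameters, max_norm: float, norm_type: float = 2.0) -> torch.Tensor:
    """Device-aware gradient clipping (illustrative sketch, not the repo's code)."""
    grads = [p.grad for p in parameters if p.grad is not None]
    if not grads:
        return torch.tensor(0.0)

    # Compute each norm on the gradient's own device, then reduce on CPU so that
    # CPU-offloaded and GPU-resident gradients can be mixed safely.
    per_grad_norms = [torch.linalg.vector_norm(g, norm_type) for g in grads]
    total_norm = torch.linalg.vector_norm(
        torch.stack([n.cpu() for n in per_grad_norms]), norm_type
    )

    # Scale gradients in place; the clamp ensures they are never scaled up.
    clip_coef = torch.clamp(max_norm / (total_norm + 1e-6), max=1.0)
    for g in grads:
        g.mul_(clip_coef.to(g.device))
    return total_norm
```

The real implementation additionally has to unwrap sharded DTensor gradients and all-reduce the norm across ranks, which this sketch omits.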

Related Issue

Fixes #644.

In addition, this PR resolves the root cause of #677.

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not
    work as expected)
  • Documentation update
  • Code refactoring (no functional changes)
  • Performance improvement
  • Test coverage improvement

Checklist

  • I have read the Contributing Guide
  • I have run formatting tools (pre-commit or manual)
  • I have run relevant unit tests and they pass
  • I have added tests for new functionality
  • I have updated documentation if needed
  • My branch is up to date with main
  • This PR introduces breaking changes (if yes, fill out details below)
  • If this PR changes documentation, I have built and previewed it locally with
    jb build docs
  • No critical issues raised by AI reviewers (/gemini review)

@gemini-code-assist (Contributor)
Summary of Changes

Hello @rchardx, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces crucial enhancements to the FSDP engine, primarily focusing on robust support for CPU offloading. It ensures that distributed tensors (DTensors) that are offloaded to the CPU are correctly handled during the weight synchronization process, preventing potential issues with materialization. Furthermore, the PR optimizes memory usage during distributed weight updates by refactoring the parameter gathering mechanism into a batched approach. Finally, it extends the gradient norm calculation and clipping functionalities to seamlessly operate with gradients that reside on the CPU, thereby improving the overall stability and flexibility of the FSDP implementation.

Highlights

  • CPU Offload for FSDP: The FSDP engine now correctly handles DTensors residing on CPU during weight synchronization, ensuring proper materialization to full tensors.
  • Weight Gathering Refactoring: The weight gathering logic has been refactored to batch parameter processing, significantly improving memory efficiency during distributed updates (see the sketch after this list).
  • Gradient Norm Calculation and Clipping: Gradient norm calculation and clipping functions have been modified to fully support CPU-resident gradients, ensuring operations are applied to the correct device types.
  • Issue Resolution: This pull request directly fixes issue #644 ([BUG] No backend type associated with device type cpu) and resolves the root cause of issue #677 (Fix illegal memory access when compute grad norm for fsdp).
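To make the first two highlights concrete, here is a minimal sketch of batched DTensor materialization (referenced above). The helper name iter_materialized_batches and the byte threshold are assumptions for illustration only; DTensor.full_tensor() is the PyTorch call that all-gathers a sharded tensor into a plain torch.Tensor, and the import path may be torch.distributed._tensor on older PyTorch releases.

```python
import torch
from torch.distributed.tensor import DTensor  # torch.distributed._tensor on older versions


def iter_materialized_batches(named_params, batch_size_bytes: int = 1 << 30):
    """Yield lists of (name, full tensor) pairs, materializing DTensors batch by batch.

    Illustrative sketch: materializing one bounded batch at a time keeps the peak
    memory of the all-gather in check, whether the local shards live on CPU or GPU.
    """
    batch, batch_bytes = [], 0
    for name, param in named_params:
        tensor = param.data
        if isinstance(tensor, DTensor):
            # full_tensor() gathers the shards into a regular torch.Tensor.
            tensor = tensor.full_tensor()
        batch.append((name, tensor))
        batch_bytes += tensor.numel() * tensor.element_size()
        if batch_bytes >= batch_size_bytes:
            yield batch
            batch, batch_bytes = [], 0
    if batch:
        yield batch
```

A caller would broadcast (or otherwise consume) each yielded batch before requesting the next one, so only one batch's worth of fully materialized weights is alive at any time.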
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces important fixes for FSDP CPU offloading. The changes correctly handle CPU-resident DTensors for weight synchronization and refactor weight gathering for better memory efficiency. The gradient norm calculation and clipping functions are also updated to support CPU-resident gradients. My review identified two critical bugs in areal/utils/fsdp/grad.py that will cause runtime errors due to incorrect tensor-to-scalar conversion. I have also noted a performance regression in the gradient norm calculation for non-offloaded gradients and provided suggestions for a fix. The other changes are well-implemented and align with the goals of the pull request.
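As a hypothetical illustration of the "incorrect tensor-to-scalar conversion" remark (not the repository's actual code): Tensor.item() is only valid on single-element tensors, so a stacked vector of per-parameter norms must be reduced before it is converted to a Python float.

```python
import torch

# Three fake per-parameter gradient norms stacked into a 1-D tensor.
per_param_norms = torch.stack(
    [torch.linalg.vector_norm(torch.randn(4)) for _ in range(3)]
)

# per_param_norms.item()  # RuntimeError: only a single-element tensor can become a scalar

total_norm = torch.linalg.vector_norm(per_param_norms)  # reduce to a 0-dim tensor first
total_norm_value = total_norm.item()                    # then convert to a Python float
print(total_norm_value)
```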

@rchardx changed the title from "fix: fix CPU offload for FSDP" to "fix: fix CPU offloading in FSDP grad clipping and weight updates" on Dec 5, 2025
@rchardx added the safe-to-test label (Ready to run unit-tests in a PR.) on Dec 5, 2025
@rchardx added and removed the safe-to-test label (Ready to run unit-tests in a PR.) on Dec 6, 2025
@rchardx deployed to AReaL-unittests with GitHub Actions on December 6, 2025 at 08:31
@fishcrap (Collaborator) left a comment

LGTM!

@rchardx merged commit df6bd8f into main on Dec 8, 2025
9 of 10 checks passed
@rchardx deleted the rchardx/offload branch on December 8, 2025 at 03:36