[BugFix] Fix SchedulerConfig initialization compatibility with vLLM upstream #4689
Conversation
Signed-off-by: hukongyi <[email protected]>
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request aims to fix compatibility issues with an upstream change in vLLM's `SchedulerConfig`. The changes in the test files correctly adapt to the new required arguments. However, the fix in `vllm_ascend/core/schedule_config.py` introduces a significant issue by hardcoding `max_model_len`. This is a critical, model-dependent parameter, and hardcoding it can lead to incorrect scheduler behavior and bugs that are hard to trace. I've left a critical comment detailing this problem. The rest of the changes, including minor style improvements, look good.
vllm_ascend/core/schedule_config.py
Outdated
| if "max_model_len" not in scheduler_config: | ||
| scheduler_config["max_model_len"] = 8192 |
Hardcoding `max_model_len` to 8192 is dangerous. This value is a critical, model-dependent parameter and should be derived from the actual model's configuration, not assumed. Using a fixed value can lead to subtle bugs that are difficult to debug, such as:

- Incorrectly rejecting valid prompts that are longer than the hardcoded value but shorter than the model's actual `max_model_len`.
- Incorrect scheduler logic, for example in `__post_init__`, where `max_model_len` is used in comparisons and calculations (e.g., for `long_prefill_token_threshold`).

This hardcoded value makes the system brittle and will fail for models with different context lengths. The correct `max_model_len` should be sourced from a reliable configuration object or passed into this function directly, rather than falling back to a magic number.
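A minimal sketch of that suggestion, assuming the caller has access to a `vllm_config` object whose `model_config.max_model_len` holds the model's real context length (the helper name and access path here are illustrative, not a confirmed vllm-ascend API):

```python
def build_scheduler_kwargs(vllm_config, scheduler_config: dict) -> dict:
    # Derive max_model_len from the model configuration instead of a magic number.
    if "max_model_len" not in scheduler_config:
        scheduler_config["max_model_len"] = vllm_config.model_config.max_model_len
    return scheduler_config
```

This keeps the scheduler's limit consistent with whatever context length the loaded model actually supports.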
Fixed. I have updated the `initialize_from_config` signature to accept `max_model_len` and `is_encoder_decoder` as explicit arguments, ensuring the correct model configuration is used.
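Roughly, the updated call shape might look like the sketch below; apart from the two explicit `max_model_len` / `is_encoder_decoder` parameters, the field set and parameter names are assumptions rather than the PR's actual code:

```python
from dataclasses import dataclass


@dataclass
class AscendSchedulerConfig:
    max_model_len: int
    is_encoder_decoder: bool
    enable_chunked_prefill: bool = False

    @classmethod
    def initialize_from_config(
        cls,
        vllm_scheduler_config,
        max_model_len: int,        # now passed in explicitly from the model config
        is_encoder_decoder: bool,  # instead of read back via getattr on InitVar fields
    ) -> "AscendSchedulerConfig":
        return cls(
            max_model_len=max_model_len,
            is_encoder_decoder=is_encoder_decoder,
            enable_chunked_prefill=getattr(
                vllm_scheduler_config, "enable_chunked_prefill", False),
        )
```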
Signed-off-by: hukongyi <[email protected]>
For the scheduler part, we decided to remove the Ascend scheduler (#4623).
What this PR does / why we need it?
This PR addresses compatibility issues with upstream vLLM changes regarding `SchedulerConfig`.

Upstream vLLM PR #29859 changed `is_encoder_decoder` and `max_model_len` in `SchedulerConfig` to be `InitVar`s without default values. This caused two issues in vllm-ascend:

- Instantiating `SchedulerConfig` in `tests/ut/core/test_schedule_config.py` failed due to missing required arguments.
- The `initialize_from_config` method in `vllm_ascend/core/schedule_config.py` used `getattr` on `InitVar` fields (which are not stored as instance attributes), leading to `AttributeError` crashes.

Changes:

- Updated `tests/ut/core/test_schedule_config.py` to explicitly pass `is_encoder_decoder=False` during initialization.
- Updated `AscendSchedulerConfig.initialize_from_config` to safely handle `InitVar` fields. It now catches `AttributeError` when accessing fields not stored on the instance and provides default values for required arguments (`is_encoder_decoder=False`, `max_model_len=8192`) when reconstructing the config (see the sketch below).
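A rough sketch of that fallback behaviour, assuming a simple loop over known field names (the helper name and field list are illustrative, not the PR's literal code):

```python
REQUIRED_DEFAULTS = {"is_encoder_decoder": False, "max_model_len": 8192}


def collect_scheduler_kwargs(scheduler_config, field_names) -> dict:
    kwargs = {}
    for name in field_names:
        try:
            # InitVar fields are not stored as instance attributes after __init__,
            # so this getattr can raise AttributeError for them.
            kwargs[name] = getattr(scheduler_config, name)
        except AttributeError:
            continue
    # Fall back to defaults for required constructor arguments that were not recoverable.
    for name, default in REQUIRED_DEFAULTS.items():
        kwargs.setdefault(name, default)
    return kwargs
```

Using `setdefault` means the defaults only apply when a value could not be read from the existing config, so recovered values are never overwritten.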
Does this PR introduce any user-facing change?

No. This fixes internal compatibility issues to ensure tests and initialization work correctly with the latest vLLM version.