
Conversation


@hukongyi commented Dec 4, 2025

What this PR does / why we need it?

This PR addresses compatibility issues with upstream vLLM changes regarding SchedulerConfig.

Upstream vLLM PR #29859 changed is_encoder_decoder and max_model_len in SchedulerConfig to be InitVars without default values. This caused two issues in vllm-ascend:

  1. Unit test failure: direct instantiation of SchedulerConfig in tests/ut/core/test_schedule_config.py failed because the newly required arguments were missing.
  2. Runtime AttributeError: the initialize_from_config method in vllm_ascend/core/schedule_config.py called getattr on InitVar fields, which are not stored as instance attributes, and crashed with an AttributeError.

Changes:

  1. UT fix: updated tests/ut/core/test_schedule_config.py to pass is_encoder_decoder=False explicitly during initialization.
  2. Logic fix: modified AscendSchedulerConfig.initialize_from_config to handle InitVar fields safely. It now catches the AttributeError raised when accessing fields that are not stored on the instance, and supplies defaults for the required arguments (is_encoder_decoder=False, max_model_len=8192) when reconstructing the config; see the sketch after this list.
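
For context, here is a minimal, self-contained sketch of the guarded-getattr pattern described above, using a toy stand-in dataclass rather than the real vLLM SchedulerConfig; the field names, defaults, and helper are illustrative assumptions, not the actual vllm-ascend code:

from dataclasses import InitVar, dataclass


@dataclass
class DemoSchedulerConfig:
    """Toy stand-in for the upstream SchedulerConfig (illustrative only)."""

    is_encoder_decoder: InitVar[bool]  # required InitVar, consumed by __post_init__
    max_model_len: InitVar[int]        # required InitVar, consumed by __post_init__
    max_num_seqs: int = 256

    def __post_init__(self, is_encoder_decoder: bool, max_model_len: int) -> None:
        # Only derived state is kept; the InitVar values are not stored as attributes.
        self.chunked_prefill_enabled = not is_encoder_decoder and max_model_len > 4096


def rebuild_config(src: DemoSchedulerConfig, **overrides) -> DemoSchedulerConfig:
    """Copy stored attributes, skipping InitVar fields that raise AttributeError,
    and supply defaults for the required InitVars when reconstructing."""
    kwargs = {}
    for name in ("max_num_seqs", "is_encoder_decoder", "max_model_len"):
        try:
            kwargs[name] = getattr(src, name)
        except AttributeError:
            pass  # InitVar-only field: never stored on the instance
    kwargs.setdefault("is_encoder_decoder", False)
    kwargs.setdefault("max_model_len", 8192)
    kwargs.update(overrides)
    return DemoSchedulerConfig(**kwargs)


cfg = DemoSchedulerConfig(is_encoder_decoder=False, max_model_len=4096)
print(rebuild_config(cfg, max_num_seqs=128).max_num_seqs)  # -> 128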

Does this PR introduce any user-facing change?

No. This fixes internal compatibility issues to ensure tests and initialization work correctly with the latest vLLM version.


github-actions bot commented Dec 4, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot left a comment


Code Review

This pull request aims to fix compatibility issues with an upstream change in vLLM's SchedulerConfig. The changes in the test files correctly adapt to the new required arguments. However, the fix in vllm_ascend/core/schedule_config.py introduces a significant issue by hardcoding max_model_len. This is a critical model-dependent parameter, and hardcoding it can lead to incorrect scheduler behavior and bugs that are hard to trace. I've left a critical comment detailing this problem. The rest of the changes, including minor style improvements, look good.

Comment on lines 56 to 57
if "max_model_len" not in scheduler_config:
scheduler_config["max_model_len"] = 8192

critical

Hardcoding max_model_len to 8192 is dangerous. This value is a critical, model-dependent parameter and should be derived from the actual model's configuration, not assumed. Using a fixed value can lead to subtle bugs that are difficult to debug, such as:

  • Incorrectly rejecting valid prompts that are longer than the hardcoded value but shorter than the model's actual max_model_len.
  • Incorrect scheduler logic, for example in __post_init__ where max_model_len is used in comparisons and calculations (e.g., for long_prefill_token_threshold).

This hardcoded value makes the system brittle and will fail for models with different context lengths. The correct max_model_len should be sourced from a reliable configuration object or passed into this function directly, rather than falling back to a magic number.

@hukongyi (Author)

Fixed. I have updated the initialize_from_config signature to accept max_model_len and is_encoder_decoder as explicit arguments, ensuring the correct model config is used.
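
For illustration, a rough, self-contained sketch of the explicit-argument approach mentioned in this reply; the signature, the SimpleNamespace stand-ins, and the attribute names are assumptions, not the merged vllm-ascend code:

from types import SimpleNamespace


def initialize_from_config_sketch(vllm_scheduler_config, ascend_overrides,
                                  max_model_len: int, is_encoder_decoder: bool) -> dict:
    """Rebuild the scheduler kwargs; the two former InitVars are passed in
    explicitly by the caller instead of falling back to magic numbers."""
    kwargs = dict(vars(vllm_scheduler_config))  # only attributes stored on the instance
    kwargs.update(ascend_overrides)
    kwargs["max_model_len"] = max_model_len
    kwargs["is_encoder_decoder"] = is_encoder_decoder
    return kwargs


# Call site: source the values from the model config rather than hardcoding them.
model_config = SimpleNamespace(max_model_len=32768, is_encoder_decoder=False)
scheduler_config = SimpleNamespace(max_num_seqs=256)
print(initialize_from_config_sketch(scheduler_config,
                                    {"enable_chunked_prefill": True},
                                    max_model_len=model_config.max_model_len,
                                    is_encoder_decoder=model_config.is_encoder_decoder))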

@wangxiyuan (Collaborator)

For the scheduler part, we have decided to remove the Ascend scheduler (#4623).
Can you fix only the other parts? Thanks.
