Conversation

@hmellor (Member) commented Dec 2, 2025

The default values have been moved to a factory lambda used by VllmConfig, so that VllmConfig can still be default-constructed without error.

Replaces the hotfix in #29771
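
For context, here's a minimal sketch of the pattern, using simplified, illustrative field names and defaults (not the actual vLLM code; the real SchedulerConfig is pydantic-validated and has many more fields):

from dataclasses import InitVar, dataclass, field


@dataclass
class SchedulerConfig:
    # The InitVar no longer carries a default, so callers must supply it.
    # It is consumed in __post_init__ and never stored as an attribute.
    max_model_len: InitVar[int]
    max_num_seqs: int = 128

    def __post_init__(self, max_model_len: int) -> None:
        # Derive stored attributes from the InitVar here (illustrative only).
        self.max_num_batched_tokens = max(max_model_len, 2048)


@dataclass
class VllmConfig:
    # The construction defaults now live in a factory, so VllmConfig()
    # still builds a SchedulerConfig without error.
    scheduler_config: SchedulerConfig = field(
        default_factory=lambda: SchedulerConfig(max_model_len=8192)
    )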

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request refactors SchedulerConfig by removing default values from InitVar fields, moving them to a factory lambda used by VllmConfig. This change is intended to prevent the default values from being stored while allowing VllmConfig to be default-constructed. The changes are logical and well-contained. I have one suggestion to improve the code style by replacing the lambda assignment with a standard function definition, following PEP 8 guidelines for better readability and maintainability.
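
Concretely, the suggestion is to go from the first form below to the second (the factory name here is hypothetical, reusing the simplified SchedulerConfig from the sketch in the PR description):

# Lambda bound to a name, which PEP 8 advises against:
get_default_scheduler_config = lambda: SchedulerConfig(max_model_len=8192)

# Named function: gets a real __name__, shows up cleanly in tracebacks,
# and can carry a docstring.
def get_default_scheduler_config() -> SchedulerConfig:
    """Build a SchedulerConfig with the default construction values."""
    return SchedulerConfig(max_model_len=8192)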

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

@DarkLight1337 (Member) commented

Does moving the defaults into __post_init__ not work?

@hmellor (Member, Author) commented Dec 2, 2025

Does moving the defaults into __post_init__ not work?

If we remove the default values from the InitVar definition, they become required positional arguments. I've moved the logic so that the default construction values still belong to SchedulerConfig.
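
To illustrate with a stdlib-dataclass sketch (the class and derived attribute are made up; the real, pydantic-validated SchedulerConfig raises ValidationError rather than TypeError, as in the script further down):

from dataclasses import InitVar, dataclass


@dataclass
class Example:
    # No default on the InitVar, so it is a required __init__ argument.
    max_model_len: InitVar[int]

    def __post_init__(self, max_model_len: int) -> None:
        # The InitVar is consumed here and never stored on the instance.
        self.max_num_batched_tokens = max_model_len


ok = Example(max_model_len=8192)
print(hasattr(ok, "max_model_len"))  # False: the InitVar is not an attribute

try:
    Example()
except TypeError as e:
    print(e)  # missing 1 required positional argument: 'max_model_len'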

@DarkLight1337 (Member) left a comment

LGTM then, thanks

@github-project-automation github-project-automation bot moved this to In review in NVIDIA Dec 2, 2025
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) December 2, 2025 09:51
@DarkLight1337 DarkLight1337 added this to the v0.12.0 milestone Dec 2, 2025
@hmellor (Member, Author) commented Dec 2, 2025

Here's a little script you can run to validate the expected behaviour:

from pydantic import ValidationError
from vllm.config import SchedulerConfig

try:
    scheduler_config = SchedulerConfig()
except ValidationError as e:
    # Positional InitVars were missing
    print(f"ValidationError: {e}")

try:
    scheduler_config = SchedulerConfig.default_factory()
    scheduler_config.max_model_len
except AttributeError as e:
    # InitVar does not become an attribute
    print(f"AttributeError: {e}")

Both prints should execute if the SchedulerConfig behaves as expected.

@DarkLight1337 DarkLight1337 added the ready ONLY add when PR is ready to merge/full CI is needed label Dec 2, 2025
Signed-off-by: Harry Mellor <[email protected]>
@hmellor (Member, Author) commented Dec 2, 2025

I turned that little script into a test.
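
For anyone curious, a pytest version of that check could look roughly like this (illustrative sketch; the test actually added in the PR may use a different name and location):

import pytest
from pydantic import ValidationError

from vllm.config import SchedulerConfig


def test_scheduler_config_init_vars():
    # Bare construction must fail: the InitVars have no defaults.
    with pytest.raises(ValidationError):
        SchedulerConfig()

    # The default factory supplies the construction values, and the InitVar
    # is consumed in __post_init__ rather than stored as an attribute.
    config = SchedulerConfig.default_factory()
    with pytest.raises(AttributeError):
        _ = config.max_model_len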

@DarkLight1337 DarkLight1337 merged commit 951445a into vllm-project:main Dec 2, 2025
56 checks passed
@github-project-automation github-project-automation bot moved this from In review to Done in NVIDIA Dec 2, 2025
@hmellor hmellor deleted the fix-scheduler-config branch December 2, 2025 15:22
khluu pushed a commit that referenced this pull request Dec 2, 2025
Labels: kv-connector, nvidia, performance, ready, speculative-decoding, tpu, v1

Projects: NVIDIA (Status: Done)
