Conversation

@DarkLight1337 DarkLight1337 commented Dec 2, 2025

Purpose

Now that we have standardized the Tokenizer interface, we can replace encode_tokens and decode_tokens with tokenizer.encode and tokenizer.decode, respectively.
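
For illustration, here is a minimal sketch of the migration this PR describes. The call shape of the deprecated helpers is assumed from their names and may not match the actual vLLM utilities exactly; a Hugging Face tokenizer is used so the snippet is self-contained.

```python
# Minimal sketch of the migration; helper signatures shown in comments are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Before (deprecated helpers, assumed call shape):
#   token_ids = encode_tokens(tokenizer, "Hello, world!", add_special_tokens=True)
#   text = decode_tokens(tokenizer, token_ids, skip_special_tokens=True)

# After: call the standardized Tokenizer interface directly.
token_ids = tokenizer.encode("Hello, world!", add_special_tokens=True)
text = tokenizer.decode(token_ids, skip_special_tokens=True)
print(token_ids, text)
```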

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as a before/after results comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@DarkLight1337 DarkLight1337 added the ready (ONLY add when PR is ready to merge/full CI is needed) label Dec 2, 2025
@mergify mergify bot added the frontend, llama (Related to Llama models), multi-modality (Related to multi-modality (#4194)), and qwen (Related to Qwen models) labels Dec 2, 2025

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request is a good refactoring effort to standardize the tokenizer interface by replacing encode_tokens and decode_tokens with direct calls to tokenizer.encode and tokenizer.decode. The changes are mostly correct and the deprecation of the old functions is appropriate. However, I've identified a critical bug in one of the test files where an incorrect number of arguments is passed to tokenizer.encode, which will cause the test to fail.
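
To make the flagged issue concrete, below is a hypothetical sketch of the kind of argument-count mistake a mechanical replacement can introduce. It is not the actual code flagged by the bot, and the old helper's signature is an assumption.

```python
# Hypothetical sketch only; not the code flagged in the review above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
prompt = "The capital of France is"

# The deprecated helper took the tokenizer as its first argument
# (assumed shape): encode_tokens(tokenizer, prompt, add_special_tokens=False).
# A mechanical rewrite that forgets to drop that leading argument, e.g.
#   tokenizer.encode(tokenizer, prompt, add_special_tokens=False)
# passes one positional argument too many and the call fails.

# Correct rewrite: the tokenizer is the receiver, not an argument.
prompt_ids = tokenizer.encode(prompt, add_special_tokens=False)
print(prompt_ids)
```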

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Signed-off-by: DarkLight1337 <[email protected]>
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) December 2, 2025 09:10
@DarkLight1337 DarkLight1337 added this to the v0.12.0 milestone Dec 2, 2025
@DarkLight1337 DarkLight1337 merged commit 68ffbca into vllm-project:main Dec 2, 2025
56 checks passed
@DarkLight1337 DarkLight1337 deleted the deprecate-encode-decode branch December 2, 2025 12:30
khluu pushed a commit that referenced this pull request Dec 2, 2025
Signed-off-by: DarkLight1337 <[email protected]>
(cherry picked from commit 68ffbca)