Fix changes the behavior from overriding kv_connector_extra_config to updating #30856
Conversation
Code Review
This pull request correctly addresses a configuration override issue for the LMCache connector. The change modifies the population of kv_connector_extra_config from a direct assignment to an update, which prevents user-provided settings from being discarded. This is a necessary bug fix that enables users to customize the LMCache connector's behavior, such as enabling the native implementation. The implementation is clean, correct, and consistent with how other backends are configured in the same file. I approve this change.
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. You can ask your reviewers to trigger select CI tests. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Hi @akakakakakaa, the pre-commit checks have failed. Please run:

```shell
uv pip install pre-commit
pre-commit install
pre-commit run --all-files
```

Then, commit the changes and push to your branch.
yewentao256 left a comment
LGTM, thanks for the work!
yewentao256 left a comment
Please merge from main and fix the DCO issue
Force-pushed 45565ea to 55affe0

Documentation preview: https://vllm--30856.org.readthedocs.build/en/30856/
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed 55affe0 to 45565ea

Sorry for my mistake. I'll fix it.
Signed-off-by: akakakakakaa <akstn3023@naver.com>
Force-pushed 45565ea to 8ecbe3a
When using pooling models together with LMCache, vLLM crashes because of a missing guard in the LMCache implementation: it does not check whether sampling_params is None. Pooling models do not use sampling parameters, so accessing them raises a runtime error. (This affects the LMCache implementation; the vLLM native implementation appears to handle this case correctly.)
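A minimal sketch of the failure mode described above. The class and attribute names here are illustrative, not the actual LMCache/vLLM API; the point is only that any code path reading an attribute of sampling_params must first check for None, because pooling requests carry no sampling parameters.

```python
# Hypothetical sketch; Request/SamplingParams are simplified stand-ins.

class SamplingParams:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens

class Request:
    def __init__(self, sampling_params=None):
        # Pooling (embedding) requests carry sampling_params=None.
        self.sampling_params = sampling_params

def wants_kv_cache(request):
    # Unguarded version crashes on pooling requests with AttributeError:
    #   return request.sampling_params.max_tokens > 0
    # Guarded version checks for None first:
    if request.sampling_params is None:
        return True  # pooling request: nothing to sample, still cacheable
    return request.sampling_params.max_tokens > 0
```

With the guard in place, a pooling request (sampling_params=None) is handled instead of raising an AttributeError.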
To work around this, I tried to use vLLM's native LMCache implementation, which already handles this case correctly. However, a configuration issue prevents enabling it: vLLM overrides the user-provided kv_connector_extra_config, discarding the option that selects the native implementation.
The fix therefore changes the behavior from overriding kv_connector_extra_config to updating it, so that user-provided options such as use_native are preserved.
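The override-vs-update distinction can be illustrated with plain Python dicts. This is a simplified sketch, not the actual vLLM code; "backend_default" is a made-up key standing in for whatever defaults the connector sets.

```python
# User-provided kv_connector_extra_config, e.g. selecting the native
# implementation.
user_cfg = {"use_native": True}

# Before the fix: direct assignment replaces the dict, discarding the
# user's options.
overridden = dict(user_cfg)
overridden = {"backend_default": "x"}
assert "use_native" not in overridden

# After the fix: dict.update merges defaults in, preserving user keys.
updated = dict(user_cfg)
updated.update({"backend_default": "x"})
assert updated == {"use_native": True, "backend_default": "x"}
```

Note that update still lets the defaults win for keys present in both dicts; only keys the defaults do not touch, such as use_native, survive the merge.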