
Conversation


@ivanium ivanium commented Dec 6, 2025

Purpose

This is a continuation of the work in PR #23624 to support the hybrid KV cache manager together with a KV cache connector.

Design doc with details drafted by @KuntaiDu: link

In short, the current hybrid KV cache manager allocates KV cache for all tokens of sliding window layers, just like full attention layers, and then, in the next scheduling step, frees the tokens that are no longer needed (those outside the sliding window) and turns them into prefix cache in GPU memory. This PR instead allocates KV cache only for the tokens inside the sliding window for sliding window layers (a sketch of the block-count arithmetic follows the list below). This addresses two issues:

  1. When used with an external KV cache layer (e.g., LMCache), over-allocating all prefix tokens for sliding window layers incurs high memory pressure and can fail when the remaining GPU memory is insufficient;
  2. When used with P/D disaggregation connectors, the allocate-then-free pattern can cause data contention: the connector may still be copying KV cache blocks for one request in the background while the manager frees and reuses them for another request.
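
To make the change concrete, below is a minimal sketch of the per-layer block-count arithmetic. The helper names, the 16-token block size, and the 1,024-token window are illustrative assumptions, not the actual manager code:

```python
# Illustrative sketch only; not the actual SingleTypeKVCacheManager code.
# Assumptions: block_size tokens per KV block, window_size tokens in the
# sliding window, and a prompt of num_prompt_tokens tokens.

def cdiv(a: int, b: int) -> int:
    """Ceiling division."""
    return -(-a // b)

def blocks_full_attention(num_prompt_tokens: int, block_size: int) -> int:
    # Full-attention layers need KV blocks for every prompt token.
    return cdiv(num_prompt_tokens, block_size)

def blocks_sliding_window(num_prompt_tokens: int, window_size: int, block_size: int) -> int:
    # With this PR's approach, a sliding-window layer only allocates blocks for
    # tokens inside the window; the skipped leading prefix can be represented
    # by null blocks instead of real allocations.
    return cdiv(min(num_prompt_tokens, window_size), block_size)

# Example with assumed numbers: a 70,000-token prompt, 16-token blocks,
# and a 1,024-token sliding window.
print(blocks_full_attention(70_000, 16))         # 4375
print(blocks_sliding_window(70_000, 1_024, 16))  # 64
```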

This PR currently supports only the LMCache connector. Support for other connectors will be added in follow-up PRs.

cc @KuntaiDu @heheda12345

Test Plan

The test script is a modified version of the one in PR #25712.

The script should be run with the LMCache-side support from LMCache/LMCache#1436.

Caution

Please apply the following patch to LMCache if you get import errors for `cdiv`:

Patch

```diff
diff --git a/lmcache/integration/vllm/vllm_v1_adapter.py b/lmcache/integration/vllm/vllm_v1_adapter.py
index a849097..4db64df 100644
--- a/lmcache/integration/vllm/vllm_v1_adapter.py
+++ b/lmcache/integration/vllm/vllm_v1_adapter.py
@@ -18,7 +18,10 @@ from vllm.distributed.parallel_state import (
     get_tp_group,
 )
 from vllm.sampling_params import SamplingParams
-from vllm.utils import cdiv
+try:
+    from vllm.utils import cdiv
+except ImportError:
+    from vllm.utils.math_utils import cdiv
```

To run this script on an H100, save the following code as `test_connector_w_hybrid_kv_allocator.py` and run `python test_connector_w_hybrid_kv_allocator.py`.

`test_connector_w_hybrid_kv_allocator.py`

```python
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
import os

# Set token chunk size to 256
os.environ["LMCACHE_CHUNK_SIZE"] = "256"
# Enable CPU memory backend
os.environ["LMCACHE_LOCAL_CPU"] = "True"
# Set CPU memory limit to 20GB
os.environ["LMCACHE_MAX_LOCAL_CPU_SIZE"] = "20.0"

os.environ["VLLM_ENABLE_V1_MULTIPROCESSING"] = "0"
os.environ["LMCACHE_USE_LAYERWISE"] = "True"

from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

# Configure KV cache transfer to use LMCache
ktc = KVTransferConfig(
    kv_connector="LMCacheConnectorV1",
    kv_role="kv_both",
)

# Initialize LLM with LMCache configuration
# Adjust gpu_memory_utilization based on your GPU memory
# Parameters below are for 80GB GPUs
llm = LLM(
    model="google/gemma-3-4b-it",
    kv_transfer_config=ktc,
    max_model_len=75000,
    gpu_memory_utilization=0.28,
    # gpu_memory_utilization=0.4,
    # gpu_memory_utilization=0.8,
    max_num_seqs=16,
    enforce_eager=True,
    disable_hybrid_kv_cache_manager=False,
)

# Define sampling parameters
sampling_params = SamplingParams(temperature=0, top_p=0.95, max_tokens=10)

# Run inference
print("Generate request 1. This will store long prefix in LMCache.")
outputs = llm.generate("hi" * 70000 + "\nhow are you?", sampling_params)
generated_text = outputs[0].outputs[0].text
print(f"Generated text: {generated_text!r}")

# This requires loading KV cache and will succeed
print("Generate request 2. This will load prefix from LMCache and succeed.")
outputs = llm.generate("hi" * 10000 + "\nTell me a story.", sampling_params)
generated_text = outputs[0].outputs[0].text
print(f"Generated text: {generated_text!r}")

# flush out prefix cache in GPU
print("Generate request 3. This will evict prefix cache in GPU.")
outputs = llm.generate("1" + "hi" * 70000 + "\nhow are you?", sampling_params)
generated_text = outputs[0].outputs[0].text
print(f"Generated text: {generated_text!r}")

# This requires loading KV cache,
# but this request cannot be executed as vLLM cannot allocate for the long
# prefix stored by LMCache
print("Generate request 4. This will attempt to load long prefix from LMCache.")
outputs = llm.generate("hi" * 70000 + "\nTell me a story.", sampling_params)
generated_text = outputs[0].outputs[0].text
print(f"Generated text: {generated_text!r}")

print("All requests finished.")
```

Test Result

Previously, we could not allocate KV cache for the 3rd request, which needs to allocate a long prefix and load the external KV cache even for sliding window layers. With this PR, that request allocates only the KV cache needed for the sliding window layers, so it can be scheduled and finishes with correct results.

Detailed output

```
Generate request 1. This will store long prefix in LMCache.
Adding requests: 100%|██████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 16.21it/s]
Processed prompts:   0%| | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
[2025-12-05 16:50:01,689] LMCache INFO: Reqid: 0, Total tokens 70006, LMCache hit tokens: 0, need to load: 0 (vllm_v1_adapter.py:1262:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:01,713] LMCache INFO: Post-initializing LMCacheEngine (cache_engine.py:176:lmcache.v1.cache_engine)
[2025-12-05 16:50:01,748] LMCache INFO: Storing KV cache for 16384 out of 16384 tokens (skip_leading_tokens=0) for request 0 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:01,754] LMCache INFO: Lazily initializing GPU buffer. (gpu_connector.py:1075:lmcache.v1.gpu_connector)
[2025-12-05 16:50:01,754] LMCache INFO: Lazily initializing GPU buffer (max tokens=355120). (gpu_connector.py:1098:lmcache.v1.gpu_connector)
[2025-12-05 16:50:03,694] LMCache INFO: Storing KV cache for 16384 out of 32768 tokens (skip_leading_tokens=16384) for request 0 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:07,326] LMCache INFO: Storing KV cache for 16384 out of 49152 tokens (skip_leading_tokens=32768) for request 0 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:12,649] LMCache INFO: Storing KV cache for 16384 out of 65536 tokens (skip_leading_tokens=49152) for request 0 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:19,642] LMCache INFO: Storing KV cache for 4470 out of 70006 tokens (skip_leading_tokens=65536) for request 0 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[rank0]:W1205 16:50:21.852000 3690043 .venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py:1358] [0/8] torch._dynamo hit config.recompile_limit (8)
[rank0]:W1205 16:50:21.852000 3690043 .venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py:1358] [0/8] function: 'forward_static' (/data/yifanqiao/code/vllm/vllm/model_executor/layers/layernorm.py:274)
[rank0]:W1205 16:50:21.852000 3690043 .venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py:1358] [0/8] last reason: 0/7: expected type of 'residual' to be a tensor type, ' but found <class 'NoneType'>
[rank0]:W1205 16:50:21.852000 3690043 .venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py:1358] [0/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[rank0]:W1205 16:50:21.852000 3690043 .venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py:1358] [0/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html
Processed prompts: 100%|█████████████████████████| 1/1 [00:20<00:00, 20.59s/it, est. speed input: 3400.30 toks/s, output: 0.49 toks/s]
Generated text: '\nI am doing well, thank you for asking'
Generate request 2. This will load prefix from LMCache and succeed.
Adding requests: 100%|█████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 104.00it/s]
Processed prompts:   0%| | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
[2025-12-05 16:50:22,290] LMCache INFO: Reqid: 1, Total tokens 10007, LMCache hit tokens: 9984, need to load: 9984 (vllm_v1_adapter.py:1262:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:22,361] LMCache INFO: Retrieved 9984 out of 9984 out of total 9984 tokens (cache_engine.py:645:lmcache.v1.cache_engine)
[2025-12-05 16:50:22,361] LMCache INFO: Retrieved 9984 tokens (vllm_v1_adapter.py:978:lmcache.integration.vllm.vllm_v1_adapter)
Processed prompts: 100%|███████████████████████| 1/1 [00:00<00:00, 3.21it/s, est. speed input: 32128.69 toks/s, output: 32.11 toks/s]
Generated text: "\nOkay, here's a story for you"
Generate request 3. This will evict prefix cache in GPU.
Adding requests: 100%|██████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15.21it/s]
Processed prompts:   0%| | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
[2025-12-05 16:50:22,665] LMCache INFO: Reqid: 2, Total tokens 70007, LMCache hit tokens: 0, need to load: 0 (vllm_v1_adapter.py:1262:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:22,707] LMCache INFO: Storing KV cache for 16384 out of 16384 tokens (skip_leading_tokens=0) for request 2 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:24,647] LMCache INFO: Storing KV cache for 16384 out of 32768 tokens (skip_leading_tokens=16384) for request 2 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:28,280] LMCache INFO: Storing KV cache for 16384 out of 49152 tokens (skip_leading_tokens=32768) for request 2 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:33,595] LMCache INFO: Storing KV cache for 16384 out of 65536 tokens (skip_leading_tokens=49152) for request 2 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:40,588] LMCache INFO: Storing KV cache for 4471 out of 70007 tokens (skip_leading_tokens=65536) for request 2 (vllm_v1_adapter.py:1059:lmcache.integration.vllm.vllm_v1_adapter)
Processed prompts: 100%|█████████████████████████| 1/1 [00:20<00:00, 20.54s/it, est. speed input: 3408.18 toks/s, output: 0.49 toks/s]
Generated text: '\n\nI am doing well, thank you for asking'
Generate request 4. This will attempt to load long prefix from LMCache.
Adding requests: 100%|██████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 14.73it/s]
Processed prompts:   0%| | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
[2025-12-05 16:50:43,298] LMCache INFO: Reqid: 3, Total tokens 70007, LMCache hit tokens: 69888, need to load: 69888 (vllm_v1_adapter.py:1262:lmcache.integration.vllm.vllm_v1_adapter)
[2025-12-05 16:50:43,530] LMCache INFO: Retrieved 69888 out of 69888 out of total 69888 tokens (cache_engine.py:645:lmcache.v1.cache_engine)
[2025-12-05 16:50:43,530] LMCache INFO: Retrieved 69888 tokens (vllm_v1_adapter.py:978:lmcache.integration.vllm.vllm_v1_adapter)
Processed prompts: 100%|██████████████████████| 1/1 [00:00<00:00, 1.47it/s, est. speed input: 102891.47 toks/s, output: 14.70 toks/s]
Generated text: '\nOkay, here’s a story for you'
All requests finished.
```


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.
@mergify mergify bot added the v1, tpu, and kv-connector labels on Dec 6, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for using the hybrid KV cache allocator with a KV cache connector, which is a significant enhancement for models with sliding window attention. The goal is to reduce memory pressure and prevent data contention by allocating KV cache blocks more precisely. The changes are extensive, modifying the core allocation logic in SingleTypeKVCacheManager and propagating these changes up to the KVCacheCoordinator and Scheduler. While the overall approach is sound, the implementation contains several temporary workarounds and comments marked as "REMOVE BEFORE MERGE", which are critical to address. I've identified issues in the KV connector factory, the LMCache connector implementation, and potential bugs or data correctness concerns in single_type_kv_cache_manager.py and block_pool.py. These must be resolved to ensure the stability and correctness of the new functionality.

Comment on lines 59 to 65
```python
## REMOVE BEFORE MERGE (YIFAN): Revert this warning back to raising
# an ValueError.
logger.warning(
    "Connector %s does not support HMA but HMA is enabled. Please set "
    "--disable-hybrid-kv-cache-manager to disable HMA.",
    connector_cls.__name__,
)
```

critical

This change from raising a ValueError to a logger.warning is marked with a "REMOVE BEFORE MERGE" comment. Using a connector that does not support Hybrid Memory Allocation (HMA) when HMA is enabled can lead to incorrect behavior or hard-to-debug runtime errors. It is much safer to fail fast with an exception. This change should be reverted to raise ValueError before merging to prevent potential issues in production.

```python
raise ValueError(
    f"Connector {connector_cls.__name__} does not support HMA but "
    f"HMA is enabled. Please set `--disable-hybrid-kv-cache-manager`."
)
```
Comment on lines 37 to 39
```python
## REMOVE BEFORE MERGE (YIFAN): this is temporary workaround to work with
# LMCache. Remove this once having LMCache-side support for new interfaces.
vllm_config.kv_cache_config = kv_cache_config
```

critical

This block contains a "REMOVE BEFORE MERGE" comment, indicating a temporary workaround. Directly modifying vllm_config by assigning to kv_cache_config is a side effect that can lead to unexpected behavior elsewhere in the system. This workaround should be removed, and a proper solution that avoids mutating the config object should be implemented as noted in the comment.
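
One way to avoid the mutation is sketched below with hypothetical names (the stub classes and the `register_kv_cache_config` hook are assumptions for illustration, not existing vLLM or LMCache APIs): hand the KV cache config to the connector explicitly instead of writing it onto `vllm_config`.

```python
# Hypothetical sketch only; class and method names are assumptions.
from dataclasses import dataclass


@dataclass
class KVCacheConfigStub:
    """Stand-in for vLLM's KVCacheConfig in this illustration."""
    num_blocks: int


class ConnectorStub:
    def __init__(self) -> None:
        self._kv_cache_config: KVCacheConfigStub | None = None

    def register_kv_cache_config(self, kv_cache_config: KVCacheConfigStub) -> None:
        # The component that owns the KV cache config passes it in explicitly;
        # the shared vllm_config object is never mutated as a side effect.
        self._kv_cache_config = kv_cache_config


# Usage: instead of `vllm_config.kv_cache_config = kv_cache_config`, the caller
# would do something like:
connector = ConnectorStub()
connector.register_kv_cache_config(KVCacheConfigStub(num_blocks=1024))
```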

Comment on lines +224 to 334
```python
## REMOVE BEFORE MERGE (YIFAN): this is temporary workaround to work with
# LMCache. Remove this once having LMCache-side support for new interfaces.
def request_finished_all_groups(
    self,
    request: "Request",
    block_ids: tuple[list[int], ...],
) -> tuple[bool, dict[str, Any] | None]:
    # NOTE: LMCache overloads request_finished so `block_ids` here can be
    # either list[int] or tuple[list[int], ...]. This could be changed in
    # the future to separate these two methods.
    return self._lmcache_engine.request_finished(request, block_ids)
```

critical

The request_finished_all_groups method is marked as a temporary workaround with a "REMOVE BEFORE MERGE" comment. It appears to be a shim for a new interface required by the hybrid allocator. This temporary implementation should be replaced with a proper solution, and the dependency on this fix in LMCache should be resolved before this pull request is merged.

Comment on lines +294 to +288
## TODO(Yifan): here token_ids may be over-estimated for
## sliding window layers

high

The TODO comment indicates that token_ids might be over-estimated for sliding window layers. This could put incorrect data into BlockStored events, which is problematic for external systems that consume these events and rely on exact token IDs for correctness. This should be addressed to ensure data integrity for event consumers.
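
For illustration only, one possible shape of such a trimming step is sketched below; the function name and the idea of slicing to the trailing window are assumptions about a potential fix, not the actual vLLM event code:

```python
# Hypothetical sketch; not the actual vLLM BlockStored event code.
def trim_token_ids_to_window(token_ids: list[int], window_size: int) -> list[int]:
    """Keep only the trailing tokens whose KV is still live in a sliding-window layer."""
    if len(token_ids) <= window_size:
        return token_ids
    return token_ids[-window_size:]


# Example: with a 4-token window, only the last 4 token IDs would be reported.
print(trim_token_ids_to_window([101, 102, 103, 104, 105, 106], 4))  # [103, 104, 105, 106]
```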


mergify bot commented Dec 6, 2025

Hi @ivanium, the pre-commit checks have failed. Please run:

```
uv pip install pre-commit
pre-commit install
pre-commit run --all-files
```

Then, commit the changes and push to your branch.

For future commits, pre-commit will run automatically on changed files before each commit.

@ivanium ivanium force-pushed the feat/partial_ext_token_hit branch from 223fb4d to fa53140 Compare December 6, 2025 06:03

KuntaiDu commented Dec 7, 2025

Good work! In terms of landing this PR, @heheda12345 previously suggested that I split my work into smaller PRs, and I would prefer the same here.

Example:
PR 1: don't change the allocation logic at all; simply introduce `num_connector_tokens` into the allocation API suite and update the functions accordingly.
PR 2: build abstractions (for example, `get_num_skipped_tokens`; see the sketch after this list).
PR 3: make the estimation of the number of blocks accurate.
PR 4: change the allocation logic.
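
To make the abstraction in PR 2 concrete, here is a rough sketch of what a `get_num_skipped_tokens` hook could look like; the class names and signature are assumptions, not existing vLLM interfaces:

```python
# Rough sketch only; class names and signatures are assumptions.
class FullAttentionManagerStub:
    def get_num_skipped_tokens(self, num_total_tokens: int) -> int:
        # Full-attention layers keep KV for every token, so nothing is skipped.
        return 0


class SlidingWindowManagerStub:
    def __init__(self, sliding_window: int) -> None:
        self.sliding_window = sliding_window

    def get_num_skipped_tokens(self, num_total_tokens: int) -> int:
        # Tokens that have slid out of the window no longer need KV blocks;
        # their slots can be covered by null blocks instead of real allocations.
        return max(0, num_total_tokens - self.sliding_window)


# Example usage with assumed numbers: a 1,024-token window and a 70,000-token request.
print(SlidingWindowManagerStub(1024).get_num_skipped_tokens(70_000))  # 68976
```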

@ivanium ivanium force-pushed the feat/partial_ext_token_hit branch from a5ce0c3 to ab8e7d5 Compare December 23, 2025 00:16
@ivanium ivanium force-pushed the feat/partial_ext_token_hit branch from ab8e7d5 to 946776b Compare December 23, 2025 00:17
