VQKV: High-Fidelity and High-Ratio Cache Compression via Vector-Quantization
arXiv:2603.16435v1 Abstract: The growing context length of Large Language Models (LLMs) enlarges the Key-Value (KV) cache, constraining deployment in resource-limited environments. Prior …
Yixuan Wang, Qingyu Shi, Jiayu Zhou, Dianbo Liu, Ziwei He, Zhouhan Lin
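The abstract is truncated, so the details of VQKV itself are not available here; as background, the sketch below illustrates the general idea the title names: compressing KV cache tensors with vector quantization, i.e., storing small codebook indices instead of full-precision vectors. The codebook size (256), sub-vector width (8), and plain k-means training are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of vector-quantizing a KV cache (illustrative only; the
# codebook size, sub-vector chunking, and k-means training are assumptions,
# not the VQKV method from the paper).
import numpy as np

def train_codebook(vectors, num_codes=256, iters=10, seed=0):
    """Learn a codebook with plain k-means over cache sub-vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), num_codes, replace=False)]
    for _ in range(iters):
        # Assign each sub-vector to its nearest code (squared Euclidean).
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # Move each code to the centroid of its assigned sub-vectors.
        for c in range(num_codes):
            members = vectors[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Replace each sub-vector by the index of its nearest codebook entry."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1).astype(np.uint8)  # one byte per sub-vector

def dequantize(indices, codebook):
    """Reconstruct approximate sub-vectors by codebook lookup."""
    return codebook[indices]

# Toy example: fp32 key vectors with head_dim=64, split into 8-dim chunks.
seq_len, head_dim, sub_dim = 1024, 64, 8
keys = np.random.randn(seq_len, head_dim).astype(np.float32)
subs = keys.reshape(-1, sub_dim)                   # (seq_len * 8, 8)
codebook = train_codebook(subs, num_codes=256)
codes = quantize(subs, codebook)                   # 1 byte per 8 fp32 values
recon = dequantize(codes, codebook).reshape(seq_len, head_dim)
ratio = keys.nbytes / (codes.nbytes + codebook.nbytes)
print(f"compression ratio ~{ratio:.1f}x, MSE = {np.mean((keys - recon) ** 2):.4f}")
```

Storing one 8-bit index in place of each 8-value fp32 chunk gives roughly a 16x reduction (plus the small shared codebook); the fidelity/ratio trade-off then hinges on how the codebook is built and how the cache is chunked, which is presumably where the paper's contribution lies.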