HBVLA: Pushing 1-Bit Post-Training Quantization for Vision-Language-Action Models
Xin Yan, Zhenglin Wan, Feiyang Ye, Xingrui Yu, Hangyu Du, Yang You, Ivor Tsang

arXiv:2602.13710v1 Announce Type: new

Abstract: Vision-Language-Action (VLA) models enable instruction-following embodied control, but their large compute and memory footprints hinder deployment on resource-constrained robots and …