
QuantVLA: Scale-Calibrated Post-Training Quantization for Vision-Language-Action Models

arXiv:2602.20309v1 Announce Type: new Abstract: Vision-language-action (VLA) models unify perception, language, and control for embodied agents but face significant challenges in practical deployment due to rapidly increasing compute and memory demands, especially as models scale to longer horizons and larger backbones. To address these bottlenecks, we introduce QuantVLA, a training-free post-training quantization (PTQ) framework that, to our knowledge, is the first PTQ approach for VLA systems and the first to successfully quantize a diffusion transformer (DiT) action head. QuantVLA incorporates three scale-calibrated components: (1) a selective quantization layout that integerizes all linear layers in both the language backbone and the DiT while keeping attention projections in floating point to preserve the original operator schedule; (2) attention temperature matching, a lightweight per-head scaling mechanism that stabilizes attention logits and is folded into the dequantization scales at inference; and (3) output head balancing, a per-layer residual interface calibration that mitigates post-projection energy drift. The framework requires no additional training, uses only a small unlabeled calibration buffer, and supports integer kernels for low-bit weights and activations while leaving the architecture unchanged. Across representative VLA models on LIBERO, QuantVLA exceeds the task success rates of full-precision baselines, achieves about 70% relative memory savings on the quantized components, and delivers a 1.22x speedup in end-to-end inference latency, providing a practical pathway toward scalable low-bit embodied intelligence under strict compute, memory, and power constraints.
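The abstract's second component, attention temperature matching, is described as a lightweight per-head scaling that stabilizes attention logits and is folded into the dequantization scales at inference. A minimal NumPy sketch of what such a mechanism could look like follows; the variance-matching rule and function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def match_attention_temperature(fp_logits, q_logits, eps=1e-8):
    """Compute one temperature per attention head that rescales the
    quantized model's logit spread back toward the full-precision
    spread (a hypothetical calibration rule). Inputs have shape
    (heads, tokens, tokens), gathered from a small calibration buffer."""
    fp_std = fp_logits.std(axis=(1, 2))
    q_std = q_logits.std(axis=(1, 2)) + eps
    return fp_std / q_std  # shape: (heads,)

def fold_into_dequant(dequant_scales, temps):
    """Fold the per-head temperatures into the query projection's
    per-head dequantization scales, so inference needs no extra
    multiply. dequant_scales has shape (heads, head_dim)."""
    return dequant_scales * temps[:, None]
```

Because the temperature is absorbed into existing dequantization scales, the operator schedule is unchanged at runtime, which matches the abstract's claim that the architecture is left intact.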

Executive Summary

This article introduces QuantVLA, a training-free post-training quantization (PTQ) framework for Vision-Language-Action (VLA) models and, per the authors, the first PTQ approach for VLA systems and the first to successfully quantize a diffusion transformer (DiT) action head. The framework combines three scale-calibrated components: a selective quantization layout, attention temperature matching, and output head balancing. It requires no additional training, relies only on a small unlabeled calibration buffer, and supports integer kernels for low-bit weights and activations while leaving the architecture unchanged. On representative VLA models evaluated on LIBERO, QuantVLA exceeds full-precision task success rates while achieving about 70% relative memory savings on the quantized components and a 1.22x end-to-end inference speedup, offering a practical pathway toward scalable low-bit embodied intelligence.

Key Points

  • QuantVLA is a training-free post-training quantization framework for VLA models and, per the authors, the first to successfully quantize a diffusion transformer (DiT) action head.
  • It incorporates three scale-calibrated components: a selective quantization layout, attention temperature matching, and output head balancing.
  • The framework requires no additional training, uses only a small unlabeled calibration buffer, and on LIBERO exceeds full-precision success rates with roughly 70% memory savings on the quantized components and a 1.22x inference speedup.
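The training-free, calibration-buffer-only workflow in the points above resembles a standard symmetric PTQ recipe. The sketch below shows one generic form of per-channel weight quantization with a calibration-derived scale; it is a minimal illustration of the PTQ setting, not QuantVLA's actual calibration rules.

```python
import numpy as np

def calibrate_scale(weights, n_bits=8):
    """Symmetric per-output-channel scale from the weight range
    (a generic PTQ heuristic). weights: (out_features, in_features)."""
    qmax = 2 ** (n_bits - 1) - 1
    max_abs = np.abs(weights).max(axis=1, keepdims=True)
    return max_abs / qmax

def quantize(weights, scale):
    """Integerize weights to int8 for use with integer kernels."""
    return np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

def dequantize(q, scale):
    """Recover approximate floating-point weights at inference."""
    return q.astype(np.float32) * scale
```

With symmetric rounding, the per-channel reconstruction error is bounded by half the channel's scale, which is why low-bit PTQ can stay close to full-precision behavior when scales are well calibrated.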

Merits

Practical Pathway to Scalable Low-Bit Embodied Intelligence

QuantVLA provides a practical pathway toward scalable low-bit embodied intelligence under strict compute, memory, and power constraints.

Significant Energy and Memory Savings

QuantVLA reports about 70% relative memory savings on the quantized components and a 1.22x end-to-end inference speedup, making it an attractive solution for deployment under strict compute, memory, and power constraints.

Demerits

Limited Evaluation on Diverse VLA Models

The evaluation covers only a small set of representative VLA models on the LIBERO benchmark, so robustness and generalization to other architectures and tasks remain untested.

Dependence on High-Precision Baselines

QuantVLA's performance is benchmarked only against full-precision baselines, which may not reflect deployment settings where computational resources are already constrained or where other compression methods are viable alternatives.

Expert Commentary

QuantVLA represents a meaningful advance in post-training quantization for VLA models, notably as the first reported approach to quantize a diffusion transformer action head. Its ability to deliver roughly 70% memory savings and a 1.22x latency reduction with no additional training and only a small unlabeled calibration buffer makes it an attractive option for embodied AI deployment. However, the evaluation is limited to a few representative models on LIBERO, and comparisons are made only against full-precision baselines. As the field evolves, it will be essential to evaluate QuantVLA on a broader range of models and real-world platforms.

Recommendations

  • Further evaluation of QuantVLA on a diverse range of VLA models to assess its robustness and generalizability.
  • Investigation into the potential of QuantVLA for real-world applications, such as edge computing and IoT devices, to demonstrate its practical value.
