Don't Act Blindly: Robust GUI Automation via Action-Effect Verification and Self-Correction
arXiv:2604.05477v1 Announce Type: new Abstract: Autonomous GUI agents based on vision-language models (VLMs) often assume deterministic environment responses, generating actions without verifying whether previous operations succeeded. In real-world settings with network latency, rendering delays, and system interruptions, this assumption leads to undetected action failures, repetitive ineffective behaviors, and catastrophic error accumulation. Moreover, learning robust recovery strategies is challenging due to the high cost of online interaction and the lack of real-time feedback in offline datasets. We propose VeriGUI (Verification-driven GUI Agent), which explicitly models action outcomes and recovery under noisy environments. VeriGUI introduces a Thinking-Verification-Action-Expectation (TVAE) framework to detect failures and guide corrective reasoning, and a two-stage training pipeline that combines Robust SFT with synthetic failure trajectories and GRPO with asymmetric verification rewards. We further construct a Robustness Benchmark based on AndroidControl to evaluate failure recognition and correction. Experiments show that VeriGUI significantly reduces failure loops and improves recovery success while maintaining competitive standard task performance.
Executive Summary
This article presents VeriGUI, a GUI automation framework that addresses the limitations of existing vision-language model (VLM) agents in real-world settings. By explicitly modeling action outcomes and recovery under noisy environments, VeriGUI introduces a Thinking-Verification-Action-Expectation (TVAE) framework and a two-stage training pipeline. The framework is evaluated on a Robustness Benchmark based on AndroidControl, demonstrating improved failure recognition and correction while maintaining competitive standard task performance. The proposed methodology has significant implications for building robust and reliable GUI automation systems.
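The core idea of the TVAE cycle, as described in the abstract, is to verify the effect of the previous action before issuing the next one. A minimal sketch of such a loop is below; the `Step` structure, the `model` methods, and the step ordering are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of one Thinking-Verification-Action-Expectation (TVAE)
# cycle; all names here are illustrative, not taken from the paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Step:
    thought: str        # reasoning about the current screen
    verified: bool      # did the previous action achieve its expectation?
    action: str         # the GUI action to issue (e.g. "tap(120, 430)")
    expectation: str    # predicted post-action screen state


def tvae_step(model, screen: str, prev_step: Optional[Step]) -> Step:
    """One TVAE cycle: check the last action's effect before acting again."""
    # 1. Verification: compare the new screen against the stored expectation.
    verified = prev_step is None or model.verify(screen, prev_step.expectation)
    # 2. Thinking: if the last action failed, reason about a corrective move
    #    instead of blindly continuing the original plan.
    if verified:
        thought = model.think(screen)
    else:
        thought = model.think_recovery(screen, prev_step)
    # 3./4. Action + Expectation: emit the action together with a predicted
    #       outcome that the *next* cycle will verify against.
    action, expectation = model.act(screen, thought)
    return Step(thought, verified, action, expectation)
```

The key design point is that each step stores an explicit expectation, so failure detection becomes a local comparison rather than an unchecked assumption that the environment responded deterministically.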
Key Points
- ▸ VeriGUI introduces a TVAE framework to detect failures and guide corrective reasoning.
- ▸ The framework employs a two-stage training pipeline combining Robust SFT and GRPO with asymmetric verification rewards.
- ▸ A Robustness Benchmark is constructed to evaluate failure recognition and correction.
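The asymmetric verification reward noted in the key points could, for example, penalize a missed failure more heavily than a false alarm, since undetected failures are what drive the repetitive loops the paper targets. The sketch below is a plausible shaping under that assumption; the paper's exact reward values and structure are not given here.

```python
# Hypothetical asymmetric verification reward for GRPO-style training.
# The specific values and the asymmetry (missed failure worse than false
# alarm) are illustrative assumptions, not the paper's reward design.
def verification_reward(predicted_ok: bool, actually_ok: bool) -> float:
    if predicted_ok and actually_ok:
        return 1.0    # correctly confirmed a successful action
    if not predicted_ok and not actually_ok:
        return 1.0    # correctly flagged a failed action
    if predicted_ok and not actually_ok:
        return -2.0   # missed failure: penalized hardest (causes loops)
    return -0.5       # false alarm: mildly penalized (costs a retry)
```

Under this shaping, a policy that rubber-stamps every action as successful is strictly worse than one that occasionally double-checks, which is the behavior the verification objective is meant to encourage.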
Merits
Robust Verification Design
The proposed TVAE framework provides a robust and reliable approach to GUI automation, addressing the limitations of existing VLMs in real-world settings.
Improved Failure Recognition
VeriGUI significantly reduces failure loops and improves recovery success, making it a valuable tool for developers and researchers.
Competitive Standard Task Performance
The framework maintains competitive standard task performance, demonstrating its potential for widespread adoption in GUI automation applications.
Demerits
Environment Modeling Assumptions
The proposed methodology assumes that the noisy environments can be adequately modeled and simulated, which may not be the case in all real-world settings.
Training Data Requirements
The two-stage training pipeline requires significant amounts of training data, which can be a limitation for smaller-scale projects or those with limited resources.
Expert Commentary
The article presents a substantive contribution to GUI automation, addressing a critical limitation of existing VLM-based agents: the implicit assumption that every issued action succeeds. The proposed TVAE framework and two-stage training pipeline demonstrate a clear understanding of the challenges posed by real-world settings. While the methodology has its limitations, it carries meaningful implications for building robust and reliable GUI automation systems, with potential impact across software testing, automation, and user-experience enhancement.
Recommendations
- ✓ Future research should focus on further improving the robustness and reliability of GUI automation systems, particularly in the context of noisy and dynamic environments.
- ✓ The proposed methodology should be applied to a wider range of GUI automation applications, including those in critical infrastructure, healthcare, and finance, to demonstrate its potential for real-world impact.
Sources
Original: arXiv - cs.CL