MiniAppBench: Evaluating the Shift from Text to Interactive HTML Responses in LLM-Powered Assistants

arXiv:2603.09652v1 Announce Type: new Abstract: With the rapid advancement of Large Language Models (LLMs) in code generation, human-AI interaction is evolving from static text responses to dynamic, interactive HTML-based applications, which we term MiniApps. These applications require models to not only render visual interfaces but also construct customized interaction logic that adheres to real-world principles. However, existing benchmarks primarily focus on algorithmic correctness or static layout reconstruction, failing to capture the capabilities required for this new paradigm. To address this gap, we introduce MiniAppBench, the first comprehensive benchmark designed to evaluate principle-driven, interactive application generation. Sourced from a real-world application with 10M+ generations, MiniAppBench distills 500 tasks across six domains (e.g., Games, Science, and Tools). Furthermore, to tackle the challenge of evaluating open-ended interactions where no single ground truth exists, we propose MiniAppEval, an agentic evaluation framework. Leveraging browser automation, it performs human-like exploratory testing to systematically assess applications across three dimensions: Intention, Static, and Dynamic. Our experiments reveal that current LLMs still face significant challenges in generating high-quality MiniApps, while MiniAppEval demonstrates high alignment with human judgment, establishing a reliable standard for future research. Our code is available at github.com/MiniAppBench.

Executive Summary

This article introduces MiniAppBench, a comprehensive benchmark for evaluating principle-driven, interactive application generation by Large Language Models (LLMs). The benchmark comprises 500 tasks across six domains, distilled from a real-world application with over 10 million generations, and is paired with MiniAppEval, an agentic evaluation framework that uses browser automation to assess generated applications along three dimensions: Intention, Static, and Dynamic. The authors' experiments show that current LLMs still struggle to generate high-quality MiniApps, while MiniAppEval aligns closely with human judgment, establishing a reliable standard for future research. The code is available on GitHub, making the benchmark accessible for replication and extension.

Key Points

  • Introduction of MiniAppBench, a comprehensive benchmark for interactive application generation
  • Proposal of MiniAppEval, an agentic evaluation framework for assessing applications
  • Demonstration of challenges in current LLMs for generating high-quality MiniApps
  • Evidence that MiniAppEval's judgments align closely with human evaluation

Merits

Comprehensive Benchmark

MiniAppBench provides a thorough evaluation framework for principle-driven, interactive application generation, covering 500 tasks across six domains.

Agentic Evaluation Framework

MiniAppEval offers a systematic approach to evaluating open-ended interactions with no single ground truth. It leverages browser automation to perform human-like exploratory testing, assessing each generated application across three dimensions: Intention, Static, and Dynamic.
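The abstract names the three dimensions but not how they are combined into a final verdict. A minimal sketch of how such per-dimension scores might be aggregated is shown below; the dimension names come from the paper, but the class, weights, and rubric are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class MiniAppScore:
    """Hypothetical per-app score on the three dimensions named in the paper:
    Intention (does the app match the user's request?), Static (rendered
    layout/visual quality), and Dynamic (interaction logic under exploratory
    testing). Each score is assumed to lie in [0, 1]."""
    intention: float
    static: float
    dynamic: float

    def overall(self, weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
        """Weighted average of the three dimensions.
        The weights here are an illustrative assumption, not from the paper."""
        wi, ws, wd = weights
        return wi * self.intention + ws * self.static + wd * self.dynamic

# Example: an app that matches the request and renders well,
# but whose interaction logic fails under exploratory testing.
score = MiniAppScore(intention=0.9, static=0.8, dynamic=0.5)
print(round(score.overall(), 2))  # prints 0.75
```

In practice the per-dimension scores would come from an agent driving a headless browser (clicking, typing, and observing state changes), which is what makes the evaluation "agentic" rather than a static diff against a reference implementation.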

Demerits

Limited Scope

The benchmark and evaluation framework are currently limited to six domains, which may not capture the full complexity of human-AI interaction.

Dependency on LLM Capabilities

MiniAppEval's agentic evaluation itself depends on the capabilities of current LLMs to explore and judge applications, so limitations of the judge model may affect its accuracy and the range of applications it can reliably assess.

Expert Commentary

The article makes a significant contribution to the field of human-AI interaction, providing both a benchmark and an evaluation framework for principle-driven, interactive application generation. However, the limited domain coverage and the dependency of the agentic evaluator on LLM capabilities should be acknowledged and addressed in future work. The implications are far-reaching: the benchmark can guide the development of more capable LLM-powered assistants, and MiniAppEval's alignment with human judgment suggests a path toward scalable evaluation of open-ended, interactive outputs where no single ground truth exists.

Recommendations

  • Future research should focus on expanding the scope of MiniAppBench and MiniAppEval to include more domains and complexity levels.
  • Developers of LLM-powered assistants should prioritize improving the capabilities of their models to generate high-quality interactive applications.

Sources