
PlotChain: Deterministic Checkpointed Evaluation of Multimodal LLMs on Engineering Plot Reading


Mayank Ravishankara

arXiv:2602.13232v1

Abstract: We present PlotChain, a deterministic, generator-based benchmark for evaluating multimodal large language models (MLLMs) on engineering plot reading: recovering quantitative values from classic plots (e.g., Bode/FFT, step response, stress-strain, pump curves) rather than OCR-only extraction or free-form captioning. PlotChain contains 15 plot families with 450 rendered plots (30 per family), where every item is produced from known parameters and paired with exact ground truth computed directly from the generating process. A central contribution is checkpoint-based diagnostic evaluation: in addition to final targets, each item includes intermediate 'cp_' fields that isolate sub-skills (e.g., reading cutoff frequency or peak magnitude) and enable failure localization within a plot family. We evaluate four state-of-the-art MLLMs under a standardized, deterministic protocol (temperature = 0 and a strict JSON-only numeric output schema) and score predictions using per-field tolerances designed to reflect human plot-reading precision. Under the 'plotread' tolerance policy, the top models achieve 80.42% (Gemini 2.5 Pro), 79.84% (GPT-4.1), and 78.21% (Claude Sonnet 4.5) overall field-level pass rates, while GPT-4o trails at 61.59%. Despite strong performance on many families, frequency-domain tasks remain brittle: bandpass response stays low (<= 23%), and FFT spectrum remains challenging. We release the generator, dataset, raw model outputs, scoring code, and manifests with checksums to support fully reproducible runs and retrospective rescoring under alternative tolerance policies.

Executive Summary

The article introduces PlotChain, a novel benchmark for evaluating multimodal large language models (MLLMs) on engineering plot reading tasks. PlotChain focuses on recovering quantitative values from various engineering plots, distinguishing itself from traditional OCR-based or free-form captioning approaches. The benchmark includes 15 plot families with 450 rendered plots, each generated from known parameters and paired with exact ground truth values. A key innovation is the checkpoint-based diagnostic evaluation, which allows for the isolation of sub-skills and failure localization within plot families. The study evaluates four state-of-the-art MLLMs under a standardized, deterministic protocol, revealing that while the top models (Gemini 2.5 Pro, GPT-4.1, and Claude Sonnet 4.5) achieve field-level pass rates near 80%, GPT-4o trails at 61.59%, and frequency-domain tasks such as bandpass response and FFT spectrum remain brittle.

Key Points

  • PlotChain is a deterministic, generator-based benchmark for evaluating MLLMs on engineering plot reading.
  • The benchmark includes 15 plot families with 450 rendered plots, each with known parameters and exact ground truth values.
  • Checkpoint-based diagnostic evaluation allows for the isolation of sub-skills and failure localization.
  • Top MLLMs (Gemini 2.5 Pro, GPT-4.1, Claude Sonnet 4.5) achieve overall field-level pass rates of roughly 78-80%, but frequency-domain tasks such as bandpass response (<= 23%) and FFT spectrum remain challenging.
  • The study provides a fully reproducible dataset and scoring code for retrospective rescoring.
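The release includes manifests with checksums so that runs can be verified before rescoring. As a minimal sketch of how such verification might work, assuming a manifest that maps file paths to SHA-256 hex digests (the actual manifest format is not specified here):

```python
# Sketch of checksum-manifest verification for a reproducible run.
# Assumption: the manifest is a dict mapping file path -> sha256 hex digest;
# the real PlotChain manifest format may differ.
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_manifest(manifest: dict) -> list:
    """Return the paths whose on-disk checksum does not match the manifest."""
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]
```

An empty return value means every listed file matches its recorded checksum and the run can proceed; any mismatched path indicates a corrupted or altered artifact.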

Merits

Innovative Benchmark Design

PlotChain's deterministic and generator-based approach ensures that each plot is produced from known parameters, providing exact ground truth values. This design enhances the reliability and reproducibility of the benchmark.

Comprehensive Evaluation Protocol

The use of checkpoint-based diagnostic evaluation allows for detailed failure localization and sub-skill isolation, providing valuable insights into model performance.
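The scoring idea can be sketched as follows. This is an illustrative reconstruction, not the benchmark's actual code: the field names (`cp_cutoff_hz`, `cp_peak_db`, `gain_db`) and tolerance values are hypothetical, standing in for the paper's per-field tolerances and intermediate 'cp_' checkpoints.

```python
# Illustrative sketch of checkpoint-based scoring with per-field tolerances.
# Field names and tolerance values below are assumptions, not PlotChain's schema.

def field_passes(pred: float, truth: float, rel_tol: float) -> bool:
    """A field passes if the prediction is within a relative tolerance of truth."""
    if truth == 0:
        return abs(pred) <= rel_tol
    return abs(pred - truth) / abs(truth) <= rel_tol


def score_item(pred: dict, truth: dict, tolerances: dict) -> dict:
    """Score every field: final targets and intermediate 'cp_' checkpoints alike."""
    results = {}
    for field, tol in tolerances.items():
        if field in pred:
            results[field] = field_passes(pred[field], truth[field], tol)
        else:
            results[field] = False  # a missing field counts as a failure
    return results


# The checkpoint fields localize *where* within the task a model failed:
truth = {"cp_cutoff_hz": 100.0, "cp_peak_db": 6.0, "gain_db": -3.0}
pred = {"cp_cutoff_hz": 98.0, "cp_peak_db": 9.5, "gain_db": -3.1}
tols = {"cp_cutoff_hz": 0.05, "cp_peak_db": 0.10, "gain_db": 0.05}

print(score_item(pred, truth, tols))
# → {'cp_cutoff_hz': True, 'cp_peak_db': False, 'gain_db': True}
```

Here the model read the cutoff frequency and final gain correctly but misread the peak magnitude, so the failure is localized to that sub-skill rather than attributed to the plot family as a whole.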

Standardized Evaluation

The standardized protocol, with temperature fixed at 0 and a strict JSON-only numeric output schema, ensures consistent and comparable evaluations across different MLLMs.
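The "strict JSON-only numeric output schema" side of the protocol can be sketched as a validator that rejects anything other than a JSON object with numeric values for the expected fields. The field names and the exact rejection policy are assumptions for illustration; the paper's validator may differ:

```python
# Hedged sketch of strict JSON-only numeric output validation.
# Assumption: a response must be a single JSON object whose expected fields
# are all plain numbers; anything else is a protocol violation.
import json


def parse_strict_numeric_json(raw: str, expected_fields: list):
    """Return {field: float} if the response satisfies the schema, else None."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(obj, dict):
        return None
    out = {}
    for field in expected_fields:
        value = obj.get(field)
        # Exclude bools explicitly: in Python, bool is a subclass of int.
        if not isinstance(value, (int, float)) or isinstance(value, bool):
            return None  # missing or non-numeric field fails the schema
        out[field] = float(value)
    return out


print(parse_strict_numeric_json('{"cp_cutoff_hz": 98.0}', ["cp_cutoff_hz"]))
# → {'cp_cutoff_hz': 98.0}
print(parse_strict_numeric_json('about 98 Hz', ["cp_cutoff_hz"]))
# → None
```

Combined with temperature = 0, a validator like this removes free-form prose from the scoring pipeline, so every comparison is between numbers under explicit tolerances.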

Demerits

Limited Scope of Plot Families

While the benchmark includes 15 plot families, this set may not capture the full diversity of engineering plots encountered in real-world applications.

Challenges in Frequency-Domain Tasks

The study highlights that frequency-domain tasks, such as bandpass response and FFT spectrum, remain challenging for MLLMs, indicating areas for further improvement.

Potential Bias in Model Selection

The evaluation focuses on four state-of-the-art MLLMs, which may not represent the full spectrum of models available. A more diverse selection could provide a broader perspective on model performance.

Expert Commentary

PlotChain represents a significant advancement in the evaluation of multimodal large language models for engineering plot reading. The deterministic and generator-based approach ensures high reliability and reproducibility, addressing a critical need in the field. The checkpoint-based diagnostic evaluation is particularly noteworthy, as it provides detailed insights into model performance and failure modes. However, the study also highlights the challenges in frequency-domain tasks, indicating areas for further research and development. The standardized evaluation protocol and the release of the dataset and scoring code are commendable, as they support fully reproducible runs and retrospective rescoring. This study not only contributes to the technical literature but also has practical implications for engineering applications and policy decisions regarding AI technologies.

Recommendations

  • Expand the scope of plot families to include a more diverse range of engineering plots encountered in real-world applications.
  • Investigate the challenges in frequency-domain tasks and develop targeted improvements for MLLMs to enhance performance in these areas.
