LLM-Grounded Explainability for Port Congestion Prediction via Temporal Graph Attention Networks
arXiv:2603.04818v1 Announce Type: new Abstract: Port congestion at major maritime hubs disrupts global supply chains, yet existing prediction systems typically prioritize forecasting accuracy without providing operationally interpretable explanations. This paper proposes AIS-TGNN, an evidence-grounded framework that jointly performs congestion-escalation prediction and faithful natural-language explanation by coupling a Temporal Graph Attention Network (TGAT) with a structured large language model (LLM) reasoning module. Daily spatial graphs are constructed from Automatic Identification System (AIS) broadcasts, where each grid cell represents localized vessel activity and inter-cell interactions are modeled through attention-based message passing. The TGAT predictor captures spatiotemporal congestion dynamics, while model-internal evidence, including feature z-scores and attention-derived neighbor influence, is transformed into structured prompts that constrain LLM reasoning to verifiable model outputs. To evaluate explanatory reliability, we introduce a directional-consistency validation protocol that quantitatively measures agreement between generated narratives and underlying statistical evidence. Experiments on six months of AIS data from the Port of Los Angeles and Long Beach demonstrate that the proposed framework outperforms both LR and GCN baselines, achieving a test AUC of 0.761, AP of 0.344, and recall of 0.504 under a strict chronological split while producing explanations with 99.6% directional consistency. Results show that grounding LLM generation in graph-model evidence enables interpretable and auditable risk reporting without sacrificing predictive performance. The framework provides a practical pathway toward operationally deployable explainable AI for maritime congestion monitoring and supply-chain risk management.
Executive Summary
This article proposes AIS-TGNN, an evidence-grounded framework that predicts port-congestion escalation and produces faithful natural-language explanations. By coupling a Temporal Graph Attention Network (TGAT) with a structured large language model (LLM) reasoning module, AIS-TGNN captures spatiotemporal congestion dynamics while constraining LLM reasoning to verifiable model outputs such as feature z-scores and attention-derived neighbor influence. On six months of AIS data from the Port of Los Angeles and Long Beach, the framework outperforms LR and GCN baselines (test AUC 0.761, AP 0.344, recall 0.504 under a strict chronological split) and produces explanations with 99.6% directional consistency, as measured by a newly introduced directional-consistency validation protocol. The framework thus offers a practical pathway toward operationally deployable explainable AI, enabling interpretable and auditable risk reporting for maritime congestion monitoring and supply-chain risk management.
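The abstract describes constructing daily spatial graphs from AIS broadcasts, with each grid cell representing localized vessel activity and edges enabling attention-based message passing between cells. A minimal sketch of that graph-construction step is below; the grid resolution (`cell_deg`), the use of 8-neighbor adjacency, and the function name are illustrative assumptions, not details from the paper.

```python
from collections import defaultdict

def build_daily_grid_graph(ais_records, cell_deg=0.05):
    """Bin one day of AIS position reports into grid cells and link
    adjacent occupied cells.

    ais_records: iterable of (lat, lon) tuples.
    cell_deg: hypothetical grid resolution in degrees (an assumption;
    the paper's actual cell size is not specified here).
    """
    counts = defaultdict(int)
    for lat, lon in ais_records:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        counts[cell] += 1  # localized vessel activity per cell
    # Connect each occupied cell to its occupied 8-neighbors so that
    # attention-based message passing can propagate congestion signals.
    edges = []
    for (r, c) in counts:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in counts:
                    edges.append(((r, c), (r + dr, c + dc)))
    return dict(counts), edges
```

Per-cell counts would feed the TGAT node features, and the edge list defines where attention weights (the "neighbor influence" evidence) are computed.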
Key Points
- ▸ AIS-TGNN is an evidence-grounded framework for predicting port congestion and providing natural-language explanations.
- ▸ The framework uses a Temporal Graph Attention Network (TGAT) to capture spatiotemporal congestion dynamics and a structured LLM to generate explanations.
- ▸ AIS-TGNN outperforms LR and GCN baselines, achieving a test AUC of 0.761, AP of 0.344, and recall of 0.504, while its explanations reach 99.6% directional consistency.
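The second key point describes transforming model-internal evidence (feature z-scores, attention-derived neighbor influence) into structured prompts that constrain the LLM to verifiable outputs. A sketch of such an evidence-to-prompt step follows; the feature names, prompt wording, and function signature are all illustrative assumptions, since the paper's exact prompt schema is not reproduced here.

```python
def evidence_prompt(cell_id, features, means, stds, neighbor_influence):
    """Format model-internal evidence as a structured prompt fragment.

    features/means/stds: per-feature current value and historical
    statistics for computing z-scores (names here are hypothetical).
    neighbor_influence: attention weight per neighboring cell.
    """
    lines = [f"Cell {cell_id} evidence:"]
    for name, value in features.items():
        z = (value - means[name]) / stds[name]
        direction = "above" if z > 0 else "below"
        lines.append(f"- {name}: z={z:+.2f} ({direction} its historical mean)")
    # Surface the attention-derived neighbor influence as evidence.
    top = max(neighbor_influence, key=neighbor_influence.get)
    lines.append(f"- strongest attention neighbor: cell {top} "
                 f"(weight {neighbor_influence[top]:.2f})")
    lines.append("Explain the congestion risk using ONLY the evidence above.")
    return "\n".join(lines)
```

Constraining the LLM to this enumerated evidence is what makes the resulting narrative auditable: every directional claim it makes can be checked against a listed z-score.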
Merits
Strength in Explainability
AIS-TGNN grounds its natural-language explanations in model-internal evidence, reaching 99.6% directional consistency with the underlying statistics and enabling interpretable, auditable risk reporting.
Improved Predictive Performance
The framework outperforms LR and GCN baselines under a strict chronological split, achieving a test AUC of 0.761, AP of 0.344, and recall of 0.504.
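The reported metrics are computed under a strict chronological split, meaning the test period strictly follows the training period so no future information leaks into training. A minimal sketch of that evaluation setup is below; the 70/30 split fraction, the 0.5 decision threshold, and the helper names are assumptions for illustration.

```python
def chronological_split(samples, train_frac=0.7):
    """Split time-ordered samples without shuffling, so every test
    sample is strictly later than every training sample (no temporal
    leakage). train_frac is an assumed value, not from the paper."""
    samples = sorted(samples, key=lambda s: s["date"])
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]

def recall_at(y_true, y_score, threshold=0.5):
    """Recall at a fixed probability threshold: the fraction of true
    congestion-escalation events the model flags."""
    tp = sum(1 for t, s in zip(y_true, y_score) if t == 1 and s >= threshold)
    pos = sum(y_true)
    return tp / pos if pos else 0.0
```

AUC and AP would typically come from a library such as scikit-learn's `roc_auc_score` and `average_precision_score`; the recall of 0.504 suggests roughly half of escalation events are caught at the operating threshold the authors chose.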
Demerits
Data Requirements
The framework requires large amounts of data from Automatic Identification System (AIS) broadcasts to construct daily spatial graphs and capture spatiotemporal congestion dynamics.
Expert Commentary
AIS-TGNN represents a significant advance in explainable AI for complex systems. By coupling a TGAT with a structured LLM, the framework captures the nuances of spatiotemporal congestion dynamics while keeping its natural-language explanations faithful to model-internal evidence. The directional-consistency validation protocol is a key innovation: it makes explanatory reliability quantitatively measurable, a prerequisite for more robust explainable-AI systems. However, the framework's data requirements and potential scalability limitations should be carefully considered. The implications of AIS-TGNN are far-reaching, with significant potential for application in domains well beyond maritime operations.
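The directional-consistency protocol quantifies agreement between generated narratives and the underlying statistical evidence. One plausible reading is that each directional claim extracted from a narrative (e.g. "vessel count increased") is checked against the sign of the corresponding z-score. A sketch under that assumption is below; the claim-extraction step itself, the feature names, and the scoring convention are all assumptions, not the paper's published protocol.

```python
def directional_consistency(claims, z_scores):
    """Fraction of narrative claims whose stated direction matches the
    sign of the underlying evidence.

    claims: feature -> "increase" or "decrease", as extracted from the
    generated explanation (extraction is outside this sketch).
    z_scores: feature -> z-score from the model evidence.
    """
    if not claims:
        return 1.0  # vacuously consistent when nothing is claimed
    agree = sum(
        1 for feat, direction in claims.items()
        if (direction == "increase") == (z_scores.get(feat, 0.0) > 0)
    )
    return agree / len(claims)
```

Aggregated over all generated explanations, a score like the paper's 99.6% would indicate that almost every directional statement the LLM makes is backed by evidence pointing the same way.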
Recommendations
- ✓ Further research is needed to explore the scalability and generalizability of AIS-TGNN to other complex systems and domains.
- ✓ The directional-consistency validation protocol should be refined and extended to enable more robust evaluation of explanatory reliability.