Scalable Multilingual Multimodal Machine Translation with Speech-Text Fusion
arXiv:2602.21646v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) have achieved notable success in enhancing translation performance by integrating multimodal information. However, existing research primarily focuses on image-guided methods, whose applicability is constrained by the scarcity of multilingual image-text pairs. The speech modality overcomes this limitation due to its natural alignment with text and the abundance of existing speech datasets, which enable scalable language coverage. In this paper, we propose a Speech-guided Machine Translation (SMT) framework that integrates speech and text as fused inputs into an MLLM to improve translation quality. To mitigate reliance on low-resource data, we introduce a Self-Evolution Mechanism. The core components of this framework include a text-to-speech model, responsible for generating synthetic speech, and an MLLM capable of classifying synthetic speech samples and iteratively optimizing itself using positive samples. Experimental results demonstrate that our framework surpasses all existing methods on the Multi30K multimodal machine translation benchmark, achieving new state-of-the-art results. Furthermore, on general machine translation datasets, particularly FLORES-200, it achieves average state-of-the-art performance in 108 translation directions. Ablation studies on CoVoST-2 confirm that differences between synthetic and authentic speech have negligible impact on translation quality. The code and models are released at https://github.com/yxduir/LLM-SRT.
Executive Summary
This article proposes a Speech-guided Machine Translation (SMT) framework that integrates speech and text as fused inputs into a Multimodal Large Language Model (MLLM) to improve translation quality. The framework pairs a text-to-speech model, which generates synthetic speech, with an MLLM that classifies the synthetic samples and iteratively optimizes itself on the positive ones. Experimental results show that the framework achieves state-of-the-art results on the Multi30K multimodal machine translation benchmark and average state-of-the-art performance across 108 translation directions on the FLORES-200 dataset. The code and models are released, enabling further research and development.
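To make the Self-Evolution Mechanism concrete, the loop described above can be sketched as follows. This is a minimal illustrative sketch, not the released implementation: the function names (`synthesize_speech`, `mllm_accepts`, `self_evolve`) and the acceptance criterion are placeholders standing in for the paper's actual text-to-speech model and MLLM classifier.

```python
def synthesize_speech(text):
    """Placeholder TTS step: the real framework uses a text-to-speech model
    to produce synthetic speech for a source sentence."""
    return f"<synthetic audio for: {text}>"

def mllm_accepts(audio, text):
    """Placeholder MLLM classifier: the real framework has the MLLM judge
    whether a synthetic speech-text pair is a positive sample. Here we
    simply accept any non-empty pair."""
    return bool(audio) and bool(text)

def self_evolve(corpus, rounds=1):
    """Iteratively build a pool of positive speech-text samples.

    Each round: synthesize speech for every source sentence, keep only the
    pairs the MLLM accepts, and (in the real pipeline) fine-tune the MLLM
    on that positive pool before the next round.
    """
    positives = []
    for _ in range(rounds):
        for text in corpus:
            audio = synthesize_speech(text)
            if mllm_accepts(audio, text):
                positives.append((audio, text))
        # In the real framework, a fine-tuning step on `positives`
        # would update the MLLM here before the next iteration.
    return positives

pool = self_evolve(["a cat sits on a mat", "two dogs play in the snow"])
```

The key design point this sketch captures is that the positive-sample filter lets the framework grow its own training data from text-only corpora, sidestepping the scarcity of multilingual paired data that limits image-guided methods.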
Key Points
- ▸ The SMT framework integrates speech and text as fused inputs into an MLLM
- ▸ The framework includes a text-to-speech model and an MLLM with a Self-Evolution Mechanism
- ▸ The framework achieves state-of-the-art results on the Multi30K and FLORES-200 datasets
Merits
Scalability
The framework enables scalable language coverage by exploiting the natural alignment of speech with text and the abundance of existing speech datasets
Improved Translation Quality
The framework achieves state-of-the-art results on multimodal machine translation benchmarks
Demerits
Dependence on Synthetic Speech
The framework relies on synthetic speech generated by a text-to-speech model, which may not perfectly replicate authentic speech; the paper's CoVoST-2 ablations, however, report that this gap has negligible impact on translation quality
Expert Commentary
The proposed SMT framework is a significant advancement in the field of multimodal machine translation. The integration of speech and text as fused inputs into an MLLM has the potential to improve translation quality and enable more accurate communication across languages. The use of a Self-Evolution Mechanism to mitigate reliance on low-resource data is also a notable contribution. However, further research is needed to fully explore the limitations and potential applications of the framework.
Recommendations
- ✓ Further research should be conducted to explore the potential applications of the SMT framework in real-world machine translation tasks
- ✓ The framework should be evaluated on a wider range of languages and datasets to assess its scalability and effectiveness