Divide and Conquer: Accelerating Diffusion-Based Large Language Models via Adaptive Parallel Decoding
arXiv:2602.23792v1 Announce Type: new Abstract: Diffusion-based large language models (dLLMs) have shown promising performance across various reasoning tasks, establishing themselves as an alternative to autoregressive large language models (LLMs). Unlike autoregressive LLMs that generate one token per step based on all previous tokens, dLLMs theoretically enable parallel generation of multiple tokens at each decoding step. However, recent dLLMs still favor one-token-per-step generation in practice, as directly decoding multiple masked tokens often leads to degraded generation quality and stability. This reveals a substantial gap between the theoretical parallelism and practical performance of dLLMs. To bridge this gap, we introduce an adaptive parallel decoding approach, namely DiCo, which features a three-phase divide-and-conquer paradigm to unleash the inherent parallelism of dLLMs. During the Divide phase, DiCo first explores the input masked sequence and identifies masked tokens as seed tokens, which are then expanded to construct a set of local clusters. During the Conquer phase, DiCo performs parallel decoding across different local clusters constructed in the Divide phase. The divide-and-conquer process repeatedly alternates between the Divide and Conquer phases until convergence. During the Finalize phase, DiCo decodes the remaining few masked tokens using an effective fine-grained compound decoding scheme to finalize the generation. Extensive experiments demonstrate that DiCo can achieve significant inference speedups while maintaining competitive generation quality.
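The abstract's three-phase loop can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: the `predict` function stands in for a dLLM forward pass, and the seed threshold `tau` and cluster `radius` are hypothetical stand-ins for whatever scoring and expansion rules DiCo actually uses.

```python
# Toy sketch of a DiCo-style Divide / Conquer / Finalize loop.
# All selection criteria here are illustrative assumptions.

MASK = None

def predict(seq):
    """Stand-in for a dLLM forward pass: for each masked position,
    return a (token, confidence) pair. Confidence here is a toy
    heuristic that decays with distance from decoded context."""
    out = {}
    for i, tok in enumerate(seq):
        if tok is MASK:
            dist = min((abs(i - j) for j, t in enumerate(seq) if t is not MASK),
                       default=1)
            out[i] = (f"tok{i}", 1.0 / dist)
    return out

def divide(seq, preds, tau=0.9, radius=1):
    """Divide phase: high-confidence masked positions become seeds;
    each seed absorbs nearby masked positions into a local cluster."""
    seeds = [i for i, (_, conf) in preds.items() if conf >= tau]
    clusters = []
    for s in seeds:
        clusters.append({j for j in range(s - radius, s + radius + 1)
                         if 0 <= j < len(seq) and seq[j] is MASK})
    return clusters

def conquer(seq, preds, clusters):
    """Conquer phase: commit every position in every cluster at once
    (the parallel decoding step)."""
    for cluster in clusters:
        for i in cluster:
            seq[i] = preds[i][0]

def finalize(seq):
    """Finalize phase: resolve leftover masks one at a time by
    confidence (a stand-in for the fine-grained compound scheme)."""
    while MASK in seq:
        preds = predict(seq)
        best = max(preds, key=lambda i: preds[i][1])
        seq[best] = preds[best][0]

def dico_decode(seq):
    """Alternate Divide and Conquer until no confident seeds remain,
    then Finalize the residual masked tokens."""
    while True:
        preds = predict(seq)
        if not preds:
            return seq  # nothing left to decode
        clusters = divide(seq, preds)
        if not clusters:
            break  # no confident seeds; fall through to Finalize
        conquer(seq, preds, clusters)
    finalize(seq)
    return seq
```

The key structural point this sketch captures is that each Conquer step commits a whole cluster of tokens per forward pass, rather than one token per step.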
Executive Summary
This paper presents DiCo, an adaptive parallel decoding method that bridges the gap between the theoretical parallelism and the practical performance of diffusion-based large language models (dLLMs). DiCo follows a three-phase divide-and-conquer paradigm: it identifies masked tokens as seeds, expands them into local clusters, decodes those clusters in parallel, and finalizes the few remaining masked tokens with a fine-grained compound decoding scheme. Experimental results demonstrate significant inference speedups while maintaining competitive generation quality. By accelerating inference, DiCo makes dLLMs a more viable alternative to traditional autoregressive LLMs, though further research is needed to establish its scalability and generalizability across applications and settings.
Key Points
- ▸ DiCo employs a three-phase divide-and-conquer paradigm to accelerate parallel decoding in dLLMs.
- ▸ The approach identifies masked tokens, constructs local clusters, and performs parallel decoding to achieve significant inference speedups.
- ▸ A Finalize phase resolves the few remaining masked tokens via a fine-grained compound decoding scheme, and experiments confirm generation quality stays competitive at the higher decoding speed.
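As a rough back-of-the-envelope illustration of why cluster-parallel decoding yields speedups, the toy calculation below compares one-token-per-step decoding against committing fixed-size clusters per step. The numbers are illustrative only and do not come from the paper.

```python
import math

def steps_sequential(num_masked: int) -> int:
    # One-token-per-step decoding: each masked token costs one forward pass.
    return num_masked

def steps_cluster_parallel(num_masked: int, cluster_size: int) -> int:
    # Idealized parallel decoding: one cluster committed per forward pass.
    return math.ceil(num_masked / cluster_size)

# 256 masked tokens, clusters of 4 → 64 steps instead of 256.
print(steps_sequential(256), steps_cluster_parallel(256, 4))
```

In practice the achievable speedup depends on how many tokens can be safely decoded together without degrading quality, which is exactly what DiCo's adaptive seed-and-cluster selection is meant to control.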
Merits
Strength in parallelism
The DiCo approach effectively leverages the inherent parallelism of dLLMs, enabling significant inference speedups without compromising generation quality.
Flexibility and adaptability
DiCo's adaptive nature allows it to adjust to various input sequences and decoding settings, making it a versatile and practical solution.
Demerits
Scalability limitations
Further research is needed to explore the scalability of DiCo in large-scale deployments and complex applications.
Dependence on specific architecture
The DiCo approach may be architecture-specific, limiting its generalizability and applicability to other types of language models.
Expert Commentary
DiCo offers a promising answer to the parallelism-performance gap in dLLMs: by exploiting their inherent parallelism, it delivers substantial inference speedups without sacrificing generation quality, and its adaptive design makes it practical across a range of inputs and decoding settings. Open questions remain about how well the approach scales to large deployments and whether it transfers to other model architectures. More broadly, efficient decoding methods like DiCo could have significant implications for the future of natural language processing and artificial intelligence.
Recommendations
- ✓ Future research should focus on exploring the scalability and generalizability of DiCo in various applications and settings.
- ✓ The development of computationally efficient decoding techniques like DiCo may carry significant policy implications and should be weighed in future research and development in natural language processing and artificial intelligence.