
Rank-Factorized Implicit Neural Bias: Scaling Super-Resolution Transformer with FlashAttention

Dongheon Lee, Seokju Yun, Jaegyun Im, Youngmin Ro

arXiv:2603.06738v1 Announce Type: new

Abstract: Recent Super-Resolution (SR) methods mainly adopt Transformers for their strong long-range modeling capability and exceptional representational capacity. However, most SR Transformers rely heavily on relative positional bias (RPB), which prevents them from leveraging hardware-efficient attention kernels such as FlashAttention. This limitation imposes a prohibitive computational burden during both training and inference, severely restricting attempts to scale SR Transformers by enlarging the training patch size or the self-attention window. Consequently, unlike other domains that actively exploit the inherent scalability of Transformers, SR Transformers remain heavily focused on effectively utilizing limited receptive fields. In this paper, we propose Rank-factorized Implicit Neural Bias (RIB), an alternative to RPB that enables FlashAttention in SR Transformers. Specifically, RIB approximates positional bias using low-rank implicit neural representations and concatenates them with pixel content tokens in a channel-wise manner, turning the element-wise bias addition in attention score computation into a dot-product operation. Further, we introduce a convolutional local attention and a cyclic window strategy to fully leverage the advantages of long-range interactions enabled by RIB and FlashAttention. We enlarge the window size up to 96×96 while jointly scaling the training patch size and the dataset size, maximizing the benefits of Transformers in the SR task. As a result, our network achieves 35.63 dB PSNR on Urban100×2, while reducing training and inference time by 2.1× and 2.9×, respectively, compared to the RPB-based SR Transformer (PFT).

Executive Summary

This article proposes Rank-factorized Implicit Neural Bias (RIB), an alternative to relative positional bias that enables FlashAttention in Super-Resolution Transformers. RIB approximates positional bias with low-rank implicit neural representations and concatenates them channel-wise with pixel content tokens, turning the element-wise bias addition into a dot product; this improves computational efficiency and permits much larger attention windows. The approach raises PSNR while reducing both training and inference time compared to RPB-based methods, and a cyclic window strategy combined with convolutional local attention further strengthens performance. By unlocking the scalability of Transformers for this task, the work could significantly influence Super-Resolution and image processing more broadly.

Key Points

  • Proposes Rank-factorized Implicit Neural Bias (RIB) as an alternative to relative positional bias
  • Enables FlashAttention in Super-Resolution Transformers
  • Achieves 35.63 dB PSNR on Urban100×2 while cutting training and inference time by 2.1× and 2.9× versus the RPB-based PFT

Merits

Strength in Computational Efficiency

RIB enables more efficient computation by approximating positional bias with low-rank implicit neural representations and folding the bias into the attention dot product, making the scores compatible with fused kernels such as FlashAttention and yielding improved scalability along with reduced training and inference time.
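The channel-wise concatenation trick can be illustrated with a minimal NumPy sketch (all shapes, names, and the random low-rank factors below are illustrative, not taken from the paper): if the positional bias matrix is factorized as B ≈ U·Vᵀ, appending U to the queries and V to the keys makes the standard biased score Q·Kᵀ + B fall out of a single dot product, which is exactly the form fused attention kernels expect.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 16, 32, 4  # tokens, head dim, bias rank (all illustrative)

Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
# Hypothetical low-rank factorization of the positional bias: B ≈ U @ V.T
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))

# Conventional biased attention scores: element-wise bias addition,
# which hardware-efficient fused kernels cannot absorb directly.
scores_biased = Q @ K.T + U @ V.T

# RIB-style reformulation: concatenate the bias factors onto the
# content tokens channel-wise, so the score is one plain dot product.
Q_aug = np.concatenate([Q, U], axis=-1)  # shape (n, d + r)
K_aug = np.concatenate([K, V], axis=-1)
scores_concat = Q_aug @ K_aug.T

assert np.allclose(scores_biased, scores_concat)
```

Because the augmented dimension is only d + r with r small, the extra cost is negligible, while the score computation now has the bias-free form that FlashAttention-style kernels accelerate.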

Enhanced Model Performance

The cyclic window strategy and convolutional local attention further enhance the model's performance, supporting window sizes up to 96×96 and more effective processing of large images.

Demerits

Limited Exploration of RIB's Generalizability

The article focuses on applying RIB to Super-Resolution Transformers; its generalizability to other domains or tasks is not thoroughly explored.

Potential Overreliance on Hardware-Efficient Attention Kernels

The proposed approach relies heavily on FlashAttention, which may limit its applicability on hardware architectures or in scenarios where such fused attention kernels are unavailable.

Expert Commentary

This article represents a significant advance in Super-Resolution, leveraging the scalability of Transformers for more efficient and effective processing of large images. The approach brings clear gains in computational efficiency and model performance, though RIB's generalizability remains underexplored and the reliance on hardware-efficient attention kernels may narrow its applicability. Nevertheless, the findings have important practical implications and are likely to shape the development of image processing systems that must scale attention over large inputs.

Recommendations

  • Explore RIB's generalizability to other domains or tasks to fully understand its potential applications
  • Evaluate RIB with FlashAttention in other computationally intensive tasks to assess their broader applicability
