Fast Online Learning with Gaussian Prior-Driven Hierarchical Unimodal Thompson Sampling

Tianchi Zhao, He Liu, Hongyin Shi, Jinliang Li

arXiv:2602.15972v1 Abstract: We study a class of Multi-Armed Bandit (MAB) problems in which arms with Gaussian reward feedback are clustered. Such an arm setting arises in many real-world problems, for example, mmWave communications and portfolio management with risky assets, owing to the universality of the Gaussian distribution. Building on the Thompson Sampling with Gaussian prior (TSG) algorithm for selecting the optimal arm, we propose Thompson Sampling with Clustered arms under Gaussian prior (TSCG), specific to a 2-level hierarchical structure. We prove that by exploiting the 2-level structure, we achieve a lower regret bound than ordinary TSG. In addition, when the reward is unimodal, we reach an even lower regret bound with our Unimodal Thompson Sampling with Clustered Arms under Gaussian prior (UTSCG) algorithm. Each of our proposed algorithms is accompanied by a theoretical evaluation of its upper regret bound, and our numerical experiments confirm the advantage of the proposed algorithms.

Executive Summary

This study proposes two novel algorithms, Thompson Sampling with Clustered arms under Gaussian prior (TSCG) and Unimodal Thompson Sampling algorithm with Clustered Arms under Gaussian prior (UTSCG), for Multi-Armed Bandit (MAB) problems with Gaussian reward feedback. The authors demonstrate that these algorithms outperform the standard Thompson Sampling algorithm with Gaussian prior (TSG) by exploiting the 2-level hierarchical structure of the problem. Theoretical evaluation of the upper regret bound and numerical experiments confirm the advantage of the proposed algorithms. The study has significant implications for real-world applications such as mmWave communications and portfolio management with risky assets.
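As context for the comparison above, here is a minimal sketch of the baseline TSG loop: maintain a Gaussian posterior over each arm's mean, sample one value per arm, and pull the argmax. The conjugate posterior update with known noise variance and an effectively flat prior is a standard textbook choice, not necessarily the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def tsg(true_means, horizon=2000, sigma=1.0):
    """Thompson Sampling with a Gaussian prior on each arm's mean.

    Posterior after n pulls with empirical mean m is N(m, sigma^2 / n)
    (conjugate update, flat prior). Illustrative sketch only.
    """
    k = len(true_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    best = max(true_means)
    regret = 0.0
    for _ in range(horizon):
        # Sample a mean estimate from each arm's Gaussian posterior;
        # unpulled arms get a very wide posterior, forcing exploration.
        post_mean = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        post_std = sigma / np.sqrt(np.maximum(counts, 1e-9))
        theta = rng.normal(post_mean, post_std)
        a = int(np.argmax(theta))
        reward = rng.normal(true_means[a], sigma)
        counts[a] += 1
        sums[a] += reward
        regret += best - true_means[a]
    return regret
```

The returned cumulative pseudo-regret is what the paper's upper bounds control; TSCG and UTSCG aim to shrink it further by exploiting structure.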

Key Points

  • Proposes two novel algorithms, TSCG and UTSCG, for MAB problems with Gaussian reward feedback
  • Exploits the 2-level hierarchical structure to achieve lower regret bounds
  • Theoretical evaluation and numerical experiments confirm the advantage of the proposed algorithms
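The 2-level idea in the points above can be sketched as follows: keep per-arm Gaussian posteriors, first choose a cluster by an aggregate of its arms' posterior samples, then choose an arm within that cluster. The cluster-level aggregation rule used here (max of the samples) is an illustrative assumption; the paper's exact TSCG update may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

def tscg(clusters, horizon=2000, sigma=1.0):
    """Two-level Thompson Sampling over clustered Gaussian arms.

    `clusters` is a list of lists of true arm means. Level 1 scores each
    cluster by the max of its arms' posterior samples; level 2 runs
    ordinary Gaussian TS inside the chosen cluster. Sketch only.
    """
    means = [np.asarray(c, float) for c in clusters]
    counts = [np.zeros(len(c)) for c in clusters]
    sums = [np.zeros(len(c)) for c in clusters]
    best = max(m.max() for m in means)
    regret = 0.0
    for _ in range(horizon):
        samples = []
        for c in range(len(means)):
            mu = np.where(counts[c] > 0,
                          sums[c] / np.maximum(counts[c], 1), 0.0)
            sd = sigma / np.sqrt(np.maximum(counts[c], 1e-9))
            samples.append(rng.normal(mu, sd))
        # Level 1: pick the cluster whose best posterior sample is highest.
        c = int(np.argmax([s.max() for s in samples]))
        # Level 2: pick the arm within that cluster by its posterior sample.
        a = int(np.argmax(samples[c]))
        reward = rng.normal(means[c][a], sigma)
        counts[c][a] += 1
        sums[c][a] += reward
        regret += best - means[c][a]
    return regret
```

Intuitively, poor clusters are eliminated at level 1 before every arm inside them is explored, which is the mechanism behind the tighter regret bound.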

Merits

Novel Approach

The study provides a novel approach to solving MAB problems with Gaussian reward feedback by exploiting the 2-level hierarchical structure, which is not addressed in existing literature.

Improved Performance

The proposed algorithms, TSCG and UTSCG, achieve lower regret bounds compared to the standard Thompson Sampling algorithm with Gaussian prior, demonstrating improved performance.
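For the unimodal case, a common device in unimodal bandit algorithms (the abstract does not state whether UTSCG does exactly this) is to restrict each round's candidates to the empirical leader and its immediate neighbours on the arm graph. A hedged sketch on a line graph of arms:

```python
import numpy as np

rng = np.random.default_rng(2)

def uts_line(true_means, horizon=2000, sigma=1.0):
    """Unimodal Thompson Sampling on a line graph of arms.

    Assumes rewards are unimodal in the arm index: each round samples
    only the empirical leader and its immediate neighbours. Generic
    unimodal-bandit sketch, not the paper's exact UTSCG procedure.
    """
    k = len(true_means)
    counts = np.ones(k)  # one initialising pull per arm
    sums = np.array([rng.normal(m, sigma) for m in true_means])
    best = max(true_means)
    regret = 0.0
    for _ in range(horizon):
        leader = int(np.argmax(sums / counts))
        cand = [i for i in (leader - 1, leader, leader + 1) if 0 <= i < k]
        mu = sums[cand] / counts[cand]
        sd = sigma / np.sqrt(counts[cand])
        theta = rng.normal(mu, sd)
        a = cand[int(np.argmax(theta))]
        reward = rng.normal(true_means[a], sigma)
        counts[a] += 1
        sums[a] += reward
        regret += best - true_means[a]
    return regret
```

Because unimodality guarantees a neighbour of any suboptimal leader has a higher mean, exploration can stay local, which is why a lower regret bound is attainable in this setting.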

Demerits

Assumptions

The study assumes a 2-level hierarchical structure, which may not be applicable to all real-world MAB problems. Relaxing these assumptions would be a natural extension of the work.

Limited Generalizability

The proposed algorithms are specifically designed for MAB problems with Gaussian reward feedback. Extending the study to other types of reward feedback or problem structures would be necessary for broader applicability.

Expert Commentary

The study makes a meaningful contribution to the MAB literature by proposing algorithms that exploit a 2-level hierarchical arm structure. The assumptions are reasonable for the targeted applications, though, as noted above, relaxing them would be a natural extension, and the focus on Gaussian reward feedback limits generalizability. Nevertheless, the proposed algorithms demonstrate improved regret bounds over existing methods, making them a valuable addition to the literature.

Recommendations

  • Future studies should investigate the applicability of the proposed algorithms to other types of reward feedback or problem structures.
  • The authors should explore relaxation of the assumptions made in the study to broaden its applicability.
