The $qs$ Inequality: Quantifying the Double Penalty of Mixture-of-Experts at Inference
arXiv:2603.08960v1

Abstract: Mixture-of-Experts (MoE) models deliver high quality at low training FLOPs, but this efficiency often vanishes at inference. We identify a …
Vignesh Adhinarayanan, Nuwan Jayasena