Weak-SIGReg: Covariance Regularization for Stable Deep Learning
arXiv:2603.05924v1 Announce Type: new Abstract: Modern neural network optimization relies heavily on architectural priors, such as Batch Normalization and Residual connections, to stabilize training dynamics. Without these, or in low-data regimes with aggressive augmentation, low-bias architectures like Vision Transformers (ViTs) often suffer from optimization collapse. This work adopts Sketched Isotropic Gaussian Regularization (SIGReg), recently introduced in the LeJEPA self-supervised framework, and repurposes it as a general optimization stabilizer for supervised learning. While the original formulation targets the full characteristic function, a computationally efficient variant, Weak-SIGReg, is derived that targets the covariance matrix via random sketching. Inspired by interacting particle systems, representation collapse is viewed as stochastic drift; SIGReg constrains the representation density towards an isotropic Gaussian, mitigating this drift. Empirically, SIGReg recovers the training of a ViT on CIFAR-100 from a collapsed 20.73% to 72.02% accuracy without architectural hacks and significantly improves the convergence of deep vanilla MLPs trained with pure SGD. Code is available at github.com/kreasof-ai/sigreg.
Executive Summary
This article introduces Weak-SIGReg, a computationally efficient variant of the Sketched Isotropic Gaussian Regularization (SIGReg) method, as a general optimization stabilizer for supervised deep learning. By repurposing SIGReg for supervised learning and leveraging random sketching to target the covariance matrix, Weak-SIGReg constrains the representation density towards an isotropic Gaussian, mitigating representation collapse. Empirically, Weak-SIGReg recovers a collapsed Vision Transformer (ViT) on CIFAR-100 from 20.73% to 72.02% accuracy and improves the convergence of deep vanilla Multi-Layer Perceptrons (MLPs) trained with pure Stochastic Gradient Descent (SGD). The method thus offers a way to stabilize optimization dynamics without relying on architectural hacks, and the code released on GitHub facilitates further research and implementation.
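The summary's description suggests a simple form for the penalty: center a batch of representations, project it onto random unit directions (the "sketch"), and push the variance along each direction toward 1, the value an isotropic standard Gaussian would give. The NumPy snippet below is a hypothetical reconstruction under those assumptions, not the authors' implementation; the function name, sketch count, and squared-deviation measure are all illustrative choices.

```python
import numpy as np

def weak_sigreg_loss(z, num_sketches=16, rng=None):
    """Hypothetical Weak-SIGReg-style penalty.

    z: (batch, dim) representations. The covariance is probed along random
    unit sketch directions rather than formed explicitly; the penalty is the
    mean squared deviation of the per-direction variance from 1 (isotropy).
    """
    rng = np.random.default_rng(rng)
    _, d = z.shape
    zc = z - z.mean(axis=0, keepdims=True)           # center the batch
    u = rng.standard_normal((d, num_sketches))
    u /= np.linalg.norm(u, axis=0, keepdims=True)    # unit sketch directions
    proj = zc @ u                                    # (batch, num_sketches)
    var = (proj ** 2).mean(axis=0)                   # variance per direction
    return float(np.mean((var - 1.0) ** 2))
```

On a fully collapsed batch (all representations identical) every sketched variance is 0 and the penalty is exactly 1, while a batch drawn from a standard Gaussian scores near 0, so the value directly signals how far the batch is from isotropy.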
Key Points
- ▸ Weak-SIGReg is a computationally efficient variant of SIGReg, targeting the covariance matrix via random sketching.
- ▸ SIGReg constrains the representation density towards an isotropic Gaussian, mitigating representation collapse.
- ▸ On CIFAR-100, SIGReg recovers a collapsed ViT from 20.73% to 72.02% accuracy and improves the convergence of deep vanilla MLPs trained with pure SGD.
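The collapse-as-stochastic-drift view above implies the penalty should supply a restoring force that pulls a degenerate batch back toward isotropy. The toy demo below illustrates that intuition only; it is not taken from the paper. It starts from a near-collapsed batch, runs plain gradient descent on a sketched isotropy penalty (per-direction variance pushed toward 1) using its analytic gradient, and the penalty shrinks as the batch spreads back out. All names, step sizes, and dimensions are arbitrary illustrative choices.

```python
import numpy as np

def isotropy_penalty_and_grad(z, u):
    """Sketched isotropy penalty and its analytic gradient w.r.t. z.

    z: (n, d) batch of representations; u: (d, k) fixed unit sketch directions.
    Penalty: mean over directions of (variance - 1)^2.
    """
    n, _ = z.shape
    k = u.shape[1]
    zc = z - z.mean(axis=0, keepdims=True)        # center the batch
    proj = zc @ u                                 # (n, k) projections
    var = (proj ** 2).mean(axis=0)                # variance per direction
    loss = float(np.mean((var - 1.0) ** 2))
    # Chain rule through proj = zc @ u, then through the centering map.
    g = (4.0 / (k * n)) * (proj * (var - 1.0)) @ u.T
    g -= g.mean(axis=0, keepdims=True)            # centering: I - (1/n) 11^T
    return loss, g

rng = np.random.default_rng(0)
n, d, k = 256, 16, 8
z = 0.01 * rng.standard_normal((n, d))            # near-collapsed batch
u = rng.standard_normal((d, k))
u /= np.linalg.norm(u, axis=0, keepdims=True)     # unit sketch directions

initial, _ = isotropy_penalty_and_grad(z, u)
for _ in range(300):                              # plain gradient descent on z
    _, g = isotropy_penalty_and_grad(z, u)
    z -= 5.0 * g
final, _ = isotropy_penalty_and_grad(z, u)
```

Note that the gradient vanishes at exact collapse (all projections zero), which is why the demo starts from a *near*-collapsed batch; in training, minibatch noise plays that symmetry-breaking role.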
Merits
Strength
SIGReg provides a novel approach to stabilizing deep learning optimization dynamics, offering a promising solution for low-bias architectures like Vision Transformers.
Efficiency
Weak-SIGReg is computationally efficient, making it a viable alternative to other optimization stabilizers.
Empirical Support
The article presents empirical results demonstrating significant improvements in training deep neural networks, providing evidence of the method's effectiveness.
Demerits
Limitation
The article focuses on a specific type of regularization, which may not be applicable to all types of deep learning tasks.
Scalability
The computational efficiency of Weak-SIGReg may be compromised when dealing with large-scale datasets or complex neural architectures.
Interpretability
The method's effectiveness relies on the ability to constrain the representation density towards an isotropic Gaussian, which may not be easily interpretable in all scenarios.
Expert Commentary
The introduction of Weak-SIGReg marks a useful advance in deep learning optimization. By using random sketching to target the covariance matrix, the method constrains the representation density towards an isotropic Gaussian and thereby mitigates optimization collapse, as the CIFAR-100 ViT recovery and the improved convergence of deep vanilla MLPs demonstrate. However, further research is needed to characterize the method's limitations and its scalability to large datasets and architectures. The released code lowers the barrier to replication and extension, making Weak-SIGReg a valuable contribution to the field.
Recommendations
- ✓ Future research should focus on exploring the applicability of Weak-SIGReg to different types of deep learning tasks and architectures.
- ✓ The development of more interpretable methods for constraining the representation density towards an isotropic Gaussian would be beneficial for understanding the method's effectiveness.