UniHetCO: A Unified Heterogeneous Representation for Multi-Problem Learning in Unsupervised Neural Combinatorial Optimization
arXiv:2603.11456v1 Announce Type: new Abstract: Unsupervised neural combinatorial optimization (NCO) offers an appealing alternative to supervised approaches by training learning-based solvers without ground-truth solutions, directly minimizing instance objectives and constraint violations. Yet for graph node subset-selection problems (e.g., Maximum Clique and Maximum Independent Set), existing unsupervised methods are typically specialized to a single problem class and rely on problem-specific surrogate losses, which hinders learning across classes within a unified framework. In this work, we propose UniHetCO, a unified heterogeneous graph representation for constrained quadratic programming-based combinatorial optimization that encodes problem structure, objective terms, and linear constraints in a single input. This formulation enables training a single model across multiple problem classes with a unified label-free objective. To improve stability under multi-problem learning, we employ a gradient-norm-based dynamic weighting scheme that alleviates gradient imbalance among classes. Experiments on multiple datasets and four constrained problem classes demonstrate competitive performance with state-of-the-art unsupervised NCO baselines, strong cross-problem adaptation potential, and effective warm starts for a commercial classical solver under tight time limits.
Executive Summary
This article introduces UniHetCO, a unified heterogeneous representation for multi-problem learning in unsupervised neural combinatorial optimization (NCO). The approach encodes problem structure, objective terms, and linear constraints in a single input, so that one model can be trained across multiple problem classes with a unified label-free objective, directly minimizing instance objectives and constraint violations without ground-truth solutions. This addresses a limitation of existing unsupervised methods, which rely on problem-specific surrogate losses that hinder learning across problem classes. A gradient-norm-based dynamic weighting scheme alleviates gradient imbalance among classes, improving stability under multi-problem training. Experiments on multiple datasets and four constrained problem classes demonstrate performance competitive with state-of-the-art unsupervised NCO baselines, strong cross-problem adaptation potential, and effective warm starts for a commercial classical solver under tight time limits.
Key Points
- ▸ UniHetCO introduces a unified heterogeneous graph representation for constrained quadratic programming-based combinatorial optimization.
- ▸ The formulation encodes problem structure, objective terms, and linear constraints in a single input.
- ▸ The approach enables training a single model across multiple problem classes with a unified label-free objective.
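To make the unified representation concrete, here is an illustrative sketch of how a constrained binary quadratic program (minimize x^T Q x subject to A x <= b) could be encoded as a heterogeneous graph with variable and constraint node types. The node/edge naming and feature layout are assumptions for illustration; the paper's exact encoding is not reproduced here.

```python
import numpy as np

def qp_to_hetero_graph(Q, A, b):
    """Encode a binary QP  min x^T Q x  s.t.  A x <= b  as a
    heterogeneous graph (illustrative sketch, not the paper's
    exact encoding).

    Node types: 'var' (one per decision variable) and 'con'
    (one per linear constraint). Edge types: 'obj' edges between
    variable pairs with nonzero Q entries, and 'mem' membership
    edges from each constraint to the variables it involves.
    """
    n = Q.shape[0]
    return {
        "var_nodes": list(range(n)),
        "con_nodes": list(range(A.shape[0])),
        # objective edges carry the quadratic coefficients
        "obj_edges": [(i, j, Q[i, j]) for i in range(n)
                      for j in range(n) if Q[i, j] != 0],
        # membership edges carry constraint coefficients
        "mem_edges": [(k, j, A[k, j]) for k in range(A.shape[0])
                      for j in range(n) if A[k, j] != 0],
        # constraint nodes carry right-hand sides as features
        "con_feats": b.tolist(),
    }
```

Because objective and constraint data both live in the graph itself, a single message-passing model can in principle consume instances from different problem classes without class-specific input pipelines.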
Merits
Strength in problem agnosticism
UniHetCO removes the reliance on problem-specific surrogate losses, allowing unified learning across multiple problem classes with a single model.
Efficient label-free optimization
The formulation enables direct minimization of instance objectives and constraint violations, improving efficiency.
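A minimal sketch of such a label-free objective: relax the binary variables to probabilities p in [0, 1]^n and minimize the quadratic objective plus a squared penalty on violated linear constraints. The penalty form and the weight `lam` are hypothetical choices for illustration, not the paper's exact loss.

```python
import numpy as np

def label_free_loss(p, Q, A, b, lam=10.0):
    """Unsupervised surrogate loss over relaxed assignments p in [0,1]^n:
    the relaxed quadratic objective plus a squared penalty on violated
    linear constraints (illustrative sketch; `lam` is a hypothetical
    penalty weight)."""
    objective = p @ Q @ p                     # relaxed quadratic objective
    violation = np.maximum(A @ p - b, 0.0)    # per-constraint slack
    return objective + lam * np.sum(violation ** 2)
```

Since the loss is computed directly from the instance data (Q, A, b), no ground-truth solutions are needed, which is the defining property of the unsupervised setting.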
Improved stability under multi-problem learning
The dynamic weighting scheme alleviates gradient imbalance among classes, enhancing stability.
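One simple way such a scheme could work, sketched below under assumptions (the paper's exact update rule is not given here): rescale each problem class's loss weight inversely to the norm of its loss gradient, so no single class dominates the shared update.

```python
import numpy as np

def dynamic_weights(grad_norms, eps=1e-8):
    """Rescale per-class loss weights inversely to their gradient norms
    so each problem class contributes comparably to the shared update
    (a simplified, GradNorm-style sketch; not the paper's exact scheme).
    Weights are normalized to sum to the number of classes K."""
    g = np.asarray(grad_norms, dtype=float)
    target = g.mean()                  # common target gradient magnitude
    w = target / (g + eps)             # down-weight dominant classes
    return w * len(g) / w.sum()        # normalize so weights sum to K
```

In training, `grad_norms` would be the norms of each class's loss gradient with respect to shared parameters, recomputed periodically; classes with larger gradients receive smaller weights, counteracting gradient imbalance.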
Demerits
Complexity in encoding problem structure
The unified representation may require significant computational resources to generate and process.
Potential for overfitting to specific problem classes
The unified model may not generalize well to new, unseen problem classes.
Expert Commentary
UniHetCO is a significant contribution to the field of neural combinatorial optimization. By decoupling problem-specific surrogate losses, the approach enables unified learning across multiple problem classes. The formulation's efficiency and effectiveness in label-free optimization are notable advantages. However, the complexity of encoding problem structure and potential for overfitting to specific problem classes are limitations that require careful consideration. Further research is needed to explore the scalability and generalizability of UniHetCO.
Recommendations
- ✓ Investigate the scalability of UniHetCO for large-scale problem instances.
- ✓ Explore the application of UniHetCO to other problem classes and domains.