Rethinking Input Domains in Physics-Informed Neural Networks via Geometric Compactification Mappings
arXiv:2602.16193v1 Announce Type: new Abstract: Several complex physical systems are governed by multi-scale partial differential equations (PDEs) that exhibit both smooth low-frequency components and localized high-frequency structures. Existing physics-informed neural network (PINN) methods typically train with fixed coordinate system inputs, where geometric misalignment with these structures induces gradient stiffness and ill-conditioning that hinder convergence. To address this issue, we introduce a mapping paradigm that reshapes the input coordinates through differentiable geometric compactification mappings and couples the geometric structure of PDEs with the spectral properties of residual operators. Based on this paradigm, we propose Geometric Compactification (GC)-PINN, a framework that introduces three mapping strategies for periodic boundaries, far-field scale expansion, and localized singular structures in the input domain without modifying the underlying PINN architecture. Extensive empirical evaluation demonstrates that this approach yields more uniform residual distributions and higher solution accuracy on representative 1D and 2D PDEs, while improving training stability and convergence speed.
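The abstract names three mapping families: one for periodic boundaries, one for far-field scale expansion, and one for localized singular structures. The paper's exact functional forms are not given here, so the sketch below uses common, plausible choices for each family (a circular embedding, a `tanh` compactification, and an `arcsinh` stretching); treat the function names and parameters as illustrative assumptions, not the authors' definitions.

```python
import numpy as np

# Hypothetical instances of the three mapping families the abstract names;
# the paper's actual mappings may differ in form and parameterization.

def periodic_embedding(x, L=1.0):
    """Periodic-boundary strategy: map a coordinate with period L onto the
    unit circle, so the network input is exactly periodic by construction."""
    theta = 2.0 * np.pi * x / L
    return np.stack([np.sin(theta), np.cos(theta)], axis=-1)

def farfield_compactification(x, s=1.0):
    """Far-field scale-expansion strategy: compress an unbounded coordinate
    into (-1, 1) so far-field behavior occupies a finite input range."""
    return np.tanh(x / s)

def singular_refinement(x, x0=0.0, eps=0.1):
    """Localized-singularity strategy: stretch coordinates near a structure
    at x0 (width ~eps) so the network sees it at higher effective resolution."""
    return np.arcsinh((x - x0) / eps)
```

All three are smooth and differentiable, which matters because PDE residuals are computed by automatic differentiation through the mapping.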
Executive Summary
This paper proposes Geometric Compactification (GC)-PINN, a framework that addresses the limitations of traditional physics-informed neural networks (PINNs) by incorporating differentiable geometric compactification mappings. By reshaping input coordinates and coupling the geometric structure of partial differential equations (PDEs) with the spectral properties of residual operators, GC-PINN improves solution accuracy, training stability, and convergence speed. The approach is demonstrated on representative 1D and 2D PDEs, showing its effectiveness on systems that combine smooth low-frequency components with localized high-frequency structures. The framework has clear implications for physics-informed learning, particularly in mitigating the ill-conditioning and gradient stiffness induced by geometric misalignment between fixed input coordinates and solution features. Its potential applications in domains such as computational mechanics and fluid dynamics make it a valuable contribution to the field.
Key Points
- ▸ Geometric Compactification (GC)-PINN framework addresses limitations of traditional PINNs
- ▸ Differentiable geometric compactification mappings improve solution accuracy and convergence speed
- ▸ GC-PINN demonstrates effectiveness on representative 1D and 2D PDEs
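A key claim above is that the mappings work "without modifying the underlying PINN architecture": the only change is that the network receives mapped coordinates instead of raw ones. The sketch below makes that concrete with a tiny fixed MLP standing in for an arbitrary PINN backbone; the mapping `phi` (a periodic embedding here, as one illustrative choice) is composed in front of it, and the backbone itself is untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed MLP standing in for an arbitrary PINN backbone
# (2 input features -> 16 hidden units -> 1 output).
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def mlp(z):
    return np.tanh(z @ W1 + b1) @ W2 + b2

def phi(x, L=1.0):
    # Illustrative periodic compactification; other mappings plug in the same way.
    theta = 2.0 * np.pi * x / L
    return np.stack([np.sin(theta), np.cos(theta)], axis=-1)

def gc_pinn(x):
    # The only change relative to a plain PINN is at the input: u(x) = mlp(phi(x)).
    return mlp(phi(x))

x = np.linspace(0.0, 1.0, 5)
u = gc_pinn(x)  # periodicity holds by construction: u(0) == u(L)
```

Because the mapping bakes the boundary geometry into the input, the network no longer has to learn periodicity from the loss, which is one mechanism by which such mappings can flatten the residual distribution.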
Merits
Improved Solution Accuracy
GC-PINN's ability to reshape input coordinates and couple geometric structure with residual operators leads to more accurate solutions, particularly in systems with localized high-frequency structures.
Enhanced Training Stability
The framework's incorporation of geometric compactification mappings significantly improves training stability, reducing the risk of ill-conditioning and gradient stiffness.
Increased Convergence Speed
GC-PINN's coordinate mappings enable faster convergence while leaving the underlying PINN architecture unchanged, making it a more efficient approach for solving complex PDEs.
Demerits
Computational Complexity
The introduction of geometric compactification mappings may increase computational complexity, potentially affecting the framework's scalability and applicability to large-scale systems.
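One concrete source of this extra cost: every derivative in the PDE residual must now be taken through the mapping, so the chain rule introduces Jacobian factors of the mapping at each derivative order. The sketch below illustrates this with a hypothetical far-field mapping and a stand-in for the network output, verified against a finite difference; the specific functions are assumptions for illustration only.

```python
import numpy as np

def phi(x, s=1.0):
    """Hypothetical far-field compactification: xi = tanh(x / s)."""
    return np.tanh(x / s)

def dphi_dx(x, s=1.0):
    """Jacobian of the mapping: d(tanh(x/s))/dx = (1 - tanh(x/s)^2) / s."""
    return (1.0 - np.tanh(x / s) ** 2) / s

def u_of_xi(xi):
    """Stand-in for the network output in mapped coordinates."""
    return np.sin(3.0 * xi)

def du_dx(x):
    # Chain rule: du/dx = (du/dxi) * (dphi/dx).
    # Each derivative order adds mapping terms: the second derivative
    # already involves both phi'(x)^2 and phi''(x), so higher-order
    # residuals get progressively more expensive to evaluate.
    return 3.0 * np.cos(3.0 * phi(x)) * dphi_dx(x)

# Finite-difference check that the chain-rule derivative is correct.
x, h = 0.7, 1e-6
fd = (u_of_xi(phi(x + h)) - u_of_xi(phi(x - h))) / (2 * h)
```

In an autograd framework this bookkeeping is automatic, but the extra Jacobian terms still show up in the computational graph, which is the likely source of the overhead noted above.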
Limited Generalizability
While GC-PINN demonstrates effectiveness on specific PDEs, its generalizability to other systems and domains requires further investigation and validation.
Expert Commentary
The introduction of Geometric Compactification (GC)-PINN marks a significant advancement in the field of physics-informed neural networks. By addressing the limitations of traditional PINNs, GC-PINN offers a more efficient and effective approach for solving complex partial differential equations. The framework's ability to reshape input coordinates and couple geometric structure with residual operators is a notable innovation. However, further investigation is required to fully explore the framework's potential and limitations. In particular, the impact of geometric compactification mappings on computational complexity and generalizability must be carefully evaluated. Nonetheless, GC-PINN is a valuable contribution to the field, and its implications for future research and applications are substantial.
Recommendations
- ✓ Further research is necessary to fully explore the potential and limitations of GC-PINN, including its applicability to various domains and systems.
- ✓ The development of GC-PINN highlights the importance of geometric analysis in neural network design, emphasizing the need for further investigation into the role of geometric compactification in PINN frameworks.