Chebyshev Center-Based Direction Selection for Multi-Objective Optimization and Training PINNs

2026-05-11

Machine Learning
AI summary

The authors address the challenge of training physics-informed neural networks (PINNs): combining the multiple loss terms that arise from the PDE residual and from boundary or initial conditions. Instead of enforcing desirable properties of the update direction one by one, they propose a method that uses a geometric principle, the Chebyshev center, to pick the update direction during training. This formulation admits an efficient low-dimensional dual problem, comes with a convergence guarantee, and recovers the benefits of previous methods without imposing them as separate constraints. Their experiments show that the method performs well in practice.

Physics-informed neural networks, Partial differential equations, Optimization, Loss function, Chebyshev center, Dual cone, Convergence guarantee, Nonconvex optimization, Gradient descent, Geometric interpretation
Authors
Hoyeol Yoon, Seoungbin Bae, Nam Ho-Nguyen, Dabeen Lee
Abstract
Physics-informed neural networks (PINNs) are a promising approach for solving partial differential equations (PDEs). Their training, however, is often difficult because multiple loss terms induced by PDE residuals and boundary or initial conditions must be optimized simultaneously. To address this difficulty, existing approaches often construct update directions by explicitly enforcing particular desirable properties, such as scale robustness and simultaneous descent. While effective in many cases, such property-by-property designs can make it unclear which conditions are essential, what geometric principle determines the selected update direction, and how different methods are structurally related. In this work, we formulate update-direction selection for PINN training as a Chebyshev-center problem in the dual cone. The proposed formulation selects a normalized direction that maximizes the minimum distance to the cone facets. The resulting formulation admits an efficient dual problem in a much lower-dimensional space and yields a convergence guarantee in the nonconvex setting. It also recovers the key desirable properties targeted by existing approaches without imposing them separately; rather, they follow from the single geometric criterion underlying the formulation. This makes the selected direction interpretable through a single geometric rule and provides a unified basis for systematically comparing related direction-selection methods. Experiments on several PINN benchmarks further demonstrate strong empirical performance of the proposed method.
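The geometric criterion in the abstract, choosing a unit direction that maximizes the worst-case alignment with the per-term gradients, is closely related to classic min-norm constructions for multi-objective descent. The sketch below is not the authors' algorithm; it is a hedged illustration of the underlying idea, where the dual problem over the simplex of loss weights (a space whose dimension is the number of loss terms, not the number of network parameters) is solved with Frank-Wolfe, and the function name `min_norm_direction` and all numerical details are my own assumptions.

```python
import numpy as np

def min_norm_direction(grads, iters=200):
    """Illustrative sketch: common-descent direction for several loss terms.

    Solves min_{lambda in simplex} ||sum_i lambda_i g_i||^2 by Frank-Wolfe,
    with each gradient normalized to unit length (scale robustness). The
    returned unit direction maximizes the minimum inner product with the
    normalized gradients, so it is a simultaneous-descent direction when
    one exists.
    """
    G = np.array([g / np.linalg.norm(g) for g in grads])  # (k, n), unit rows
    k = G.shape[0]
    lam = np.full(k, 1.0 / k)                             # start at simplex center
    GG = G @ G.T                                          # k x k Gram matrix
    for _ in range(iters):
        v = GG @ lam                       # gradient of 0.5*||G^T lam||^2 in lam
        i = int(np.argmin(v))              # best simplex vertex (linear minimizer)
        e = np.zeros(k); e[i] = 1.0
        step = e - lam
        denom = step @ GG @ step
        if denom <= 1e-12:
            break
        # exact line search for the quadratic objective along `step`
        t = np.clip(-(lam @ GG @ step) / denom, 0.0, 1.0)
        if t < 1e-12:
            break
        lam = lam + t * step
    d = G.T @ lam                          # min-norm convex combination
    n = np.linalg.norm(d)
    return d / n if n > 1e-12 else d
```

Because the optimization runs over the k-dimensional simplex rather than the parameter space, its cost is negligible next to backpropagation; this mirrors the abstract's point that the dual problem lives in a much lower-dimensional space.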