Relaxation-Informed Training of Neural Network Surrogate Models
2026-04-24 • Machine Learning
AI summary
The authors study how to train neural networks so that they can be more easily used inside certain optimization problems called mixed-integer linear programs (MILPs). Standard training focuses on accuracy but does nothing to make these optimization problems easier to solve. They propose training methods that add penalties reducing the elements that make the MILP hard, such as large big-M constants and unstable neurons. Their experiments show that these methods can make solving the MILPs up to four orders of magnitude faster without losing much accuracy.
Keywords
ReLU neural networks, mixed-integer linear programs (MILPs), global optimization, training regularizers, big-M formulation, unstable neurons, LP relaxation gap, surrogate models, stochastic programming
Authors
Calvin Tsay
Abstract
ReLU neural networks trained as surrogate models can be embedded exactly in mixed-integer linear programs (MILPs), enabling global optimization over the learned function. The tractability of the resulting MILP depends on structural properties of the network, namely the number of binary variables in the associated formulation and the tightness of its continuous LP relaxation. These properties are determined during training, yet standard training objectives (prediction loss with classical weight regularization) offer no mechanism to control them directly. This work studies training regularizers that directly target downstream MILP tractability. Specifically, we propose simple bound-based regularizers that penalize the big-M constants of MILP formulations and/or the number of unstable neurons. Moreover, we introduce an LP relaxation gap regularizer that explicitly penalizes the per-sample gap of the continuous relaxation at training points. We derive its associated gradient and provide an implementation based on LP dual variables, requiring no custom automatic differentiation tools. We show that combining the above regularizers can approximate the full total derivative of the LP gap with respect to the network parameters, capturing both direct and indirect sensitivities. Experiments on non-convex benchmark functions and a two-stage stochastic programming problem with quantile neural network surrogates demonstrate that the proposed regularizers can reduce MILP solve times by up to four orders of magnitude relative to an unregularized baseline, while maintaining competitive surrogate model accuracy.
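To make the bound-based regularizers concrete: the big-M constants in a ReLU MILP encoding scale with the pre-activation bounds of each neuron, and a neuron requires a binary variable only when it is unstable (its pre-activation can take either sign). The NumPy sketch below propagates interval bounds through a fully connected ReLU network and forms two penalties in this spirit. The function names and the particular smooth surrogate for counting unstable neurons are illustrative assumptions, not the paper's exact regularizers.

```python
import numpy as np

def preactivation_bounds(weights, biases, x_lo, x_hi):
    """Interval bound propagation: per-layer pre-activation bounds (l, u)
    for a fully connected ReLU network over the input box [x_lo, x_hi]."""
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    bounds = []
    for W, b in zip(weights, biases):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        pre_lo = W_pos @ lo + W_neg @ hi + b
        pre_hi = W_pos @ hi + W_neg @ lo + b
        bounds.append((pre_lo, pre_hi))
        lo, hi = np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)  # ReLU
    return bounds

def big_m_penalty(bounds):
    # the big-M constants of the MILP encoding scale with max(-l, 0) and max(u, 0)
    return sum(float(np.sum(np.maximum(-lo, 0) + np.maximum(hi, 0)))
               for lo, hi in bounds)

def unstable_penalty(bounds):
    # a neuron needs a binary variable only if it is unstable (l < 0 < u);
    # illustrative smooth surrogate: product of the two sign margins,
    # which vanishes exactly for stably-active or stably-inactive neurons
    return sum(float(np.sum(np.maximum(-lo, 0) * np.maximum(hi, 0)))
               for lo, hi in bounds)
```

In a training loop, either penalty would be added to the prediction loss with a weight hyperparameter; driving neurons toward stability shrinks both the binary-variable count and the big-M constants of the downstream MILP.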
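The dual-variable gradient mentioned in the abstract rests on a standard LP sensitivity fact: the dual (marginal) of a constraint is the derivative of the optimal value with respect to that constraint's right-hand side. The sketch below demonstrates this mechanism generically with SciPy's `linprog`, reading the gradient off the solver's duals with no custom automatic differentiation; it is an illustration of the principle, not the paper's actual LP relaxation of a ReLU network.

```python
import numpy as np
from scipy.optimize import linprog

def lp_value_and_rhs_grad(c, A_ub, b_ub):
    """Solve min c @ x  s.t.  A_ub @ x <= b_ub and return the optimal
    value together with its gradient w.r.t. the right-hand side b_ub,
    read directly from the LP dual variables (constraint marginals)."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * len(c), method="highs")
    assert res.status == 0, res.message
    # HiGHS reports, for each inequality, the sensitivity of the optimal
    # value to its right-hand side -- exactly the derivative needed for
    # a gradient-based penalty on the relaxation's optimal value.
    return res.fun, res.ineqlin.marginals

# tiny example: min -x1 - x2  subject to  x1 <= 1, x2 <= 2, x >= 0
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 2.0, 0.0, 0.0])
val, grad = lp_value_and_rhs_grad(c, A, b)
# val = -3.0; grad = (-1, -1, 0, 0): only the binding constraints matter
```

Since the bounds entering an LP relaxation of a ReLU network are themselves functions of the weights, chaining such duals with ordinary backpropagation through the bound computation is one plausible route to the "direct and indirect sensitivities" the abstract refers to.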