Free Energy Manifold: Score-Based Inference for Hybrid Bayesian Networks

2026-05-11

Machine Learning, Artificial Intelligence
AI summary

The authors present the Free Energy Manifold (FEM), a model for inference in hybrid Bayesian networks that contain both discrete and continuous variables. FEM represents conditional probabilities as energy landscapes, which lets it evaluate posteriors, generate new data, and combine evidence from multiple sources. The authors identify a failure mode of standard conditional energy models, the mode-bridge artifact, in which a model becomes overconfident in uncertain regions between separated modes, and they correct it with a new technique called valley regularization. Their experiments show that FEM improves accuracy especially in complex cases with multiple modes or combined evidence, though traditional classifiers still work better for straightforward classification tasks.

Free Energy Manifold, Bayesian networks, conditional energy model, discrete and continuous variables, posterior evaluation, mode-bridge artifact, valley regularization, multimodal inference, compositional inference, KL divergence
Authors
Cheol Young Park, Shou Matsumoto
Abstract
We introduce the Free Energy Manifold (FEM), a score-trained conditional energy model specialized for inference in hybrid Bayesian networks with discrete and continuous variables. FEM represents each conditional factor as an energy landscape over learned discrete-parent embeddings and continuous observations, enabling posterior evaluation, generative sampling, and compositional inference across multiple continuous leaves by energy addition under conditional independence. A central finding is the mode-bridge artifact: standard conditional energy models can create low-energy ridges between separated modes of the same class, producing overconfident posteriors at off-data interior points. We analyze this failure and propose valley regularization, an off-data calibration term that restores near-uniform posteriors in such regions while preserving in-data fit. Across synthetic multimodal hybrid-BN benchmarks, FEM substantially reduces KL divergence relative to classical baselines and a vanilla conditional EBM, including large gains at mode-bridge midpoint queries and in multi-leaf evidence composition. We also evaluate high-cardinality discrete-parent settings and a UCI Breast Cancer sanity check, showing that FEM is most useful when multimodal or compositional Bayesian-network inference is required, while discriminative classifiers remain preferable for closed-world classification tasks.
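The abstract's claim that evidence from multiple continuous leaves composes "by energy addition under conditional independence" can be illustrated with a minimal sketch. This is not the authors' implementation: the quadratic per-leaf energies and per-class means below are hypothetical stand-ins for FEM's learned energy landscapes, used only to show the composition rule p(c | x_1..x_L) ∝ p(c) · exp(−Σ_l E_l(c, x_l)).

```python
import numpy as np

# Hypothetical per-leaf energies: lower energy = better fit of the
# continuous observation x to discrete-parent state c. A quadratic
# around a per-class mean stands in for a learned energy landscape.
LEAF_MEANS = [
    np.array([0.0, 2.0, 4.0]),   # leaf 1: mean of x for c in {0, 1, 2}
    np.array([1.0, 1.5, 5.0]),   # leaf 2
]

def leaf_energy(leaf, c, x):
    return 0.5 * (x - LEAF_MEANS[leaf][c]) ** 2

def posterior(observations, prior=None):
    """Posterior over the discrete parent given one observation per leaf.

    Under conditional independence the leaf energies simply add:
    p(c | x_1..x_L) is proportional to p(c) * exp(-sum_l E_l(c, x_l)).
    """
    n_classes = LEAF_MEANS[0].shape[0]
    prior = np.full(n_classes, 1.0 / n_classes) if prior is None else prior
    total_energy = np.array([
        sum(leaf_energy(l, c, x) for l, x in enumerate(observations))
        for c in range(n_classes)
    ])
    logits = np.log(prior) - total_energy
    logits -= logits.max()            # stabilize before exponentiating
    p = np.exp(logits)
    return p / p.sum()

p = posterior([2.1, 1.4])  # evidence at both continuous leaves
print(p.argmax())          # class 1 best explains both leaves jointly
```

Note that composition happens in energy (log-probability) space, so multi-leaf evidence never requires multiplying normalized densities; the single softmax at the end handles normalization once.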