Quantitative Linear Logic for Neuro-Symbolic Learning and Verification
2026-05-13 • Logic in Computer Science
AI summary
The authors discuss differentiable logics, which help neural networks learn by integrating logical rules into their training objective. They identify a tension: existing methods focus either on pure logic or on the analytic needs of neural networks, but rarely both. To address this, the authors propose a new logic, Quantitative Linear Logic (QLL), whose semantics is built from operations common in machine learning, such as sum and log-sum-exp, balancing both concerns. They show that QLL satisfies important logical laws and works well in practice, especially when the trained network's behavior is verified against adversarial attacks.
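As a rough illustration of the first idea (a generic sketch, not the paper's QLL translation; the robustness property, the one-sample relaxation, and the names `robustness_penalty`, `model`, `lam` are hypothetical), a differentiable logic turns a constraint like "the prediction should not change under small perturbations" into a real-valued penalty added to the training loss:

```python
import torch
import torch.nn.functional as F

def robustness_penalty(model, x, y, eps=0.1):
    """Relax "forall x' near x: model predicts y on x'" into a loss term.

    A single perturbed sample stands in for the universal quantifier;
    the penalty is 0 when the constraint holds on that sample.
    """
    x_adv = x + eps * torch.empty_like(x).uniform_(-1.0, 1.0)
    logits = model(x_adv)
    true_logit = logits.gather(-1, y.unsqueeze(-1)).squeeze(-1)
    mask = F.one_hot(y, num_classes=logits.shape[-1]).bool()
    best_wrong = logits.masked_fill(mask, float("-inf")).max(dim=-1).values
    # Hinge on the margin: positive iff some wrong class outscores the truth.
    return torch.relu(best_wrong - true_logit).mean()

# Training then mixes the task loss with the constraint penalty:
#   loss = F.cross_entropy(model(x), y) + lam * robustness_penalty(model, x, y)
```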
Differentiable Logics · Neuro-symbolic Learning · Logical Constraints · Loss Function · Fuzzy Logic · Quantitative Linear Logic · Log-sum-exp · Linear Logic · Neural Network Verification · Adversarial Attacks
Authors
Thomas Flinkow, Ekaterina Komendantskaya, Matteo Capucci, Rosemary Monahan
Abstract
Differentiable logics are deployed in neuro-symbolic learning tasks as a way of embedding logical constraints in the training objective of neural networks. A differentiable logic consists of a syntax for writing logical properties and a semantics that interprets them as real-valued functions to be folded into the loss function. A defining trade-off of the field is that between the logical properties of the connectives and the analytic requirements on the semantics, with both aspects being relevant in applications. At one extreme we find fuzzy logics, which have well-established algebraic and proof-theoretic foundations, and at the other ad hoc differentiable logics like Fischer's DL2, conceived for deep learning applications. However, no satisfactory foundation has emerged yet. We propose a resolution to this long-standing tension via a novel logic, Quantitative Linear Logic (QLL), with foundational ambitions. Our design is driven by naturality -- the idea that, since logical constraints are translated to losses, the semantics of the connectives should be pertinent operations used in ML practice (namely, sum and log-sum-exp) on additive quantities (such as logits). We then judge the result on two aspects: logical adequacy -- the connectives satisfy most of the standard logical laws of Linear Logic; and empirical effectiveness -- test-time performance (as measured by adversarial attacks) correlates well with the actual verification of the logical constraints (as measured by off-the-shelf neural network verifiers), which makes QLL stand out among state-of-the-art techniques.
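To make the naturality idea concrete, here is a minimal sketch of connectives interpreted by sum and log-sum-exp over penalty-valued semantics (0 meaning satisfied, larger meaning more violated). Which connective receives which operation is our assumption for illustration; the paper's actual QLL assignment may differ:

```python
import torch

def conj(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Conjunction as sum: the combined penalty vanishes only
    # when both conjuncts' penalties vanish.
    return a + b

def disj(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Disjunction as a smooth minimum via log-sum-exp: the formula is
    # roughly as satisfied as its most satisfied disjunct, and gradients
    # flow to both disjuncts instead of only the pointwise minimum.
    return -torch.logsumexp(torch.stack([-a, -b]), dim=0)

# Penalty of (p or q) and r, with p, q, r penalties of atomic properties.
p, q, r = torch.tensor(0.0), torch.tensor(2.0), torch.tensor(0.5)
print(conj(disj(p, q), r))  # 0.5 + smooth-min(0, 2) ≈ 0.37
```

Because sum is linear and log-sum-exp is smooth, such a loss is differentiable everywhere and composes the same way logits do, which is the sense in which the semantics uses "pertinent operations from ML practice".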