What is Learnable in Valiant's Theory of the Learnable?

2026-05-13

Data Structures and Algorithms, Machine Learning
AI summary

The authors revisit the learning model originally introduced by Valiant in 1984, which differs from the well-known PAC learning model because the learner only gets positive examples and can ask membership queries. They show that a class of concepts is learnable in this model if and only if there is a way to compress positive samples through a short interaction involving queries. Their findings demonstrate that adding membership queries enables learning some classes that are not learnable without them, placing this model strictly between PAC learning and Valiant's no-query variant. They also provide the first learning algorithm for halfspaces in this setting and prove necessary limits on sample and query complexity. Overall, their work reveals important structural properties of Valiant's original model and extends understanding of how queries affect learnability.
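To make the interaction model concrete, here is a toy sketch (our illustration, not the paper's algorithm) of positive-only learning with membership queries on the Boolean hypercube. The hidden concept, the variable names, and the refinement step are all hypothetical; the classic rule of hypothesizing the conjunction of every variable that equals 1 in all positive examples guarantees no false positives, since the hypothesis's literal set contains the target's.

```python
import itertools

def target(x):
    # Hidden concept (hypothetical example): x[0] AND x[2].
    return x[0] == 1 and x[2] == 1

def membership_query(x):
    # The oracle the learner may call in Valiant's model.
    return target(x)

def learn_conjunction(positives, n):
    # Keep only indices that are 1 in every positive example; this
    # hypothesis can only be a subset of the target, so it never
    # labels a negative point positive.
    kept = [i for i in range(n) if all(x[i] == 1 for x in positives)]
    # Refinement via membership queries: flipping a kept index to 0 in
    # the all-ones point stays positive iff that variable is not in the
    # (monotone) target, so it can safely be dropped.
    for i in list(kept):
        probe = [1] * n
        probe[i] = 0
        if membership_query(probe):
            kept.remove(i)
    return lambda x: all(x[i] == 1 for i in kept)

n = 4
positives = [[1, 1, 1, 0], [1, 0, 1, 1]]  # points with target(x) = True
h = learn_conjunction(positives, n)

# Sanity check: the learned hypothesis has no false positives.
for x in itertools.product([0, 1], repeat=n):
    if h(list(x)):
        assert target(list(x))
```

The membership queries here only speed up refinement; the paper's point is that for richer classes (e.g. halfspaces), queries change what is learnable at all.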

PAC learning, Valiant's learning model, membership queries, sample compression, adaptive query-compression, halfspaces, Boolean hypercube, learnability, sample complexity, computational complexity
Authors
Steve Hanneke, Anay Mehrotra, Grigoris Velegkas, Manolis Zampetakis
Abstract
Valiant's 1984 paper is widely credited with introducing the PAC learning model, but it, in fact, introduced a different model: unlike PAC learning, the learner receives only positives, may issue membership queries, and must output a hypothesis with no false positives. Prior work characterized variants, including the case without queries. We revisit Valiant's original model and ask: *Which classes are learnable in it?* For every finite domain, including Valiant's Boolean-hypercube setting, we show that a class is learnable if and only if every realizable positive sample can be certified by a poly-size adaptive query-compression scheme. This is a new variant of sample compression where the learner certifies samples via a short interaction with the membership oracle. Our characterization shows that learnability in Valiant's model is strictly sandwiched between learnability in the PAC model and the variant of Valiant's model without membership queries. This is one of the rare cases where introducing membership queries changes the set of learnable classes, and not just the sample or computational complexity. Next, we study the natural extension of the model to arbitrary domains. While we do not obtain an exact characterization, our techniques readily generalize and show that the same strict sandwiching persists. Finally, we show that $d$-dimensional halfspaces, which are not learnable without queries, are learnable with queries: we give a $\mathrm{poly}(d) \tilde{O}(1/ε)$ sample and $\mathrm{poly}(d) \mathrm{polylog}(1/ε)$ query algorithm, and prove that at least $Ω(d)$ samples or queries are necessary. To our knowledge, this is the first algorithm for halfspaces in Valiant's model. Together, these results uncover a surprisingly rich theory behind Valiant's original notion of learnability and introduce ideas that may be of independent interest in learning theory.