The unstable formula theorem revisited
by Maryanthe Malliaris, Shay Moran
2022
Abstract
We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
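Littlestone classes, named in the abstract, are those hypothesis classes with finite Littlestone dimension, the combinatorial parameter that governs online learnability. As an illustrative aside (not taken from the paper), for a small finite class this dimension can be computed by brute force from its standard recursive definition: a class has dimension at least d+1 at some point x exactly when both restrictions of the class at x still have dimension at least d.

```python
def ldim(H, domain):
    """Littlestone dimension of a finite class by exhaustive search.

    Hypotheses are tuples of 0/1 labels indexed by a finite domain.
    Recursion: Ldim(H) = max over x of 1 + min(Ldim(H | h(x)=0),
    Ldim(H | h(x)=1)), taken over x where both restrictions are
    nonempty; a class with at most one hypothesis has dimension 0.
    """
    H = list(set(H))
    if len(H) <= 1:
        return 0
    best = 0
    for x in domain:
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:  # both branches of the mistake tree must survive
            best = max(best, 1 + min(ldim(H0, domain), ldim(H1, domain)))
    return best

# Thresholds on a 3-point domain: 4 hypotheses, Littlestone dimension 2.
thresholds = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]
print(ldim(thresholds, range(3)))  # → 2
```

The threshold class above is the standard example separating Littlestone dimension from VC dimension: thresholds have VC dimension 1 but Littlestone dimension growing logarithmically with the domain size.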
Archived Files and Locations
application/pdf, 923.2 kB (file_tntz7vnlarclrky3n66d4utzku)
arxiv.org (repository), web.archive.org (webarchive)
arXiv:2212.05050v1