The unstable formula theorem revisited

by Maryanthe Malliaris, Shay Moran

Released as an article.

2022  

Abstract

We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
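As a small illustration of the combinatorial notion underlying the abstract, the sketch below computes the Littlestone dimension of a finite hypothesis class by the standard recursion: Ldim(H) is the largest d such that some point x splits H into two subclasses, each of dimension at least d - 1. This is an assumed illustrative encoding (hypotheses as bit-tuples over a finite domain), not code from the paper.

```python
def ldim(H):
    """Littlestone dimension of a finite class H.

    H is a frozenset of hypotheses, each a tuple of 0/1 labels
    indexed by the points of a finite domain (illustrative encoding).
    Convention: Ldim of the empty class is -1; a singleton has Ldim 0.
    """
    if len(H) <= 1:
        return len(H) - 1
    n = len(next(iter(H)))  # domain size
    best = 0
    for x in range(n):
        # Split H by the label assigned to point x.
        H0 = frozenset(h for h in H if h[x] == 0)
        H1 = frozenset(h for h in H if h[x] == 1)
        # Only a point that genuinely splits H can witness a deeper tree;
        # skipping empty sides also guarantees the recursion terminates.
        if H0 and H1:
            best = max(best, 1 + min(ldim(H0), ldim(H1)))
    return best
```

For example, the class of thresholds on a 3-point domain, encoded as the tuples (0,0,0), (0,0,1), (0,1,1), (1,1,1), has Littlestone dimension 2, matching the mistake bound of binary search over the four thresholds.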

Archived Files and Locations

application/pdf  923.2 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2022-12-09
Version   v1
Language   en
arXiv  2212.05050v1
Catalog Record
Revision: ba869655-d5f5-4930-bf64-834fce73550b