Wavefront and caustic surfaces of refractive laser beam shaper
David L. Shealy, John A. Hoffnagle
SPIE Optical Engineering + Applications 2007
We state and analyze the first active learning algorithm that finds an ε-optimal hypothesis in any hypothesis class, when the underlying distribution has arbitrary forms of noise. The algorithm, A² (for Agnostic Active), relies only upon the assumption that it has access to a stream of unlabeled examples drawn i.i.d. from a fixed distribution. We show that A² achieves an exponential improvement (i.e., requires only O(ln(1/ε)) samples to find an ε-optimal classifier) over the usual sample complexity of supervised learning, for several settings considered before in the realizable case. These include learning threshold classifiers and learning homogeneous linear separators with respect to an input distribution which is uniform over the unit sphere. © 2008 Elsevier Inc. All rights reserved.
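To make the claimed exponential saving concrete, the following is a minimal sketch of the realizable (noise-free) threshold setting mentioned in the abstract, not the A² algorithm itself, which is designed for arbitrary noise. The pool size, the hypothetical `oracle` labeling function, and the specific constants are assumptions for illustration; the point is that a binary search over an unlabeled pool of size ~1/ε needs only O(ln(1/ε)) label queries, whereas passive learning would label the whole pool.

```python
import numpy as np

def active_threshold_learner(xs, oracle):
    """Binary-search active learner for 1-D threshold classifiers.

    xs     : sorted array of unlabeled points drawn i.i.d. from the input distribution
    oracle : labeling function (queried only when needed), returning 0 or 1

    In the realizable case each query halves the version space, so only
    O(log(len(xs))) labels are needed instead of labeling every point.
    """
    lo, hi = 0, len(xs) - 1
    # Assumes (for this demo) that xs[lo] is labeled 0 and xs[hi] is labeled 1.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if oracle(xs[mid]) == 0:   # threshold lies to the right of xs[mid]
            lo = mid
        else:                      # threshold lies at or to the left of xs[mid]
            hi = mid
    return 0.5 * (xs[lo] + xs[hi])  # any point in the remaining interval is consistent


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_threshold = 0.37            # hypothetical target concept
    eps = 1e-4
    pool = np.sort(rng.uniform(0.0, 1.0, size=int(1 / eps)))  # ~1/eps unlabeled draws
    n_queries = 0

    def oracle(x):
        global n_queries
        n_queries += 1
        return int(x >= true_threshold)

    estimate = active_threshold_learner(pool, oracle)
    print(f"estimate={estimate:.4f}, labels used={n_queries}")  # roughly log2(1/eps) queries
```

A passive learner facing the same accuracy target would need on the order of 1/ε labeled examples, which is the contrast the abstract refers to as an exponential improvement in label complexity.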
L. Auslander, E. Feig, et al.
Advances in Applied Mathematics
John R. Kender, Rick Kjeldsen
IEEE Transactions on Pattern Analysis and Machine Intelligence
Leo Liberti, James Ostrowski
Journal of Global Optimization