Khalid Abdulla, Andrew Wirth, et al.
ICIAfS 2014
We extend existing theory on the stability, namely how much changes in the training data influence the estimated model, and the generalization performance of deterministic learning algorithms to the case of randomized algorithms. We give formal definitions of stability for randomized algorithms and prove non-asymptotic bounds on the difference between the empirical and expected error, as well as between the leave-one-out and expected error, of such algorithms; these bounds depend on their random stability. The setup we develop for this purpose can also be used to study randomized learning algorithms more generally. We then use these general results to study the effects of bagging on the stability of a learning method and to prove non-asymptotic bounds on the predictive performance of bagging that were not possible to prove with the existing stability theory for deterministic learning algorithms.
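To make the idea concrete, here is a minimal numerical sketch (not from the paper): it treats bagging as a randomized learning algorithm and measures a crude empirical analogue of leave-one-out stability, i.e. how much the prediction at a point changes when that point is removed from the training set. The ridge-regression base learner, the squared-loss setting, and the function names are illustrative assumptions, not the paper's formal definitions or bounds.

import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1.0):
    # Deterministic base learner: closed-form ridge regression,
    # w = (X^T X + lam * I)^{-1} X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def fit_bagged(X, y, n_bags=50, lam=1.0, rng=rng):
    # Bagging as a randomized algorithm: fit the base learner on
    # bootstrap resamples and average. For a linear model, averaging
    # the weight vectors is the same as averaging the predictors.
    n = len(y)
    ws = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)  # bootstrap sample, with replacement
        ws.append(fit_ridge(X[idx], y[idx], lam))
    return np.mean(ws, axis=0)

def empirical_loo_stability(X, y, fit, n_trials=20):
    # Rough empirical stand-in for leave-one-out stability: average
    # absolute change in the prediction at x_i when (x_i, y_i) is removed.
    # Both models are refit on every trial so that, for a randomized
    # algorithm like bagging, its own randomness enters the comparison.
    n = len(y)
    diffs = []
    for _ in range(n_trials):
        i = rng.integers(0, n)
        mask = np.arange(n) != i
        w_full = fit(X, y)
        w_loo = fit(X[mask], y[mask])
        diffs.append(abs(X[i] @ w_full - X[i] @ w_loo))
    return float(np.mean(diffs))

# Synthetic data: y = <x, w*> + noise
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

print("single ridge:", empirical_loo_stability(X, y, fit_ridge))
print("bagged ridge:", empirical_loo_stability(X, y, fit_bagged))

The printed numbers are only an empirical illustration of the notion of (random) stability that the paper's bounds are stated in terms of; they are not the formal quantities appearing in those bounds.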
Amy Lin, Sujit Roy, et al.
AGU 2024
Ora Nova Fandina, Eitan Farchi, et al.
AAAI 2026
Rangachari Anand, Kishan Mehrotra, et al.
IEEE Transactions on Neural Networks