Girmaw Abebe Tadesse, Celia Cintas, et al.
ICML 2020
Building a robust predictive model requires an array of steps such as data imputation, feature transformation, estimator selection, hyper-parameter search, and ensemble construction, amongst others. Due to this vast, complex, and heterogeneous space of operations, off-the-shelf optimization methods cannot deliver feasible solutions within realistic response-time requirements. In practice, much of the predictive modeling process is conducted by experienced data scientists, who selectively make use of available tools. Over time, they develop an understanding of the behavior of operators and perform sequential decision making under uncertainty, colloquially referred to as educated guesswork. With an unprecedented demand for applications of supervised machine learning, there is a call for solutions that automatically search for a suitable combination of operators across these tasks while minimizing the modeling error. We introduce a novel system called APRL (Autonomous Predictive modeler via Reinforcement Learning), which uses past experience through reinforcement learning to optimize sequential decision making from within a set of diverse actions under a budget constraint. Our experiments demonstrate the superiority of the proposed approach over known AutoML systems that utilize Bayesian optimization or genetic algorithms.
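The abstract describes sequential selection of pipeline operators via reinforcement learning under a budget constraint. A minimal sketch of that idea, assuming a tabular epsilon-greedy Q-learning agent over a toy operator set (the operator names, reward function, and hyper-parameters below are illustrative assumptions, not the authors' APRL implementation):

```python
import random

# Illustrative operator set; APRL's real action space spans imputation,
# transformations, estimators, etc. (names here are assumptions).
ACTIONS = ["impute_mean", "scale_standard", "select_top_k", "fit_gbm"]

def toy_reward(pipeline):
    """Stand-in for validation performance: rewards diverse pipelines.
    A real system would score the fitted pipeline on held-out data."""
    return len(set(pipeline)) / len(ACTIONS)

def run_episode(q, budget=4, eps=0.2, alpha=0.5, rng=random):
    """One episode: choose up to `budget` operators epsilon-greedily,
    then nudge the Q-values of visited (state, action) pairs toward
    the observed end-of-episode reward."""
    pipeline = []
    for _ in range(budget):  # budget constraint on pipeline length
        state = tuple(pipeline)
        if state not in q:
            q[state] = {a: 0.0 for a in ACTIONS}
        if rng.random() < eps:  # explore
            action = rng.choice(ACTIONS)
        else:                   # exploit best known operator
            action = max(q[state], key=q[state].get)
        pipeline.append(action)
    reward = toy_reward(pipeline)
    for i, a in enumerate(pipeline):  # credit every step taken
        state = tuple(pipeline[:i])
        q[state][a] += alpha * (reward - q[state][a])
    return pipeline, reward

random.seed(0)
q_table = {}
for _ in range(200):
    run_episode(q_table)

# After training, the agent's preferred first operator:
best_first = max(q_table[()], key=q_table[()].get)
print(best_first)
```

The sketch captures only the control loop (budget-limited action selection plus learning from past episodes); APRL additionally handles the heterogeneous operator space and real validation-error rewards.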
Jayaraman J. Thiagarajan, Bindya Venkatesh, et al.
AAAI 2020
Raúl Fernández Díaz, Lam Thanh Hoang, et al.
IRB-AI-DD 2025
Nicholas Heller, Angelica Bartholomew, et al.
Urologic Oncology: Seminars and Original Investigations