Hybrid reinforcement learning with expert state sequences
Xiaoxiao Guo, Shiyu Chang, et al.
AAAI 2019
Robust approximate bilinear programming for value function approximation
Marek Petrik, Shlomo Zilberstein
JMLR 2011
Value function approximation methods have been successfully used in many applications, but the prevailing techniques often lack useful a priori error bounds. We propose a new approximate bilinear programming formulation of value function approximation that employs global optimization. The formulation provides strong a priori guarantees on both robust and expected policy loss by minimizing specific norms of the Bellman residual. Solving a bilinear program optimally is NP-hard, but this worst-case complexity is unavoidable because Bellman-residual minimization is itself NP-hard. We describe and analyze the formulation as well as a simple approximate algorithm for solving bilinear programs. The analysis shows that this algorithm offers a convergent generalization of approximate policy iteration. We also briefly analyze the behavior of bilinear programming algorithms under incomplete samples. Finally, we demonstrate that the proposed approach can consistently minimize the Bellman residual on simple benchmark problems.
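For context on the "robust" guarantee the abstract refers to, the following is a minimal sketch of the standard ∞-norm Bellman-residual objective and the classical policy-loss bound it controls. The notation (feature matrix Φ, weights w, Bellman optimality operator T, discount γ, greedy policy π_v) is assumed for illustration and is not taken from the entry itself:

\[
  \min_{w} \;\bigl\| \Phi w - T(\Phi w) \bigr\|_{\infty},
  \qquad
  (Tv)(s) = \max_{a} \Bigl[ r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, v(s') \Bigr].
\]

% Classical robust policy-loss bound (Williams & Baird style): the loss of
% the policy greedy w.r.t. v is controlled by the Bellman residual of v.
\[
  \bigl\| v^{*} - v^{\pi_v} \bigr\|_{\infty}
  \;\le\; \frac{2}{1-\gamma}\, \bigl\| v - T v \bigr\|_{\infty}.
\]

Minimizing this residual norm directly, rather than iterating projections as in approximate policy iteration, is what yields an a priori bound on the resulting policy's loss.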
Arnold L. Rosenberg
Journal of the ACM
Harsha Kokel, Aamod Khatiwada, et al.
VLDB 2025
Michael Hersche, Mustafa Zeqiri, et al.
NeSy 2023