Guo-Jun Qi, Charu Aggarwal, et al.
IEEE TPAMI
In this paper, we propose a novel adaptive step-size approach for policy gradient reinforcement learning. A new metric is defined for policy gradients that measures the effect of changes in the policy parameters on the average reward. Since the metric directly measures the effect on the average reward, the resulting policy gradient learning employs an adaptive step-size strategy that can effectively avoid falling into a stagnant phase caused by the complex structure of the average reward function with respect to the policy parameters. Two algorithms based on this metric are derived as variants of the ordinary and natural policy gradients. Their properties are compared with previously proposed policy gradients through numerical experiments with simple, but non-trivial, 3-state Markov Decision Processes (MDPs). We also show performance improvements over previous methods in on-line learning with more challenging 20-state MDPs. © 2010 Takamitsu Matsubara, Tetsuro Morimura, and Jun Morimoto.
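To make the idea of a step size tied to the effect on the average reward concrete, below is a minimal, illustrative sketch, not the paper's algorithm or metric: an exact average-reward computation on a hypothetical 3-state, 2-action MDP, with a policy-gradient update whose step size is chosen so the first-order predicted change in average reward per update stays near a target `delta`. The MDP tables, the finite-difference gradient, and the step-size rule are all assumptions made for the example.

```python
import numpy as np

n_states, n_actions = 3, 2

# Hypothetical transition probabilities P[s, a, s'] and rewards R[s, a] (assumed for illustration).
P = np.array([[[0.9, 0.1, 0.0], [0.1, 0.8, 0.1]],
              [[0.0, 0.9, 0.1], [0.5, 0.0, 0.5]],
              [[0.1, 0.0, 0.9], [0.3, 0.3, 0.4]]])
R = np.array([[0.0, 0.1],
              [0.2, 0.0],
              [0.0, 1.0]])

def softmax_policy(theta):
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def average_reward(theta):
    """Exact average reward: stationary distribution of the induced chain times expected reward."""
    pi = softmax_policy(theta)
    P_pi = np.einsum("sa,sat->st", pi, P)            # state-to-state transition matrix under pi
    evals, evecs = np.linalg.eig(P_pi.T)             # stationary distribution = left eigenvector for eigenvalue 1
    d = np.real(evecs[:, np.argmax(np.real(evals))])
    d = np.abs(d) / np.abs(d).sum()
    return float(d @ (pi * R).sum(axis=1))

def numerical_grad(theta, eps=1e-5):
    """Finite-difference gradient of the average reward w.r.t. the policy parameters."""
    g = np.zeros_like(theta)
    for idx in np.ndindex(theta.shape):
        t_plus, t_minus = theta.copy(), theta.copy()
        t_plus[idx] += eps
        t_minus[idx] -= eps
        g[idx] = (average_reward(t_plus) - average_reward(t_minus)) / (2 * eps)
    return g

theta = np.zeros((n_states, n_actions))
delta = 0.01   # assumed target first-order improvement in average reward per update
for it in range(100):
    g = numerical_grad(theta)
    # Adaptive step size: with theta <- theta + step * g, the first-order change in the
    # average reward is step * ||g||^2, so pick step to keep that change near delta.
    step = delta / (np.dot(g.ravel(), g.ravel()) + 1e-12)
    step = min(step, 10.0)  # cap to avoid huge jumps when the gradient is nearly zero
    theta += step * g
    if it % 20 == 0:
        print(f"iter {it:3d}  average reward {average_reward(theta):.4f}  step {step:.3g}")
```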
Yehuda Naveh, Michal Rimon, et al.
AAAI/IAAI 2006
Ankit Vishnubhotla, Charlotte Loh, et al.
NeurIPS 2023
Gang Liu, Michael Sun, et al.
ICLR 2025