Seung Gu Kang, Jeff Weber, et al.
ACS Fall 2023
In this study, we discuss a baseline function for reducing the variance of natural policy gradient estimates, and derive a condition under which the variance-minimizing baseline coincides with the state value function. Outside this condition, however, the state value can differ considerably from the optimal baseline. For such cases, an extended version of the NTD algorithm is proposed, in which an auxiliary function is estimated to shift the baseline (the state value estimate in the original NTD) toward the optimal baseline. The proposed algorithm is applied to simple MDPs and a challenging pendulum swing-up problem. © International Symposium on Artificial Life and Robotics (ISAROB), 2008.
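The variance-reduction idea the abstract describes can be illustrated with a minimal sketch (our own illustration, not the paper's NTD algorithm): subtracting a state-dependent baseline from the return leaves the policy-gradient estimate unbiased, and choosing a baseline near the state value typically shrinks its variance. Here a one-state bandit with a softmax policy stands in for the MDP; all names and numbers are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.5                            # single softmax policy parameter
true_rewards = np.array([1.0, 0.2])    # expected reward of each action

def policy_probs(theta):
    """Softmax over logits [theta, 0]."""
    logits = np.array([theta, 0.0])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def grad_log_pi(a, theta):
    """d/dtheta log pi(a | theta) for the two-action softmax."""
    p0 = policy_probs(theta)[0]
    return (1.0 if a == 0 else 0.0) - p0

def estimate_gradient(baseline, n=100_000):
    """Monte Carlo policy-gradient estimate with a fixed baseline."""
    p = policy_probs(theta)
    actions = rng.choice(2, size=n, p=p)
    rewards = true_rewards[actions] + rng.normal(0.0, 0.1, size=n)
    g = np.array([grad_log_pi(a, theta) for a in actions]) * (rewards - baseline)
    return g.mean(), g.var()

mean_no_b, var_no_b = estimate_gradient(baseline=0.0)

# Use the state value (expected reward under the policy) as the baseline.
v = policy_probs(theta) @ true_rewards
mean_b, var_b = estimate_gradient(baseline=v)

# Both estimates target the same gradient; the baselined one has lower variance.
print(mean_no_b, mean_b)
print(var_no_b, var_b)
```

In this single-state example the state value is close to the optimal baseline; the paper's point is that in general MDPs the two can diverge, which motivates estimating an auxiliary correction as in the extended NTD.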
Shyam Marjit, Harshit Singh, et al.
WACV 2025
Benjamin N. Grosof
AAAI-SS 1993
Gaku Yamamoto, Hideki Tai, et al.
AAMAS 2008