Gosia Lazuka, Andreea Simona Anghel, et al.
SC 2024
In this study, we analyze the baseline function of a natural policy gradient estimator with respect to the estimator's variance, and derive a condition under which the variance-minimizing baseline coincides with the state value function. Outside this condition, however, the state value can differ considerably from the optimal baseline. For such cases, we propose an extended version of the NTD algorithm in which an auxiliary function is estimated to shift the baseline, which in the original NTD is the state value estimate, toward the optimal one. The proposed algorithm is applied to simple MDPs and to a challenging pendulum swing-up problem. © International Symposium on Artificial Life and Robotics (ISAROB). 2008.
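The abstract's central claim, that the variance-minimizing baseline is not in general the state value function, can be checked numerically. The following is a minimal sketch (not the paper's NTD algorithm) on a toy 2-armed bandit with a softmax policy: it computes the exact variance of the single-sample policy-gradient estimator s·(R − b), where s = d/dθ log π(a), for three baselines b. The closed form b* = E[s²R]/E[s²] used here is the standard variance-optimal baseline for such estimators; all function names are ours, for illustration only.

```python
import math

def policy(theta):
    """Softmax over logits (theta, 0) for a 2-armed bandit."""
    p0 = math.exp(theta) / (math.exp(theta) + 1.0)
    return [p0, 1.0 - p0]

def gradient_variance(theta, rewards, baseline):
    """Exact variance of the single-sample estimator s * (R - b)."""
    p = policy(theta)
    scores = [1.0 - p[0], -p[0]]  # d/dtheta log pi(a) for each arm
    mean = sum(pa * sa * (r - baseline)
               for pa, sa, r in zip(p, scores, rewards))
    second = sum(pa * sa ** 2 * (r - baseline) ** 2
                 for pa, sa, r in zip(p, scores, rewards))
    return second - mean ** 2

theta, rewards = 0.3, [1.0, 0.2]  # arbitrary toy parameters
p = policy(theta)
scores = [1.0 - p[0], -p[0]]

# State value V = E[R] under the policy.
value = sum(pa * r for pa, r in zip(p, rewards))
# Variance-optimal baseline b* = E[s^2 R] / E[s^2].
b_opt = (sum(pa * sa ** 2 * r for pa, sa, r in zip(p, scores, rewards))
         / sum(pa * sa ** 2 for pa, sa in zip(p, scores)))

var_none = gradient_variance(theta, rewards, 0.0)
var_value = gradient_variance(theta, rewards, value)
var_opt = gradient_variance(theta, rewards, b_opt)
```

On this toy problem b* and V differ noticeably, and the estimator variance under b* is strictly below the variance under the state-value baseline, mirroring the situation the extended NTD algorithm is designed to handle.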
Fahiem Bacchus, Joseph Y. Halpern, et al.
IJCAI 1995
Benjamin N. Grosof
AAAI-SS 1993
Kenneth L. Clarkson, Elad Hazan, et al.
Journal of the ACM