Journal article

Unsupervised Basis Function Adaptation for Reinforcement Learning

Edward Barker, Charl Ras

Journal of Machine Learning Research | Microtome Publishing | Published: 2019

Abstract

When using reinforcement learning (RL) algorithms it is common, given a large state space, to introduce some form of approximation architecture for the value function (VF). The exact form of this architecture can have a significant effect on an agent's performance, however, and determining a suitable approximation architecture can often be a highly complex task. Consequently there is currently interest among researchers in the potential for allowing RL algorithms to adaptively generate (i.e. to learn) approximation architectures. One relatively unexplored method of adapting approximation architectures involves using feedback regarding the frequency with which an agent has visited certain states …
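The abstract's core idea is to adapt the VF approximation architecture using unsupervised feedback, namely how frequently the agent visits different parts of the state space. The sketch below is only a rough illustration of that general idea under simplifying assumptions, not the authors' algorithm: it uses a state-aggregation architecture on a toy one-dimensional random walk, learns the VF with TD(0), and periodically splits the most-visited aggregate cell so resolution concentrates where the agent spends its time. The environment, the splitting rule, and all names are illustrative.

```python
# Minimal sketch (not the paper's method): adapt a state-aggregation
# architecture for VF approximation using state-visit frequencies.
import random
import bisect

GAMMA, ALPHA = 0.95, 0.1

def make_bins(k):
    """Uniform aggregation of the state space [0, 1) into k cells."""
    return [i / k for i in range(1, k)]          # interior cell boundaries

def cell(boundaries, s):
    """Index of the aggregate cell containing state s."""
    return bisect.bisect_right(boundaries, s)

def step(s):
    """Toy environment: a bounded random walk with reward near s = 1."""
    s2 = min(max(s + random.uniform(-0.05, 0.05), 0.0), 1.0)
    return s2, (1.0 if s2 > 0.9 else 0.0)

boundaries = make_bins(4)                        # coarse initial architecture
values = [0.0] * (len(boundaries) + 1)           # one VF estimate per cell
visits = [0] * (len(boundaries) + 1)             # unsupervised visit counts

s = 0.5
for t in range(1, 50001):
    s2, r = step(s)
    i, j = cell(boundaries, s), cell(boundaries, s2)
    visits[i] += 1
    values[i] += ALPHA * (r + GAMMA * values[j] - values[i])   # TD(0) update
    s = s2

    # Periodically refine the architecture where the agent spends its time:
    # split the most-visited cell in two, reusing its current value estimate.
    if t % 10000 == 0:
        i = max(range(len(visits)), key=visits.__getitem__)
        lo = boundaries[i - 1] if i > 0 else 0.0
        hi = boundaries[i] if i < len(boundaries) else 1.0
        boundaries.insert(i, (lo + hi) / 2)
        values.insert(i, values[i])
        visits = [0] * len(values)               # reset the frequency feedback

print("cell boundaries after adaptation:", [round(b, 3) for b in boundaries])
```

Running the sketch shows the cell boundaries clustering in the regions the agent visits most, which is the kind of visit-frequency-driven refinement the abstract refers to.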

