Journal article

Generalized Dual Dynamic Programming for Infinite Horizon Problems in Continuous State and Action Spaces

J Warrington, PN Beuchat, J Lygeros

IEEE Transactions on Automatic Control | IEEE | Published: 2019


We describe a nonlinear generalization of dual dynamic programming (DP) theory and its application to value function estimation for deterministic control problems over continuous state and action spaces, in a discrete-time infinite horizon setting. We prove, using a Benders-type argument leveraging the monotonicity of the Bellman operator, that the result of a one-stage policy evaluation can be used to produce nonlinear lower bounds on the optimal value function that are valid over the entire state space. These bounds contain terms reflecting the functional form of the system's costs, dynamics, and constraints. We provide an iterative algorithm that produces successively better approximation…
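The abstract's core idea — tightening a lower bound on the optimal value function through one-stage Bellman backups, justified by the monotonicity of the Bellman operator — can be illustrated with a minimal sketch. This is not the paper's algorithm: the paper derives nonlinear bounds, while the sketch below uses simple affine cuts on an assumed toy problem (a 1-D linear system with convex quadratic cost and a discretized action set), where tangent cuts to the convex backup remain valid over the whole state space.

```python
import numpy as np

# Illustrative sketch only, not the paper's method. Toy problem data
# (dynamics, cost, discount, sample states) are assumptions.
gamma = 0.95                            # discount factor
f = lambda x, u: 0.9 * x + u            # deterministic dynamics
c = lambda x, u: x ** 2 + 0.1 * u ** 2  # convex stage cost
U = np.linspace(-2.0, 2.0, 41)          # discretized action set (assumption)

cuts = [(0.0, 0.0)]                     # (slope, intercept); V* >= 0 here


def V_lb(x):
    """Current lower bound: pointwise maximum of affine cuts."""
    return max(a * x + b for a, b in cuts)


def backup(x):
    """One-stage Bellman backup evaluated against the current lower bound."""
    return min(c(x, u) + gamma * V_lb(f(x, u)) for u in U)


# By monotonicity of the Bellman operator, backing up a lower bound yields
# another lower bound, so each tangent cut added below is globally valid and
# the approximation only improves from one sweep to the next.
eps = 1e-4
for _ in range(20):
    for x in np.linspace(-1.0, 1.0, 9):
        v = backup(x)
        slope = (backup(x + eps) - v) / eps   # finite-difference subgradient
        cuts.append((slope, v - slope * x))   # cut touches the backup at x
```

In the paper's setting the cuts are nonlinear, carrying terms that reflect the system's costs, dynamics, and constraints; the affine cuts here are only the simplest instance of the same lower-bounding mechanism.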

