Simulation-based Algorithms for Markov Decision Processes

Description

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. It is well-known that many real-world problems modeled by MDPs have huge state and/or action spaces, leading to the notorious curse of dimensionality that makes practical solution of the resulting models intractable. In other cases, the system of interest is complex enough that it is not feasible to specify some of the MDP model parameters explicitly, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based numerical algorithms have been developed recently to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include:


• multi-stage adaptive sampling;
• evolutionary policy iteration;
• evolutionary random policy search; and
• model reference adaptive search.


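For context, the classical dynamic-programming baseline that these sampling methods aim to bypass can be sketched in a few lines. The toy two-state MDP below, with all transition probabilities and rewards invented for illustration, shows why: each sweep of value iteration touches every (state, action, next-state) triple, which is precisely the computation that becomes intractable when the state or action space is huge.

```python
# Illustrative sketch (not from the book): exact value iteration on a tiny
# two-state MDP. All transition probabilities and rewards are invented.

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.2), (1, 0.8)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(0, 0.1), (1, 0.9)]},
}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 0.5}}
GAMMA = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-10):
    """Iterate the Bellman optimality operator to (near) its fixed point."""
    V = {s: 0.0 for s in P}
    while True:
        # One full sweep: maximize over actions, averaging over next states.
        V_new = {
            s: max(
                R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in P[s]
            )
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new

V_star = value_iteration(P, R, GAMMA)
```

For two states this converges instantly; the point is that the per-sweep cost scales with |S| x |A| x |S|, which the simulation-based algorithms above avoid by sampling rather than enumerating.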
Simulation-based Algorithms for Markov Decision Processes brings this state-of-the-art research together for the first time and presents it in a manner that makes it accessible to researchers with varying interests and backgrounds. In addition to providing numerous specific algorithms, the exposition includes both illustrative numerical examples and rigorous theoretical convergence results. The algorithms developed and analyzed differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning and will complement work in those areas. Furthermore, the authors show how to combine the various algorithms introduced with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality.
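The multi-armed-bandit idea behind upper confidence bound sampling (treated in Chapter 2) can be illustrated in miniature. The sketch below is not the book's algorithm verbatim: it shows the standard UCB index steering a fixed simulation budget toward the more promising of two simulated actions, whose Bernoulli reward probabilities are invented for the example.

```python
# Hedged illustration of the UCB allocation idea: spend a simulation budget
# across actions, favoring those whose empirical mean plus an exploration
# bonus (the UCB index) is largest.
import math
import random

def ucb_sample(reward_fns, budget, seed=0):
    """Allocate `budget` simulation calls over actions; return the
    empirical mean reward and the pull count of each action."""
    rng = random.Random(seed)
    n = len(reward_fns)
    counts = [0] * n
    sums = [0.0] * n
    for a in range(n):  # initialize: sample each action once
        sums[a] += reward_fns[a](rng)
        counts[a] = 1
    for t in range(n, budget):
        # UCB index: empirical mean + confidence-radius exploration bonus.
        a = max(
            range(n),
            key=lambda i: sums[i] / counts[i]
            + math.sqrt(2.0 * math.log(t) / counts[i]),
        )
        sums[a] += reward_fns[a](rng)
        counts[a] += 1
    return [sums[a] / counts[a] for a in range(n)], counts

# Two simulated actions with (unknown to the sampler) mean rewards 0.3, 0.7.
actions = [
    lambda rng: float(rng.random() < 0.3),
    lambda rng: float(rng.random() < 0.7),
]
means, counts = ucb_sample(actions, budget=2000)
```

After 2000 samples the better action receives the overwhelming share of the budget, which is the regret-minimizing behavior that the book's multi-stage adaptive sampling algorithm applies recursively at each state of the MDP.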


The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also serve as a valuable source of instruction and reference for students of control and operations research.

Information

Published by Springer
Published 01 May 2007
EAN13 9781846286902
License: All rights reserved
Language English
Contents
Selected Notation and Abbreviations

1  Markov Decision Processes
   1.1  Optimality Equations
   1.2  Policy Iteration and Value Iteration
   1.3  Rolling-horizon Control
   1.4  Survey of Previous Work on Computational Methods
   1.5  Simulation
   1.6  Preview of Coming Attractions
   1.7  Notes

2  Multistage Adaptive Sampling Algorithms
   2.1  Upper Confidence Bound Sampling
        2.1.1  Regret Analysis in Multi-armed Bandits
        2.1.2  Algorithm Description
        2.1.3  Alternative Estimators
        2.1.4  Convergence Analysis
        2.1.5  Numerical Example
   2.2  Pursuit Learning Automata Sampling
        2.2.1  Algorithm Description
        2.2.2  Convergence Analysis
        2.2.3  Application to POMDPs
        2.2.4  Numerical Example
   2.3  Notes

3  Population-based Evolutionary Approaches
   3.1  Evolutionary Policy Iteration
        3.1.1  Policy Switching
        3.1.2  Policy Mutation and Population Generation
        3.1.3  Stopping Rule
        3.1.4  Convergence Analysis
        3.1.5  Parallelization
   3.2  Evolutionary Random Policy Search
        3.2.1  Policy Improvement with Reward Swapping
        3.2.2  Exploration
        3.2.3  Convergence Analysis
   3.3  Numerical Examples
        3.3.1  A One-dimensional Queueing Example
        3.3.2  A Two-dimensional Queueing Example
   3.4  Extension to Simulation-based Setting
   3.5  Notes

4  Model Reference Adaptive Search
   4.1  The Model Reference Adaptive Search Method
        4.1.1  The MRAS0 Algorithm (Idealized Version)
        4.1.2  The MRAS1 Algorithm (Adaptive Monte Carlo Version)
        4.1.3  The MRAS2 Algorithm (Stochastic Optimization)
   4.2  Convergence Analysis
        4.2.1  MRAS0 Convergence
        4.2.2  MRAS1 Convergence
        4.2.3  MRAS2 Convergence
   4.3  Application to MDPs via Direct Policy Learning
        4.3.1  Finite-horizon MDPs
        4.3.2  Infinite-horizon MDPs
        4.3.3  MDPs with Large State Spaces
        4.3.4  Numerical Examples
   4.4  Application to Infinite-horizon MDPs in Population-based Evolutionary Approaches
        4.4.1  Algorithm Description
        4.4.2  Numerical Examples
   4.5  Application to Finite-horizon MDPs Using Adaptive Sampling
   4.6  Notes

5  Online Control Methods via Simulation
   5.1  Simulated Annealing Multiplicative Weights Algorithm
        5.1.1  Basic Algorithm Description
        5.1.2  Convergence Analysis
        5.1.3  Convergence of the Sampling Version of the Algorithm
        5.1.4  Numerical Example
        5.1.5  Simulated Policy Switching
   5.2  Rollout
        5.2.1  Parallel Rollout
   5.3  Hindsight Optimization
        5.3.1  Numerical Example
   5.4  Notes

References

Index
http://www.springer.com/978-1-84628-689-6