Design of Ordinal Optimization and Simulation-Based Policy Iteration
Date Issued
2005
Author(s)
Chang, Ming Chen
DOI
Abstract
Stationary Markov decision processes (MDPs) form an important class of stochastic optimal control problems, with many applications in economic, computing, communication, and manufacturing systems. In real problems the transition cost is uncertain and complex, and can only be obtained via Monte-Carlo simulation. For problems with such complex transition costs, simulation-based policy iteration (SBPI) has been developed.
SBPI alternates between policy evaluation and policy improvement steps. The policy evaluation step estimates, by simulation, the one-step-analysis cost-to-go under the given policy; simulation continues until the estimate reaches a specified accuracy level. The policy improvement step updates the policy by selecting, for every state, the best control among all admissible controls according to the estimated one-step-analysis cost-to-go, and composes the selected (state, control) pairs into an improved policy. The two steps iterate until the improved policy is identical to the current one. Simulation results show that the policy evaluation step is the most time-consuming part of SBPI.
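The evaluation/improvement loop described above can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the function and parameter names (`sbpi`, `step`, `n_sims`, `horizon`, `gamma`) are assumptions, and `step(s, a, rng)` stands in for the transition model that, per the abstract, is accessible only through simulation.

```python
import random

def sbpi(states, actions, step, n_sims=200, horizon=50, gamma=0.95, seed=0):
    """Minimal simulation-based policy iteration sketch (names illustrative).

    `step(s, a, rng)` returns (next_state, cost) and stands in for the
    transition model that is accessible only via Monte-Carlo simulation.
    """
    rng = random.Random(seed)
    policy = {s: actions[0] for s in states}
    while True:
        # Policy evaluation: Monte-Carlo estimate of the cost-to-go Q(s, a)
        # when control a is applied once and the policy is followed after.
        q = {}
        for s in states:
            for a in actions:
                total = 0.0
                for _ in range(n_sims):
                    st, disc, cost, act = s, 1.0, 0.0, a
                    for _ in range(horizon):
                        st, c = step(st, act, rng)
                        cost += disc * c
                        disc *= gamma
                        act = policy[st]
                    total += cost
                q[s, a] = total / n_sims
        # Policy improvement: pick the cheapest control in every state.
        new_policy = {s: min(actions, key=lambda a: q[s, a]) for s in states}
        if new_policy == policy:  # stop when the policy no longer changes
            return policy
        policy = new_policy
```

Because the inner evaluation loop draws `n_sims` trajectories per (state, control) pair, it dominates the running time, which matches the observation above that policy evaluation is the bottleneck.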
Based on this observation, and in order to select an optimal policy quickly, this thesis combines the concept of ordinal optimization (OO) with SBPI and proposes an OO-based policy iteration (OOBPI) algorithm. Traditional SBPI selects the optimal control only after obtaining an accurate estimate of the one-step-analysis cost-to-go. OOBPI instead compares the relative order of the one-step-analysis cost-to-go among the admissible controls of each state up to a specified confidence level, and the improved policy is composed of the selected best control in every state. The algorithm exploits the fact that the relative ranking converges much faster than the performance estimates themselves, reducing the simulation time spent in the policy evaluation step.
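The ordinal idea above, stopping once the ranking is trusted rather than once the estimates are accurate, can be sketched as below. This is a hedged illustration under assumed names (`ordinal_select`, `simulate`, `confidence`, `batch`): it approximates the probability of correct selection with a normal approximation on pairwise differences of sample means, which is one common way to realize OO-style stopping, not necessarily the construction used in the thesis.

```python
import math
import random
import statistics

def ordinal_select(simulate, actions, confidence=0.95, batch=30,
                   max_sims=5000, seed=0):
    """Pick the control with the smallest mean cost by ordinal comparison.

    Rather than estimating every mean accurately, keep sampling only until
    the observed ranking of the apparent best control reaches `confidence`,
    using a normal approximation for each pairwise difference of means.
    `simulate(a, rng)` returns one noisy cost sample; names illustrative.
    """
    rng = random.Random(seed)
    samples = {a: [simulate(a, rng) for _ in range(batch)] for a in actions}
    while True:
        means = {a: statistics.fmean(samples[a]) for a in actions}
        best = min(actions, key=means.get)
        # Approximate P(best truly beats a) for every rival via z-scores.
        p_correct = 1.0
        for a in actions:
            if a == best:
                continue
            var = (statistics.variance(samples[best]) / len(samples[best])
                   + statistics.variance(samples[a]) / len(samples[a]))
            z = (means[a] - means[best]) / math.sqrt(var + 1e-12)
            p_correct *= 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        if p_correct >= confidence or sum(map(len, samples.values())) >= max_sims:
            return best
        for a in actions:  # ranking not yet trusted: draw another batch
            samples[a].extend(simulate(a, rng) for _ in range(batch))
```

When the gap between the best and second-best control is large relative to the simulation noise, the ranking settles after only a few batches, long before the individual cost-to-go estimates would meet a tight accuracy target.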
Based on the sufficient conditions for convergence proposed by Cooper and Henderson, we find a set of parameters that satisfies those conditions. Combining this with the approach proposed by C.H. Chen for finding a lower bound on the satisfactory confidence level, we provide a sufficient condition on how to set the lowest satisfactory confidence level at every iteration so that OOBPI converges to the optimal policy with probability 1.
Compared with the replacement model and the model implemented in SBPI, OOBPI efficiently finds a good-enough policy with comparable optimal-policy accuracy. As the model size increases, OOBPI also finds an optimal policy in a shorter CPU time.
Subjects
Monte-Carlo Simulation
Ordinal Optimization
Markov Decision Process
Type
thesis
File(s)
Name
ntu-94-R91921057-1.pdf
Size
23.31 KB
Format
Adobe PDF
Checksum
(MD5):c4288df11364073dc804f56b3d1d1a00
