College of Electrical Engineering and Computer Science / 電機資訊學院
Department of Electrical Engineering / 電機工程學系
Design of Ordinal Optimization-Based Value Iteration with Computing Budget Allocation

Date Issued
2005
Author(s)
Ho, Yuan-Hsiang
URI
http://ntur.lib.ntu.edu.tw//handle/246246/53238
Abstract
In many stationary Markov decision problems (StMDPs) that model real systems, for instance inventory control problems and computer and communication networks, both the transition probabilities and the cost function must be generated by computer simulation because of problem complexity and uncertainty. Simulation-Based Policy Iteration (SBPI) is a typical solution method for such problems. SBPI alternates policy evaluation and policy improvement steps. In the policy evaluation step, the cost-to-go (CTG) value of each state is estimated by multi-stage simulation; this is the most time-consuming step of SBPI, so reducing CPU time while retaining good policy accuracy (PA) requires shortening the policy evaluation step. Simulation experiments with SBPI show that even rough CTG estimates can lead to a good enough policy at much lower simulation cost: although the value estimates are rough, policy accuracy improves gradually over the iterations. This observation motivates our search for an improved algorithm. We first propose Simulation-Based Value Iteration (SBVI) for solving StMDPs. In its policy evaluation step, unlike SBPI, SBVI simulates only the stage-wise cost and adds the CTG estimate from the previous iteration to update the values in the current iteration. Its CTG estimates are rougher than SBPI's in early iterations, yet they still lead to a good enough policy with less simulation time, and the policy approaches or reaches the optimal policy as the iterations proceed. In a numerical study comparing SBPI and SBVI on a medium-dimension problem, SBVI reached the same level of policy accuracy with roughly two orders of magnitude less simulation time. However, SBVI's simulation time grows rapidly with PA in the high-accuracy range and with problem dimension. We therefore exploit the property that the ranking identifying the optimal policy forms early, even while the value estimates are still rough.
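The SBVI update described above can be illustrated with a minimal sketch. The toy MDP below (`STATES`, `sample_transition`, `sample_cost`, the discount factor, and all numeric values) is a hypothetical stand-in, not the problem studied in the thesis; only the update structure, estimating the one-stage cost by Monte-Carlo simulation and reusing the previous iteration's CTG estimates, reflects the method the abstract describes.

```python
import random

random.seed(0)
STATES = [0, 1, 2]
ACTIONS = [0, 1]
GAMMA = 0.9  # discount factor (assumed for this toy example)

def sample_transition(s, a):
    # Toy stochastic transition: action 0 drifts down, action 1 drifts up.
    step = random.choice([-1, 0, 1]) + (1 if a == 1 else -1)
    return min(max(s + step, 0), len(STATES) - 1)

def sample_cost(s, a):
    # Toy noisy stage cost, lowest near state 1.
    return abs(s - 1) + 0.5 * a + random.uniform(-0.1, 0.1)

def sbvi(iterations=50, replications=20):
    V = {s: 0.0 for s in STATES}  # CTG estimates from the previous iteration
    policy = {s: 0 for s in STATES}
    for _ in range(iterations):
        V_new = {}
        for s in STATES:
            q = {}
            for a in ACTIONS:
                # Monte-Carlo estimate of stage cost plus discounted CTG,
                # reusing V from the previous iteration instead of running
                # a multi-stage rollout (the saving SBVI exploits).
                total = 0.0
                for _ in range(replications):
                    total += sample_cost(s, a) + GAMMA * V[sample_transition(s, a)]
                q[a] = total / replications
            policy[s] = min(q, key=q.get)  # minimize expected cost
            V_new[s] = q[policy[s]]
        V = V_new
    return V, policy

values, policy = sbvi()
```

Each iteration simulates only one stage per state-action pair, so its cost is fixed per iteration; the multi-stage information accumulates across iterations through `V`.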
We then propose Ordinal Optimization-Based Value Iteration (OOBVI), which applies the concept of ordinal optimization (OO). OOBVI adopts ranking accuracy, measured by the Approximate Probability of Correct Selection (APCS), instead of estimation accuracy as the stopping criterion for the policy evaluation simulations. APCS lets the number of simulation replications adapt over the iterations, saving simulation time at the same desired policy accuracy. In simulations on a medium-dimension problem, OOBVI reached the same PA as SBVI with about one quarter of the simulation time. Further simulations indicate that, for the same desired PA, OOBVI's simulation time grows approximately linearly while SBVI's grows exponentially, so we anticipate that OOBVI is even more efficient on large-dimension problems. OOBVI applies the same ranking-accuracy threshold to every state in the policy evaluation step, but we consider this unnecessary. Our further idea is a state-dependent stopping criterion: combining computing budget allocation over states (CBA-S) with OOBVI can be expected to attain high PA under a demanding stopping criterion while clearly reducing simulation time. On a medium-dimension problem, OOBVI with CBA-S attained almost the same PA under a high stopping threshold with a tenfold saving in simulation time. In summary, the contribution of this research is an improvement of SBPI, the typical simulation-based solution method for StMDPs. Based on our finite simulation experiments, we expect OOBVI with CBA-S to be the most promising algorithm for reaching a given PA with the least simulation time.
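The ranking-based stopping idea behind OOBVI can be sketched as follows. This is a hedged illustration, not the thesis's algorithm: the APCS bound below is the standard Bonferroni-style lower bound from ordinal optimization under a normal approximation, and `sample_value`, the means, threshold, and batch sizes are toy assumptions. The point is the control flow: simulation stops once the probability that the current best-ranked action is truly best exceeds a threshold, rather than once value estimates are accurate.

```python
import math
import random

random.seed(1)

def sample_value(a):
    # Toy noisy action-value samples; action 0 has the lowest true mean cost.
    means = [1.0, 1.3, 1.6]
    return random.gauss(means[a], 0.5)

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def apcs(samples):
    # Bonferroni-style lower bound on P(correct selection):
    # 1 - sum over a != best of P(value_a < value_best), normal approximation.
    stats = []
    for xs in samples:
        n = len(xs)
        m = sum(xs) / n
        var = sum((x - m) ** 2 for x in xs) / max(n - 1, 1)
        stats.append((m, var / n))  # (sample mean, variance of the mean)
    best = min(range(len(stats)), key=lambda a: stats[a][0])
    bound = 1.0
    for a in range(len(stats)):
        if a == best:
            continue
        gap = stats[a][0] - stats[best][0]
        sd = math.sqrt(stats[a][1] + stats[best][1])
        bound -= 1.0 - normal_cdf(gap / sd)
    return best, bound

def select_best(threshold=0.95, batch=10, max_reps=2000):
    # Keep simulating in batches until the APCS bound clears the threshold
    # (or a replication budget is exhausted).
    samples = [[sample_value(a) for _ in range(batch)] for a in range(3)]
    while True:
        best, p = apcs(samples)
        if p >= threshold or sum(map(len, samples)) >= max_reps:
            return best, p
        for a, xs in enumerate(samples):  # equal allocation for simplicity
            xs.extend(sample_value(a) for _ in range(batch))

best, p = select_best()
```

In OOBVI this stopping rule replaces a fixed estimation-accuracy target in policy evaluation; CBA-S would further vary the threshold or budget per state instead of using one equal allocation as above.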
Subjects
演算法 (algorithms)
馬可夫決策 (Markov decision)
排序佳化 (ordinal optimization)
MDP
Markov decision process
ordinal optimization
Type
thesis
File(s)
Name: ntu-94-R92921007-1.pdf
Size: 23.31 KB
Format: Adobe PDF
Checksum (MD5): ca270bbc47e28ba034ae396f694e0ef5
