
Rollout, Policy Iteration, and Distributed Reinforcement Learning (策略前展、策略迭代与分布式强化学习)

Publisher: Tsinghua University Press    Publication date: 2022-04-01
Format: 16开    Pages: 483
Price: ¥116.80 (16% off); list price ¥139.00

Rollout, Policy Iteration, and Distributed Reinforcement Learning: Edition Information

  • ISBN: 9787302599388
  • Barcode: 9787302599388; 978-7-302-59938-8
  • Binding: offset paper
  • Number of volumes: not listed
  • Weight: not listed

Rollout, Policy Iteration, and Distributed Reinforcement Learning: Highlights

Through this book, readers can learn about policy iteration in reinforcement learning, and in particular about recent developments and applications of rollout methods in distributed and multiagent settings. The book can serve as a one-semester textbook for senior undergraduate or graduate students in artificial intelligence, systems and control science, and related fields, and as a reference for practitioners engaged in related research.

Rollout, Policy Iteration, and Distributed Reinforcement Learning: Description

The book's main content is organized as follows: Chapter 1 covers the principles of dynamic programming; Chapter 2, rollout and policy improvement; Chapter 3, specialized rollout algorithms; Chapter 4, learning values and policies; and Chapter 5, infinite-horizon distributed and multiagent algorithms.

The groundbreaking AlphaZero Go program had a strong influence on this book. The book is built on the same core framework: policy iteration, neural network approximation of value and policy networks, parallel and distributed computation, and techniques for simplifying the lookahead minimization, while extending the range of problems to which these algorithms apply.

Distinctive features of the book include techniques for speeding up the policy improvement computation of reinforcement learning in distributed-computing and multiagent settings, a connection between rollout (one-step policy improvement) and the model predictive control (MPC) design methodology widely used in control systems, and applications of rollout to complex discrete and combinatorial optimization problems.

By reading the book, readers can learn about policy iteration in reinforcement learning, and in particular about recent developments and applications of rollout methods in distributed and multiagent settings. It can be used as a textbook for senior undergraduate or graduate students in artificial intelligence, systems and control science, and related fields, and as a reference for practitioners engaged in related research.
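To make the central idea concrete, the sketch below shows one-step rollout for a deterministic, finite-horizon discrete optimization problem, the setting treated in the book's early chapters: at each stage, every feasible control is scored by its stage cost plus the base heuristic's estimate of the cost to complete the trajectory, and the control with the smallest score is applied. This is a minimal illustrative sketch, not code from the book; the problem interface (feasible_controls, step, base_heuristic) is a hypothetical placeholder that a user would supply for a concrete problem.

```python
from typing import Callable, Iterable, List, Tuple

State = Tuple[int, ...]   # hypothetical state encoding
Control = int             # hypothetical control encoding


def rollout(
    x0: State,
    horizon: int,
    feasible_controls: Callable[[State, int], Iterable[Control]],
    step: Callable[[State, Control, int], Tuple[State, float]],
    base_heuristic: Callable[[State, int], float],
) -> Tuple[List[Control], float]:
    """One-step lookahead rollout with a base heuristic.

    At each stage k, every feasible control is evaluated by adding its stage
    cost to the base heuristic's estimate of the cost to complete the
    trajectory; the control with the smallest such Q-factor is applied.
    """
    x, plan, total_cost = x0, [], 0.0
    for k in range(horizon):
        best_u, best_q = None, float("inf")
        for u in feasible_controls(x, k):
            x_next, stage_cost = step(x, u, k)
            q = stage_cost + base_heuristic(x_next, k + 1)  # Q-factor estimate
            if q < best_q:
                best_u, best_q = u, q
        if best_u is None:
            break  # no feasible control at this stage
        x, stage_cost = step(x, best_u, k)
        total_cost += stage_cost
        plan.append(best_u)
    return plan, total_cost
```

Under the sequential improvement conditions discussed in the book (see Sections 2.3 and 3.3.1 of the table of contents below), the rollout trajectory costs no more than the trajectory produced by the base heuristic alone; the multiagent and distributed variants covered in later chapters address the case where the per-stage minimization becomes expensive because the control has many components.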

Rollout, Policy Iteration, and Distributed Reinforcement Learning: Table of Contents

1 Exact and Approximate Dynamic Programming Principles
1.1 AlphaZero, Off-Line Training, and On-Line Play
1.2 Deterministic Dynamic Programming
1.2.1 Finite Horizon Problem Formulation
1.2.2 The Dynamic Programming Algorithm
1.2.3 Approximation in Value Space
1.3 Stochastic Dynamic Programming
1.3.1 Finite Horizon Problems
1.3.2 Approximation in Value Space for Stochastic DP
1.3.3 Infinite Horizon Problems-An Overview
1.3.4 Infinite Horizon-Approximation in Value Space
1.3.5 Infinite Horizon-Policy Iteration, Rollout, and Newton's Method
1.4 Examples, Variations, and Simplifications
1.4.1 A Few Words About Modeling
1.4.2 Problems with a Termination State
1.4.3 State Augmentation, Time Delays, Forecasts, and Uncontrollable State Components
1.4.4 Partial State Information and Belief States
1.4.5 Multiagent Problems and Multiagent Rollout
1.4.6 Problems with Unknown Parameters-Adaptive Control
1.4.7 Adaptive Control by Rollout and On-Line Replanning
1.5 Reinforcement Learning and Optimal Control-Some Terminology
1.6 Notes and Sources
2 General Principles of Approximation in Value Space
2.1 Approximation in Value and Policy Space
2.1.1 Approximation in Value Space-One-Step and Multistep Lookahead
2.1.2 Approximation in Policy Space
2.1.3 Combined Approximation in Value and Policy Space
2.2 Approaches for Value Space Approximation
2.2.1 Off-Line and On-Line Implementations
2.2.2 Model-Based and Model-Free Implementations
2.2.3 Methods for Cost-to-Go Approximation
2.2.4 Methods for Expediting the Lookahead Minimization
2.3 Deterministic Rollout and the Policy Improvement Principle
2.3.1 On-Line Rollout for Deterministic Discrete Optimization
2.3.2 Using Multiple Base Heuristics-Parallel Rollout
2.3.3 The Simplified Rollout Algorithm
2.3.4 The Fortified Rollout Algorithm
2.3.5 Rollout with Multistep Lookahead
2.3.6 Rollout with an Expert
2.3.7 Rollout with Small Stage Costs and Long Horizon-Continuous-Time Rollout
2.4 Stochastic Rollout and Monte Carlo Tree Search
2.4.1 Simulation-Based Implementation of the Rollout Algorithm
2.4.2 Monte Carlo Tree Search
2.4.3 Randomized Policy Improvement by Monte Carlo Tree Search
2.4.4 The Effect of Errors in Rollout-Variance Reduction
2.4.5 Rollout Parallelization
2.5 Rollout for Infinite-Spaces Problems-Optimization Heuristics
2.5.1 Rollout for Infinite-Spaces Deterministic Problems
2.5.2 Rollout Based on Stochastic Programming
2.6 Notes and Sources
3 Specialized Rollout Algorithms
3.1 Model Predictive Control
3.1.1 Target Tubes and Constrained Controllability
3.1.2 Model Predictive Control with Terminal Cost
3.1.3 Variants of Model Predictive Control
3.1.4 Target Tubes and State-Constrained Rollout
3.2 Multiagent Rollout
3.2.1 Asynchronous and Autonomous Multiagent Rollout
3.2.2 Multiagent Coupling Through Constraints
3.2.3 Multiagent Model Predictive Control
3.2.4 Separable and Multiarmed Bandit Problems
3.3 Constrained Rollout-Deterministic Optimal Control
3.3.1 Sequential Consistency, Sequential Improvement, and the Cost Improvement Property
3.3.2 The Fortified Rollout Algorithm and Other Variations
3.4 Constrained Rollout-Discrete Optimization
3.4.1 General Discrete Optimization Problems
3.4.2 Multidimensional Assignment
3.5 Rollout for Surrogate Dynamic Programming and Bayesian Optimization
3.6 Rollout for Minimax Control
3.7 Notes and Sources
4 Learning Values and Policies
4.1 Parametric Approximation Architectures
4.1.1 Cost Function Approximation
4.1.2 Feature-Based Architectures
4.1.3 Training of Linear and Nonlinear Architectures
4.2 Neural Networks
4.2.1 Training of Neural Networks
4.2


Rollout, Policy Iteration, and Distributed Reinforcement Learning: About the Author

Dimitri P. Bertsekas (德梅萃·P. 博塞克斯) is a tenured professor at MIT, a member of the US National Academy of Engineering, and a guest professor at the Research Center for Complex and Networked Systems at Tsinghua University. He is an internationally renowned author in electrical engineering and computer science, with more than a dozen best-selling textbooks and monographs, including Nonlinear Programming, Network Optimization, Dynamic Programming, Convex Optimization, and Reinforcement Learning and Optimal Control.
