
Numerical Optimization

Author: J. Nocedal
Publisher: Science Press (科学出版社)    Publication date: 2016-07-09
Format: 16mo (16开)    Pages: 636

Numerical Optimization: Publication Information

  • ISBN: 9787030166753
  • Barcode: 9787030166753; 978-7-03-016675-3
  • Binding: standard offset paper
  • Volumes: not available
  • Weight: not available

Numerical Optimization: Highlights

Intended readership: advanced undergraduates in mathematics, and graduate students in operations research, applied mathematics, and related fields. This book is required reading for advanced undergraduates and graduate students in operations research and computational mathematics, and is a classic of numerical optimization.

Numerical Optimization: Description

The author is a professor at Northwestern University in the United States and serves as editor-in-chief or associate editor of several international journals. Drawing on his experience in teaching, research, and consulting, he has written a book suited to both students and practitioners. It gives a comprehensive, up-to-date account of most of the effective methods in continuous optimization. Each chapter begins with basic concepts and builds up gradually to the techniques in current use. The book emphasizes practical methods and contains a wealth of illustrations and exercises. It is accessible to a wide readership and can serve as a graduate textbook in engineering, operations research, mathematics, computer science, and business, as well as a handbook for researchers and practitioners in the field. Throughout, the author strives for an exposition that is readable, rich in content, and rigorous, and that brings out the practical value of numerical methods.

Numerical Optimization: Table of Contents

Preface
1 Introduction
Mathematical Formulation
Example: A Transportation Problem
Continuous versus Discrete Optimization
Constrained and Unconstrained Optimization
Global and Local Optimization
Stochastic and Deterministic Optimization
Optimization Algorithms
Convexity
Notes and References

2 Fundamentals of Unconstrained Optimization
2.1 What Is a Solution?
Recognizing a Local Minimum
Nonsmooth Problems
2.2 Overview of Algorithms
Two Strategies: Line Search and Trust Region
Search Directions for Line Search Methods
Models for Trust-Region Methods
Scaling
Rates of Convergence
R-Rates of Convergence
Notes and References
Exercises

3 Line Search Methods
3.1 Step Length
The Wolfe Conditions
The Goldstein Conditions
Sufficient Decrease and Backtracking
3.2 Convergence of Line Search Methods
3.3 Rate of Convergence
Convergence Rate of Steepest Descent
Quasi-Newton Methods
Newton's Method
Coordinate Descent Methods
3.4 Step-Length Selection Algorithms
Interpolation
The Initial Step Length
A Line Search Algorithm for the Wolfe Conditions
Notes and References
Exercises

4 Trust-Region Methods
Outline of the Algorithm
4.1 The Cauchy Point and Related Algorithms
The Cauchy Point
Improving on the Cauchy Point
The Dogleg Method
Two-Dimensional Subspace Minimization
Steihaug's Approach
4.2 Using Nearly Exact Solutions to the Subproblem
Characterizing Exact Solutions
Calculating Nearly Exact Solutions
The Hard Case
Proof of Theorem 4.3
4.3 Global Convergence
Reduction Obtained by the Cauchy Point
Convergence to Stationary Points
Convergence of Algorithms Based on Nearly Exact Solutions
4.4 Other Enhancements
Scaling
Non-Euclidean Trust Regions
Notes and References
Exercises

5 Conjugate Gradient Methods
5.1 The Linear Conjugate Gradient Method
Conjugate Direction Methods
Basic Properties of the Conjugate Gradient Method
A Practical Form of the Conjugate Gradient Method
Rate of Convergence
Preconditioning
Practical Preconditioners
5.2 Nonlinear Conjugate Gradient Methods
The Fletcher-Reeves Method
The Polak-Ribière Method
Quadratic Termination and Restarts
Numerical Performance
Behavior of the Fletcher-Reeves Method
Global Convergence
Notes and References
Exercises

6 Practical Newton Methods
6.1 Inexact Newton Steps
6.2 Line Search Newton Methods
Line Search Newton-CG Method
Modified Newton's Method
6.3 Hessian Modifications
Eigenvalue Modification
Adding a Multiple of the Identity
Modified Cholesky Factorization
Gershgorin Modification
Modified Symmetric Indefinite Factorization
6.4 Trust-Region Newton Methods
Newton-Dogleg and Subspace-Minimization Methods
Accurate Solution of the Trust-Region Problem
Trust-Region Newton-CG Method
Preconditioning the Newton-CG Method
Local Convergence of Trust-Region Newton Methods
Notes and References
Exercises

7 Calculating Derivatives
7.1 Finite-Difference Derivative Approximations
Approximating the Gradient
Approximating a Sparse Jacobian
Approximating the Hessian
Approximating a Sparse Hessian
7.2 Automatic Differentiation
An Example
The Forward Mode
The Reverse Mode
Vector Functions and Partial Separability
Calculating Jacobians of Vector Functions
Calculating Hessians: Forward Mode
Calculating Hessians: Reverse Mode
Current Limitations
Notes and References
Exercises

8 Quasi-Newton Methods
8.1 The BFGS Method
Properties of the BFGS Method
Implementation
8.2 The SR1 Method
Properties of SR1 Updating
8.3 The Broyden Class
Properties of the Broyden Class
8.4 Convergence Analysis
Global Convergence of the BFGS Method
Superlinear Convergence of BFGS
Convergence Analysis of the SR1 Method
Notes and References
Exercises

9 Large-Scale Quasi-Newton and Partially Separable Optimization
9.1 Limited-Memory BFGS
Relationship with Conjugate Gradient Methods
9.2 General Limited-Memory Updating
Compact Representation of BFGS Updating
SR1 Matrices
Unrolling the Update
9.3 Sparse Quasi-Newton Updates
9.4 Partially Separable Functions
A Simple Example
Internal Variables
9.5 Invariant Subspaces and Partial Separability
Sparsity vs. Partial Separability
Group Partial Separability
9.6 Algorithms for Partially Separable Functions
Exploiting Partial Separability in Newton's Method
Quasi-Newton Methods for Partially Separable Functions
Notes and References
Exercises
……
10 Nonlinear Least-Squares Problems
11 Nonlinear Equations
12 Theory of Constrained Optimization
13 Linear Programming: The Simplex Method
14 Linear Programming: Interior-Point Methods
15 Fundamentals of Algorithms for Nonlinear Constrained Optimization
16 Quadratic Programming
17 Penalty, Barrier, and Augmented Lagrangian Methods
18 Sequential Quadratic Programming
A Background Material
References
Index

Numerical Optimization: About the Author

The author is a professor at Northwestern University in the United States and serves as editor-in-chief or associate editor of several international journals. Drawing on his experience in teaching, research, and consulting, he has written this book for both students and practitioners.
