Table of Contents
Preface ix
1 Introduction 1
1.1 Optimization view on mathematical models 1
1.2 NLP models, black-box versus explicit expression 3
2 Mathematical modeling, cases 7
2.1 Introduction 7
2.2 Enclosing a set of points 7
2.3 Dynamic decision strategies 10
2.4 A black box design; a sugar centrifugal screen 13
2.5 Design and factorial or quadratic regression 15
2.6 Nonlinear optimization in economic models 17
2.6.1 Spatial economic-ecological model 18
2.6.2 Neoclassical dynamic investment model for cattle ranching 19
2.6.3 Several optima in environmental economics 19
2.7 Parameter estimation, model calibration, nonlinear regression 20
2.7.1 Learning of neural nets seen as parameter estimation 24
2.8 Summary and discussion points 26
2.9 Exercises 27
3 NLP optimality conditions 31
3.1 Intuition with some examples 31
3.2 Derivative information 35
3.2.1 Derivatives 36
3.2.2 Directional derivative 36
3.2.3 Gradient 37
3.2.4 Second-order derivative 38
3.2.5 Taylor 40
3.3 Quadratic functions 41
3.4 Optimality conditions, no binding constraints 45
3.4.1 First-order conditions 45
3.4.2 Second-order conditions 46
3.5 Optimality conditions, binding constraints 48
3.5.1 Lagrange multiplier method 49
3.5.2 Karush-Kuhn-Tucker conditions 52
3.6 Convexity 54
3.6.1 First-order conditions are sufficient 56
3.6.2 Local minimum point is global minimum point 57
3.6.3 Maximum point at the boundary of the feasible area 59
3.7 Summary and discussion points 60
3.8 Exercises 60
3.9 Appendix: Solvers for Examples 3.2 and 3.3 64
4 Goodness of optimization algorithms 67
4.1 Effectiveness and efficiency of algorithms 67
4.1.1 Effectiveness 68
4.1.2 Efficiency 69
4.2 Some basic algorithms and their goodness 70
4.2.1 Introduction 70
4.2.2 NLP local optimization: Bisection and Newton 71
4.2.3 Deterministic GO: Grid search, Piyavskii-Shubert 74
4.2.4 Stochastic GO: PRS, Multistart, Simulated Annealing 78
4.3 Investigating algorithms 84
4.3.1 Characteristics 85
4.3.2 Comparison of algorithms 87
4.4 Summary and discussion points 88
4.5 Exercises 89
5 Nonlinear Programming algorithms 91
5.1 Introduction 91
5.1.1 General NLP problem 91
5.1.2 Algorithms 91
5.2 Minimizing functions of one variable 93
5.2.1 Bracketing 93
5.2.2 Bisection 94
5.2.3 Golden Section search 95
5.2.4 Quadratic interpolation 97
5.2.5 Cubic interpolation 99
5.2.6 Method of Newton 100
5.3 Algorithms not using derivative information 101
5.3.1 Method of Nelder and Mead 102
5.3.2 Method of Powell 105
5.4 Algorithms using derivative information 106
5.4.1 Steepest descent method 107
5.4.2 Newton method 108
5.4.3 Conjugate gradient method 109
5.4.4 Quasi-Newton method 111
5.4.5 Inexact line search 113
5.4.6 Trust region methods 115
5.5 Algorithms for nonlinear regression 118
5.5.1 Linear regression methods 118
5.5.2 Gauss-Newton and Levenberg-Marquardt 120
5.6 Algorithms for constrained optimization 121
5.6.1 Penalty and barrier function methods 121
5.6.2 Gradient projection method 125
5.6.3 Sequential quadratic programming 130
5.7 Summary and discussion points 131
5.8 Exercises 132
6 Deterministic GO algorithms 137
6.1 Introduction 137
6.2 Deterministic heuristic, DIRECT 138
6.2.1 Selection for refinement 139
6.2.2 Choice for sampling and updating rectangles 141
6.2.3 Algorithm and illustration 142
6.3 Stochastic models and response surfaces 144
6.4 Mathematical structures 147
6.4.1 Concavity 148
6.4.2 Difference of convex functions, d.c. 149
6.4.3 Lipschitz continuity and bounds on derivatives 150
6.4.4 Quadratic functions 154
6.4.5 Bilinear functions 155
6.4.6 Multiplicative and fractional functions 156
6.4.7 Interval arithmetic 158
6.5 Global Optimization branch and bound 159
6.6 Examples from nonconvex quadratic programming 161
6.6.1 Example concave quadratic programming 162
6.6.2 Example indefinite quadratic programming 163
6.7 Cutting planes 165
6.8 Summary and discussion points 168
6.9 Exercises 169
7 Stochastic GO algorithms 171
7.1 Introduction 171
7.2 Random sampling in higher dimensions 172
7.2.1 All volume to the boundary 172
7.2.2 Loneliness in high dimensions 173
7.3 PRS- and Multistart-based methods 174
7.3.1 Pure Random Search as benchmark 174
7.3.2 Multistart as benchmark 176
7.3.3 Clustering to save on local searches 178
7.3.4 Tunneling and filled functions 180
7.4 Ideal and real, PAS and Hit and Run 183
7.5 Population algorithms 187
7.5.1 Controlled Random Search and Raspberries 188
7.5.2 Genetic algorithms 191
7.5.3 Particle swarms 195
7.6 Summary and discussion points 197
7.7 Exercises 197
References 199
Index 205