Section 2.7.1.0 - Basic Writeup
Section 2.7.1.0.1 - Part 1 - Katariya - Introduction to Optimization
Section 2.7.1.0.2 - Part 2 - Katariya - Local vs Global Optimization
Section 2.7.1.1 - Basic Concepts
Section 2.7.1.1.1 - Active set
Section 2.7.1.1.2 - Candidate solution
Section 2.7.1.1.3 - Constraint (mathematics)
Section 2.7.1.1.3.0 - Basic Writeup - Wikipedia - Constraint (mathematics)
Section 2.7.1.1.3.1 - Constrained optimization
Section 2.7.1.1.3.2 - Binary constraint
Section 2.7.1.1.4 - Corner solution
Section 2.7.1.1.5 - Feasible region
Section 2.7.1.1.6 - Global optimum
Section 2.7.1.1.7 - Local optimum
Section 2.7.1.1.8 - Maxima and minima
Section 2.7.1.1.9 - Slack variable
Section 2.7.1.1.10 - Continuous optimization
Section 2.7.1.1.11 - Discrete optimization
Section 2.7.1.2 - Major Subfields
Section 2.7.1.2.1 - Single Objective Optimization
Section 2.7.1.2.1.0 - Basic Writeup - Wikipedia - Mathematical Optimization Subfields
Section 2.7.1.2.1.1 - Convex Optimization
Section 2.7.1.2.1.1.0 - Basic Writeup - Wikipedia - Convex Optimization
Section 2.7.1.2.1.1.1 - Complexity Theory in Convex Optimization
Section 2.7.1.2.1.1.2 - Convex Optimization Techniques
Section 2.7.1.2.1.1.2.1 - Linear programming (LP)
Section 2.7.1.2.1.1.2.2 - Conic Optimization
Section 2.7.1.2.1.1.2.2.0 - Basic Writeup - Wikipedia - Conic Optimization
Section 2.7.1.2.1.1.2.2.1 - Second Order Cone Programming (SOCP)
Section 2.7.1.2.1.1.2.2.2 - Semidefinite programming (SDP)
Section 2.7.1.2.1.1.2.2.3 - Sum of Squares Optimization
Section 2.7.1.2.1.1.2.2.4 - Quadratic Programming
Section 2.7.1.2.1.1.2.2.5 - Quadratically Constrained Quadratic Programming
Section 2.7.1.2.1.1.2.2.6 - Convex Quadratic Minimization with Linear Constraints
Section 2.7.1.2.1.1.2.5 - Geometric programming
Section 2.7.1.2.1.1.2.6 - Entropy maximization with Appropriate Constraints
Section 2.7.1.2.1.1.2.7 - Basis pursuit
Section 2.7.1.2.1.1.2.7.0 - Basic Writeup - Wikipedia - Basis pursuit
Section 2.7.1.2.1.1.2.7.1 - Basis pursuit denoising (BPDN)
Section 2.7.1.2.1.1.2.7.1.0 - Basic Writeup - Wikipedia - Basis pursuit denoising (BPDN)
Section 2.7.1.2.1.1.2.7.1.1 - In-crowd algorithm
Section 2.7.1.2.1.1.2.8 - Linear matrix inequality
Section 2.7.1.2.1.1.2.9 - Bregman method
Section 2.7.1.2.1.1.2.10 - Proximal gradient method
Section 2.7.1.2.1.1.2.11 - Subgradient method
Section 2.7.1.2.1.1.2.12 - Biconvex optimization
Section 2.7.1.2.1.1.3 - Convex Optimization Classes of Methods
Section 2.7.1.2.1.1.3.1 - Bundle Methods
Section 2.7.1.2.1.1.3.2 - Sub-Gradient Projections
Section 2.7.1.2.1.1.3.3 - Interior Point Methods
Section 2.7.1.2.1.1.3.4 - Cutting Plane Methods
Section 2.7.1.2.1.1.3.5 - Ellipsoid Method
Section 2.7.1.2.1.1.3.6 - Subgradient Method
Section 2.7.1.2.1.1.3.7 - Dual Subgradient and Drift-Plus-Penalty Method
Section 2.7.1.2.1.2 - Linear Programming (Optimization)
Section 2.7.1.2.1.2.0 - Basic Writeup - Wikipedia - Linear Programming (Optimization)
Section 2.7.1.2.1.2.1 - Integer Programming (Optimization)
Section 2.7.1.2.1.2.2 - Algorithms for linear programming
Section 2.7.1.2.1.2.2.1 - Simplex algorithm
Section 2.7.1.2.1.2.2.1.0 - Basic Writeup - Wikipedia - Simplex algorithm
Section 2.7.1.2.1.2.2.1.1 - Bland's rule
Section 2.7.1.2.1.2.2.1.2 - Klee–Minty cube
Section 2.7.1.2.1.2.2.1.3 - Criss-cross algorithm
Section 2.7.1.2.1.2.2.1.4 - Big M method
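As a concrete anchor for the simplex-family entries above, a minimal sketch using SciPy's linprog (HiGHS backend); the objective and constraint data are invented for illustration.

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    # linprog minimizes, so the objective is negated. Data are illustrative only.
    from scipy.optimize import linprog

    c = [-3.0, -2.0]                 # negated objective coefficients
    A_ub = [[1.0, 1.0], [1.0, 3.0]]  # inequality constraint matrix
    b_ub = [4.0, 6.0]                # inequality right-hand sides
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
    print(res.x, -res.fun)           # optimizer (4, 0) and optimal value 12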
Section 2.7.1.2.1.2.2.2 - Interior point method
Section 2.7.1.2.1.2.2.2.0 - Basic Writeup - Wikipedia - Interior point method
Section 2.7.1.2.1.2.2.2.1 - Ellipsoid method
Section 2.7.1.2.1.2.2.2.2 - Karmarkar's algorithm
Section 2.7.1.2.1.2.2.2.3 - Mehrotra predictor–corrector method
Section 2.7.1.2.1.2.2.3 - Column generation
Section 2.7.1.2.1.2.2.4 - k-approximation of k-hitting set
Section 2.7.1.2.1.2.3 - Linear complementarity problem
Section 2.7.1.2.1.2.4 - Decompositions
Section 2.7.1.2.1.2.4.1 - Benders' decomposition
Section 2.7.1.2.1.2.4.2 - Dantzig–Wolfe decomposition
Section 2.7.1.2.1.2.4.3 - Theory of two-level planning
Section 2.7.1.2.1.2.4.4 - Variable splitting
Section 2.7.1.2.1.2.5 - Basic solution (linear programming)
Section 2.7.1.2.1.2.6 - Fourier–Motzkin elimination
Section 2.7.1.2.1.2.7 - Hilbert basis (linear programming)
Section 2.7.1.2.1.2.8 - LP-type problem
Section 2.7.1.2.1.2.9 - Linear inequality
Section 2.7.1.2.1.2.10 - Vertex enumeration problem
Section 2.7.1.2.1.3 - Quadratic programming
Section 2.7.1.2.1.3.0 - Basic Writeup - Wikipedia - Quadratic programming
Section 2.7.1.2.1.3.1 - Linear least squares (mathematics)
Section 2.7.1.2.1.3.2 - Total least squares
Section 2.7.1.2.1.3.3 - Frank–Wolfe algorithm
Section 2.7.1.2.1.3.4 - Sequential minimal optimization
Section 2.7.1.2.1.3.5 - Bilinear program
Section 2.7.1.2.1.4 - Fractional programming
Section 2.7.1.2.1.5 - Non-Linear programming
Section 2.7.1.2.1.5.0 - Basic Writeup - Wikipedia - Non-Linear programming
Section 2.7.1.2.1.5.1 - Special cases of nonlinear programming
Section 2.7.1.2.1.5.1.1 - Linear programming
Section 2.7.1.2.1.5.1.2 - Convex optimization
Section 2.7.1.2.1.5.1.3 - Geometric programming
Section 2.7.1.2.1.5.1.3.0 - Basic Writeup - Wikipedia - Geometric programming
Section 2.7.1.2.1.5.1.3.1 - Signomial
Section 2.7.1.2.1.5.1.3.2 - Posynomial
Section 2.7.1.2.1.5.1.4 - Quadratically constrained quadratic program
Section 2.7.1.2.1.5.1.5 - Linear-fractional programming
Section 2.7.1.2.1.5.1.5.0 - Basic Writeup - Wikipedia - Linear-fractional programming
Section 2.7.1.2.1.5.1.5.1 - Fractional programming
Section 2.7.1.2.1.5.1.6 - Nonlinear complementarity problem (NCP)
Section 2.7.1.2.1.5.1.7 - Least squares
Section 2.7.1.2.1.5.1.7.0 - Basic Writeup - Wikipedia - Least squares
Section 2.7.1.2.1.5.1.7.1 - Non-linear least squares
Section 2.7.1.2.1.5.1.7.2 - Gauss–Newton algorithm
Section 2.7.1.2.1.5.1.7.2.0 - Basic Writeup - Wikipedia - Gauss–Newton algorithm
Section 2.7.1.2.1.5.1.7.2.1 - BHHH algorithm
Section 2.7.1.2.1.5.1.7.2.2 - Generalized Gauss–Newton method
Section 2.7.1.2.1.5.1.7.3 - Levenberg–Marquardt algorithm
Section 2.7.1.2.1.5.1.7.4 - Iteratively reweighted least squares (IRLS)
Section 2.7.1.2.1.5.1.7.5 - Partial least squares
Section 2.7.1.2.1.5.1.7.5.0 - Basic Writeup - Wikipedia - Partial least squares
Section 2.7.1.2.1.5.1.7.5.1 - Non-linear iterative partial least squares (NIPALS)
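For the Gauss–Newton entries above, a minimal sketch of the plain Gauss–Newton iteration on an invented exponential-fit problem; the model, data, and iteration count are assumptions for illustration.

    import numpy as np

    t = np.linspace(0.0, 1.0, 20)                    # sample points
    y = 2.0 * np.exp(1.5 * t)                        # noiseless synthetic observations

    a, b = 1.0, 1.0                                  # initial parameter guess
    for _ in range(20):
        r = a * np.exp(b * t) - y                    # residual vector
        J = np.column_stack([np.exp(b * t),          # dr/da
                             a * t * np.exp(b * t)]) # dr/db
        step = np.linalg.solve(J.T @ J, J.T @ r)     # Gauss-Newton normal equations
        a, b = a - step[0], b - step[1]

    print(a, b)                                      # approaches (2.0, 1.5)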
Section 2.7.1.2.1.5.1.8 - Mathematical programming with equilibrium constraints
Section 2.7.1.2.1.5.1.9 - Univariate optimization
Section 2.7.1.2.1.5.1.9.1 - Golden section search
Section 2.7.1.2.1.5.1.9.2 - Successive parabolic interpolation
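A minimal sketch of golden section search for the univariate entries above; the bracketing interval, tolerance, and test function are illustrative choices.

    import math

    def golden_section(f, lo, hi, tol=1e-8):
        inv_phi = (math.sqrt(5.0) - 1.0) / 2.0      # ~0.618, the inverse golden ratio
        a = lo + (1.0 - inv_phi) * (hi - lo)        # interior probe points
        b = lo + inv_phi * (hi - lo)
        while hi - lo > tol:
            if f(a) < f(b):                         # minimum lies in [lo, b]
                hi, b = b, a
                a = lo + (1.0 - inv_phi) * (hi - lo)
            else:                                   # minimum lies in [a, hi]
                lo, a = a, b
                b = lo + inv_phi * (hi - lo)
        return 0.5 * (lo + hi)

    print(golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0))   # ~2.0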
Section 2.7.1.2.1.5.2 - General algorithms
Section 2.7.1.2.1.5.2.1 - Concepts
Section 2.7.1.2.1.5.2.1.1 - Descent direction
Section 2.7.1.2.1.5.2.1.2 - Guess value
Section 2.7.1.2.1.5.2.1.3 - Line search
Section 2.7.1.2.1.5.2.1.3.0 - Basic Writeup - Wikipedia - Line search
Section 2.7.1.2.1.5.2.1.3.1 - Backtracking line search
Section 2.7.1.2.1.5.2.1.3.2 - Wolfe conditions
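For the line-search concepts above, a minimal sketch of backtracking line search under the Armijo sufficient-decrease condition; the constants rho and c are conventional illustrative choices.

    import numpy as np

    def backtracking(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
        # Shrink alpha until f(x + alpha*d) <= f(x) + c * alpha * grad(x)^T d.
        fx, slope = f(x), grad(x) @ d
        while f(x + alpha * d) > fx + c * alpha * slope:
            alpha *= rho
        return alpha

    f = lambda x: (x ** 2).sum()
    grad = lambda x: 2 * x
    x = np.array([3.0, -4.0])
    d = -grad(x)                      # steepest-descent direction
    print(backtracking(f, grad, x, d))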
Section 2.7.1.2.1.5.2.2 - Gradient methods
Section 2.7.1.2.1.5.2.2.0 - Basic Writeup - Wikipedia - Gradient method
Section 2.7.1.2.1.5.2.2.1 - Gradient descent
Section 2.7.1.2.1.5.2.2.1.0 - Basic Writeup - Wikipedia - Gradient descent
Section 2.7.1.2.1.5.2.2.1.1 - Stochastic gradient descent
Section 2.7.1.2.1.5.2.2.2 - Landweber iteration
Section 2.7.1.2.1.5.2.3 - Successive linear programming (SLP)
Section 2.7.1.2.1.5.2.4 - Sequential quadratic programming (SQP)
Section 2.7.1.2.1.5.2.5 - Newton's method in optimization
Section 2.7.1.2.1.5.2.5.0 - Basic Writeup - Wikipedia - Newton's method in optimization
Section 2.7.1.2.1.5.2.5.1 - Kantorovich theorem
Section 2.7.1.2.1.5.2.5.2 - Newton fractal
Section 2.7.1.2.1.5.2.5.3 - Quasi-Newton method
Section 2.7.1.2.1.5.2.5.3.0 - Basic Writeup - Wikipedia - Quasi-Newton method
Section 2.7.1.2.1.5.2.5.3.1 - Broyden's method
Section 2.7.1.2.1.5.2.5.3.2 - Symmetric rank-one
Section 2.7.1.2.1.5.2.5.3.3 - Davidon–Fletcher–Powell formula
Section 2.7.1.2.1.5.2.5.3.4 - Broyden–Fletcher–Goldfarb–Shanno algorithm
Section 2.7.1.2.1.5.2.5.3.5 - Limited-memory BFGS method
Section 2.7.1.2.1.5.2.5.4 - Steffensen's method
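For the quasi-Newton entries above, a minimal sketch that minimizes the Rosenbrock test function with SciPy's BFGS implementation.

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.array([-1.2, 1.0])                       # classic starting point
    res = minimize(rosen, x0, jac=rosen_der, method="BFGS")
    print(res.x)                                     # converges near the minimizer (1, 1)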
Section 2.7.1.2.1.5.2.6 - Nonlinear conjugate gradient method
Section 2.7.1.2.1.5.2.7 - Derivative-free methods
Section 2.7.1.2.1.5.2.7.1 - Coordinate descent
Section 2.7.1.2.1.5.2.7.1.0 - Basic Writeup - Wikipedia - Coordinate descent
Section 2.7.1.2.1.5.2.7.1.1 - Adaptive coordinate descent
Section 2.7.1.2.1.5.2.7.1.2 - Random coordinate descent
Section 2.7.1.2.1.5.2.7.2 - Nelder–Mead method
Section 2.7.1.2.1.5.2.7.3 - Pattern search (optimization)
Section 2.7.1.2.1.5.2.7.4 - Powell's method
Section 2.7.1.2.1.5.2.7.5 - Rosenbrock methods
Section 2.7.1.2.1.5.2.8 - Augmented Lagrangian method
Section 2.7.1.2.1.5.2.9 - Ternary search
Section 2.7.1.2.1.5.2.10 - Tabu search
Section 2.7.1.2.1.5.2.11 - Guided Local Search
Section 2.7.1.2.1.5.2.12 - Reactive search optimization (RSO)
Section 2.7.1.2.1.5.2.13 - MM algorithm (Majorize-Minimization)
Section 2.7.1.2.1.5.2.14 - Least absolute deviations
Section 2.7.1.2.1.5.2.14.0 - Basic Writeup - Wikipedia - Least absolute deviations
Section 2.7.1.2.1.5.2.14.1 - Expectation–maximization algorithm
Section 2.7.1.2.1.5.2.14.1.0 - Basic Writeup - Wikipedia - Expectation–maximization algorithm
Section 2.7.1.2.1.5.2.14.1.1 - Ordered subset expectation maximization
Section 2.7.1.2.1.5.2.15 - Nearest neighbor search
Section 2.7.1.2.1.5.2.16 - Space mapping
Section 2.7.1.2.1.6 - Stochastic programming
Section 2.7.1.2.1.7 - Robust Optimization
Section 2.7.1.2.1.8 - Combinatorial Optimization
Section 2.7.1.2.1.9 - Stochastic Optimization
Section 2.7.1.2.1.10 - Infinite-dimensional Optimization
Section 2.7.1.2.1.11 - Heuristics Optimization
Section 2.7.1.2.1.12 - Meta-Heuristics Optimization (See Section 10.11.1.1 - Evolutionary Algorithms)
Section 2.7.1.2.1.13 - Constraint Programming
Section 2.7.1.2.1.14 - Disjunctive programming
Section 2.7.1.2.1.15 - Space mapping
Section 2.7.1.2.2 - Multi-Objective Optimization
Section 2.7.1.2.2.0 - Introductory Writeups
Section 2.7.1.2.2.0.1 - Basic Writeup - Wikipedia - Multi-Objective Optimization
Section 2.7.1.2.2.0.2 - Detailed Writeup - Wikipedia - Multi-Objective Optimization
Section 2.7.1.2.2.1 - No-Preference Methods
Section 2.7.1.2.2.1.0 - Basic Writeup - Miettinen - Introduction to Multiobjective Optimization: Noninteractive Approaches
Section 2.7.1.2.2.1.1 - Method of Global Criterion
Section 2.7.1.2.2.1.2 - Neutral Compromise Solution
Section 2.7.1.2.2.2 - A-Priori Methods
Section 2.7.1.2.2.2.0 - Basic Writeup - Miettinen - Introduction to Multiobjective Optimization: Noninteractive Approaches
Section 2.7.1.2.2.2.1 - Value Function Method
Section 2.7.1.2.2.2.2 - Lexicographic Ordering
Section 2.7.1.2.2.2.3 - Goal Programming
Section 2.7.1.2.2.3 - A Posteriori Methods
Section 2.7.1.2.2.3.0 - Basic Writeup - Wikipedia - Multi-Objective Optimization: A Posteriori Methods
Section 2.7.1.2.2.3.1 - Normal Boundary Intersection (NBI)
Section 2.7.1.2.2.3.2 - Modified Normal Boundary Intersection (NBIm)
Section 2.7.1.2.2.3.3 - Genetic Algorithm Based Normal Boundary Intersection (GANBI)
Section 2.7.1.2.2.3.4 - Normal Constraint (NC)
Section 2.7.1.2.2.3.5 - Successive Pareto Optimization (SPO)
Section 2.7.1.2.2.3.6 - Directed Search Domain
Section 2.7.1.2.2.3.7 - NSGA-II
Section 2.7.1.2.2.3.8 - Pareto Surface Generation (PGEN)
Section 2.7.1.2.2.3.9 - Indirect Optimization on the basis of Self-Organization (IOSO)
Section 2.7.1.2.2.3.10 - S-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA)
Section 2.7.1.2.2.3.11 - Approximation Guided Evolution
Section 2.7.1.2.2.3.12 - Reactive Search Optimization
Section 2.7.1.2.2.3.13 - Benson's Algorithm
Section 2.7.1.2.2.3.14 - Multi-Objective Particle Swarm Optimization (MOPSO)
Section 2.7.1.2.2.3.15 - Subpopulation Algorithm Based on Novelty
Section 2.7.1.2.2.4 - Interactive Methods
Section 2.7.1.2.2.4.0 - Basic Writeup - Miettinen, Ruiz, Wierzbicki - Introduction to Multiobjective Optimization: Interactive Approaches
Section 2.7.1.2.2.4.1 - Trade-Off Methods
Section 2.7.1.2.2.4.1.1 - Zionts and Wallenius Method (ZW Method)
Section 2.7.1.2.2.4.1.2 - Interactive Surrogate Worth Trade-off Method (ISWT)
Section 2.7.1.2.2.4.1.3 - Interactive Geoffrion, Dyer and Feinberg (GDF) Method
Section 2.7.1.2.2.4.1.4 - Sequential Proxy Optimization Technique (SPOT)
Section 2.7.1.2.2.4.1.5 - Gradient Based Interactive Step Trade-off Method (GRIST)
Section 2.7.1.2.2.4.2 - Reference Point Approaches
Section 2.7.1.2.2.4.3 - Classification-Based Methods
Section 2.7.1.2.2.4.3.0 - Basic Writeup - Miettinen, Ruiz, Wierzbicki - Introduction to Multiobjective Optimization: Interactive Approaches
Section 2.7.1.2.2.4.3.1 - Step Method (STEM)
Section 2.7.1.2.2.4.3.2 - Satisficing Trade-off Method (STOM)
Section 2.7.1.2.2.4.3.3 - NIMBUS Method
Section 2.7.1.2.2.4.3.4 - Interactive Reference Direction Algorithm
Section 2.7.1.2.2.4.3.5 - Interactive decision making approach (NIDMA)
Section 2.7.1.2.2.5 - Hybrid Methods
Section 2.7.1.2.2.5.0 - Basic Writeup - Goel,Deb - Hybrid Methods for Multi-Objective Evolutionary Algorithms
Section 2.7.1.2.2.5.1 - Posteriori Approach
Section 2.7.1.2.2.5.2 - Online Approach
Section 2.7.1.2.3 - Evolutionary Multimodal Optimization
Section 2.7.1.3 - Common Local and Global Computational Optimization Techniques
Section 2.7.1.3.0 - Introductory Writeups
Section 2.7.1.3.0.1 - Basic Writeup - Wikipedia - Mathematical Optimization: Computational Optimization Techniques
Section 2.7.1.3.0.2 - Detailed Writeup - Text Book - Yang - Computational Optimization, Methods and Algorithms
Section 2.7.1.3.1 - Optimization algorithms
Section 2.7.1.3.2 - Iterative Methods
Section 2.7.1.3.2.0 - Basic Writeup
Section 2.7.1.3.2.1 - Hessian Iterative Methods in Optimization
Section 2.7.1.3.2.1.0 - Basic Writeup - Wikipedia - Hessian Matrix: Use in Optimization
Section 2.7.1.3.2.1.1 - Newton's Method
Section 2.7.1.3.2.1.2 - Sequential Quadratic Programming
Section 2.7.1.3.2.1.3 - Interior Point Method
Section 2.7.1.3.2.2 - Gradient, Approximate Gradient and Sub-Gradient Methods
Section 2.7.1.3.2.2.1 - Coordinate Descent Methods
Section 2.7.1.3.2.2.2 - Conjugate gradient methods
Section 2.7.1.3.2.2.3 - Gradient Descent
Section 2.7.1.3.2.2.3.1 - Introductory Writeups
Section 2.7.1.3.2.2.3.1.1 - Detailed Writeups
Section 2.7.1.3.2.2.3.1.1.1 - Detailed Writeup - Ruder - Optimizing Gradient Descent
Section 2.7.1.3.2.2.3.1.1.2 - Detailed Writeup - Wikipedia - Gradient Descent
Section 2.7.1.3.2.2.3.2 - Visualizing and Animating Optimization Algorithms with Matplotlib
Section 2.7.1.3.2.2.3.3 - Batch Gradient Descent
Section 2.7.1.3.2.2.3.4 - Stochastic Gradient Descent (SGD)
Section 2.7.1.3.2.2.3.4.0 - Basic Writeup - Wikipedia - Stochastic Gradient Descent
Section 2.7.1.3.2.2.3.4.1 - Accelerated Gradient Descent
Section 2.7.1.3.2.2.3.4.2 - Nesterov Accelerated Gradient Descent
Section 2.7.1.3.2.2.3.4.3 - Adaptive Gradient Descent (AdaGrad)
Section 2.7.1.3.2.2.3.4.4 - Adaptive Delta (AdaDelta)
Section 2.7.1.3.2.2.3.4.5 - Root Mean Square Propagation (RMSProp)
Section 2.7.1.3.2.2.3.4.6 - Adaptive Moment Estimation (Adam)
Section 2.7.1.3.2.2.3.4.6.0 - Basic Writeup - Ruder - Optimizing Gradient Descent: Adam
Section 2.7.1.3.2.2.3.4.6.1 - Nesterov Adam (Nadam)
Section 2.7.1.3.2.2.3.4.6.2 - Adaptive Moment Maximization (AdaMax)
Section 2.7.1.3.2.2.3.4.7 - Kalman Based Stochastic Gradient Descent (kSGD)
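A minimal sketch contrasting two update rules from the sub-list above, vanilla gradient descent and Adam, on a toy quadratic; the objective and hyperparameters are illustrative (the latter follow common defaults).

    import numpy as np

    def grad(x):                                     # gradient of f(x) = 0.5 * ||x||^2
        return x

    x_gd = np.array([5.0, -3.0])                     # illustrative starting point
    x_adam = x_gd.copy()
    m, v = np.zeros(2), np.zeros(2)
    beta1, beta2, lr, eps = 0.9, 0.999, 0.1, 1e-8    # common default hyperparameters

    for t in range(1, 201):
        x_gd = x_gd - lr * grad(x_gd)                # vanilla gradient descent step
        g = grad(x_adam)
        m = beta1 * m + (1 - beta1) * g              # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g          # second-moment estimate
        m_hat = m / (1 - beta1 ** t)                 # bias corrections
        v_hat = v / (1 - beta2 ** t)
        x_adam = x_adam - lr * m_hat / (np.sqrt(v_hat) + eps)

    print(x_gd, x_adam)                              # both approach the minimizer at 0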
Section 2.7.1.3.2.2.4 - Subgradient Methods
Section 2.7.1.3.2.2.5 - Bundle Method Of Descent
Section 2.7.1.3.2.2.6 - Ellipsoid Method
Section 2.7.1.3.2.2.7 - Reduced gradient method (Frank–Wolfe)
Section 2.7.1.3.2.2.8 - Quasi-Newton Methods
Section 2.7.1.3.2.2.8.0 - Basic Writeup - Wikipedia - Quasi-Newton Method
Section 2.7.1.3.2.2.8.1 - Broyden–Fletcher–Goldfarb–Shanno (BFGS)
Section 2.7.1.3.2.2.8.2 - L-BFGS (Limited-Memory BFGS)
Section 2.7.1.3.2.2.9 - Simultaneous Perturbation Stochastic Approximation (SPSA)
Section 2.7.1.3.2.2.10 - Proximal Gradient Methods
Section 2.7.1.3.3 - Global Optimization
Section 2.7.1.3.3.0 - Basic Writeup - Wikipedia - Global Optimization
Section 2.7.1.3.3.1 - Deterministic Methods
Section 2.7.1.3.3.1.0 - Introductory Writeups
Section 2.7.1.3.3.1.0.1 - Basic Writeup - Wikipedia - Deterministic Global Optimization
Section 2.7.1.3.3.1.0.2 - Detailed Writeup - Floudas - Deterministic Global Optimization: Advances in Theory and Applications
Section 2.7.1.3.3.1.1 - Inner-Outer Approximation
Section 2.7.1.3.3.1.2 - Cutting Plane Methods
Section 2.7.1.3.3.1.3 - Branch and Bound Method
Section 2.7.1.3.3.1.4 - Interval Arithmetic Method
Section 2.7.1.3.3.1.5 - Methods based on Real Algebraic Geometry
Section 2.7.1.3.3.2 - Stochastic Methods
Section 2.7.1.3.3.2.0 - Basic Writeup - Wikipedia - Stochastic Optimization
Section 2.7.1.3.3.2.1 - Direct Monte-Carlo Sampling
Section 2.7.1.3.3.2.2 - Stochastic Tunneling (STUN)
Section 2.7.1.3.3.2.3 - Parallel Tempering
Section 2.7.1.3.3.3 - Heuristic and Meta-Heuristic Methods
Section 2.7.1.3.3.3.0 - Basic Writeup - Wikipedia - Metaheuristic Methods
Section 2.7.1.3.3.3.1 - Simulated Annealing
Section 2.7.1.3.3.3.2 - Tabu Search
Section 2.7.1.3.3.3.3 - Evolutionary Algorithms
Section 2.7.1.3.3.3.4 - Differential Evolution
Section 2.7.1.3.3.3.5 - Swarm-Based Optimization Algorithms
Section 2.7.1.3.3.3.5.0 - Basic Writeup - Wikipedia - Swarm Intelligence
Section 2.7.1.3.3.3.5.1 - Particle Swarm Optimization
Section 2.7.1.3.3.3.5.2 - Social Cognitive Optimization
Section 2.7.1.3.3.3.5.3 - Multi-swarm Optimization
Section 2.7.1.3.3.3.5.4 - Ant Colony Optimization
Section 2.7.1.3.3.3.6 - Memetic Algorithms
Section 2.7.1.3.3.3.7 - Graduated Optimization
Section 2.7.1.3.3.3.8 - Reactive Search Optimization
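For the heuristic and metaheuristic entries above, a minimal simulated-annealing sketch on a one-dimensional Rastrigin-style objective; the cooling schedule and proposal width are illustrative, untuned choices.

    import math, random

    def f(x):
        return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

    x = 4.0                                  # start away from the global minimum at 0
    best = x
    T = 2.0                                  # initial temperature
    for _ in range(20000):
        cand = x + random.gauss(0.0, 0.5)    # random neighbour proposal
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = cand                         # accept downhill always, uphill with prob e^(-delta/T)
        if f(x) < f(best):
            best = x
        T = max(1e-3, T * 0.9995)            # geometric cooling
    print(best)                              # typically near 0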
Section 2.7.1.4 - Objective Functions
Section 2.7.1.4.1 - Loss Functions
Section 2.7.1.4.1.0 - Basic Writeup - Wikipedia - Loss Function
Section 2.7.1.4.1.1 - Regret Loss Functions
Section 2.7.1.4.1.2 - Quadratic Loss Functions
Section 2.7.1.4.1.3 - Common Machine Learning Loss Functions
Section 2.7.1.4.1.3.0 - Basic Writeup - Prince Grover - 5 Regression Loss Functions All Machine Learners Should Know
Section 2.7.1.4.1.3.1 - Regression Loss Functions
Section 2.7.1.4.1.3.1.1 - Mean Squared Error / Quadratic Error / L2 Loss
Section 2.7.1.4.1.3.1.2 - Root Mean Squared Error
Section 2.7.1.4.1.3.1.3 - Mean Absolute Error
Section 2.7.1.4.1.3.1.4 - Mean Absolute Percentage Error
Section 2.7.1.4.1.3.1.5 - Mean Squared Logarithmic Error Loss
Section 2.7.1.4.1.3.1.6 - Huber Loss / Smooth Mean Absolute Error Loss
Section 2.7.1.4.1.3.1.7 - Log cosh Loss
Section 2.7.1.4.1.3.1.8 - Quantile Loss (Loss in Quantile Regression)
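A minimal NumPy sketch of three regression losses from the list above (MSE, MAE, and Huber); the target and prediction vectors are invented for illustration.

    import numpy as np

    y_true = np.array([3.0, -0.5, 2.0, 7.0])
    y_pred = np.array([2.5, 0.0, 2.0, 8.0])
    err = y_pred - y_true

    mse = np.mean(err ** 2)                            # Mean Squared Error (L2 loss)
    mae = np.mean(np.abs(err))                         # Mean Absolute Error (L1 loss)
    delta = 1.0                                        # Huber threshold
    huber = np.mean(np.where(np.abs(err) <= delta,
                             0.5 * err ** 2,
                             delta * (np.abs(err) - 0.5 * delta)))
    print(mse, mae, huber)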
Section 2.7.1.4.1.3.2 - Classification Loss Functions
Section 2.7.1.4.1.3.2.1 - Square Loss
Section 2.7.1.4.1.3.2.2 - Hinge Loss
Section 2.7.1.4.1.3.2.3 - Generalized Smooth Hinge Loss
Section 2.7.1.4.1.3.2.4 - Logistic Loss
Section 2.7.1.4.1.3.2.5 - Cross entropy loss (Log Loss)
Section 2.7.1.4.1.3.2.6 - Sparse and Multi-Hot Sparse Categorical Cross entropy loss (Log Loss)
Section 2.7.1.4.1.3.2.7 - Kullback–Leibler divergence (Relative Entropy)
Section 2.7.1.4.1.3.2.8 - Poisson Loss
Section 2.7.1.4.1.3.2.9 - Cosine Proximity Loss
Section 2.7.1.4.1.3.2.10 - Exponential Loss
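A minimal NumPy sketch of two classification losses from the list above (binary cross-entropy and hinge); the labels and scores are invented for illustration.

    import numpy as np

    y = np.array([1.0, 0.0, 1.0])            # true binary labels
    p = np.array([0.9, 0.2, 0.6])            # predicted probabilities
    cross_entropy = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    y_pm = 2 * y - 1                         # labels in {-1, +1} for hinge loss
    scores = np.array([2.0, -0.5, 0.3])      # raw classifier scores
    hinge = np.mean(np.maximum(0.0, 1.0 - y_pm * scores))
    print(cross_entropy, hinge)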
Section 2.7.1.4.2 - Reward Functions
Section 2.7.1.4.3 - Profit Functions
Section 2.7.1.4.4 - Utility Functions
Section 2.7.1.4.4.0 - Basic Writeup - Wikipedia - Utility Functions
Section 2.7.1.4.4.1 - CES (Constant Elasticity of Substitution) Utility
Section 2.7.1.4.4.2 - Isoelastic Utility
Section 2.7.1.4.4.3 - Exponential Utility
Section 2.7.1.4.4.4 - Quasi-Linear Utility
Section 2.7.1.4.4.5 - Homothetic Preferences
Section 2.7.1.4.4.6 - Stone-Geary Utility
Section 2.7.1.4.4.7 - Von Neumann-Morgenstern Utility
Section 2.7.1.4.4.8 - Hyperbolic Absolute Risk Aversion
Section 2.7.1.4.5 - Demand Functions
Section 2.7.1.4.5.0 - Basic Writeup - Wikipedia - Demand Functions
Section 2.7.1.4.5.1 - Hicksian Demand
Section 2.7.1.4.5.2 - Inverse Demand Function
Section 2.7.1.4.5.3 - Marshallian Demand Function
Section 2.7.1.5 - Optimal control and infinite-dimensional optimization
Section 2.7.1.5.1 - Optimal Control
Section 2.7.1.5.1.0 - Basic Writeup - Wikipedia - Optimal Control
Section 2.7.1.5.1.1 - Pontryagin's minimum principle
Section 2.7.1.5.1.1.0 - Basic Writeup - Wikipedia - Pontryagin's minimum principle
Section 2.7.1.5.1.1.1 - Costate equations
Section 2.7.1.5.1.1.2 - Hamiltonian (control theory)
Section 2.7.1.5.1.2 - Types of problems
Section 2.7.1.5.1.2.1 - Linear-quadratic regulator
Section 2.7.1.5.1.2.2 - Linear-quadratic-Gaussian control (LQG)
Section 2.7.1.5.1.2.2.0 - Basic Writeup - Wikipedia - Linear-quadratic-Gaussian control (LQG)
Section 2.7.1.5.1.2.2.1 - Optimal projection equations
Section 2.7.1.5.1.3 - Algebraic Riccati equation
Section 2.7.1.5.1.4 - Bang–bang control
Section 2.7.1.5.1.5 - Covector mapping principle
Section 2.7.1.5.1.6 - Differential dynamic programming
Section 2.7.1.5.1.7 - DNSS point
Section 2.7.1.5.1.8 - Legendre–Clebsch condition
Section 2.7.1.5.1.9 - Pseudospectral optimal control
Section 2.7.1.5.1.9.0 - Basic Writeup - Wikipedia - Pseudospectral optimal control
Section 2.7.1.5.1.9.1 - Bellman pseudospectral method
Section 2.7.1.5.1.9.2 - Chebyshev pseudospectral method
Section 2.7.1.5.1.9.3 - Flat pseudospectral method
Section 2.7.1.5.1.9.4 - Gauss pseudospectral method
Section 2.7.1.5.1.9.5 - Legendre pseudospectral method
Section 2.7.1.5.1.9.6 - Pseudospectral knotting method
Section 2.7.1.5.1.9.7 - Ross–Fahroo pseudospectral method
Section 2.7.1.5.1.10 - Ross–Fahroo lemma
Section 2.7.1.5.1.11 - Ross' π lemma
Section 2.7.1.5.1.12 - Sethi model
Section 2.7.1.5.2 - Infinite-dimensional optimization
Section 2.7.1.5.2.0 - Basic Writeup - Wikipedia - Infinite-dimensional optimization
Section 2.7.1.5.2.1 - Semi-infinite programming
Section 2.7.1.5.2.2 - Shape and Topology optimization
Section 2.7.1.5.2.2.0 - Basic Writeups
Section 2.7.1.5.2.2.0.1 - Part 1 - Wikipedia - Shape Optimization
Section 2.7.1.5.2.2.0.2 - Part 2 - Wikipedia - Topology Optimization
Section 2.7.1.5.2.2.1 - Topological derivative
Section 2.7.1.5.2.3 - Generalized semi-infinite programming
Section 2.7.1.6 - Dealing with Uncertainty and Randomness
Section 2.7.1.6.1 - Approaches to deal with uncertainty
Section 2.7.1.6.1.1 - Markov decision process
Section 2.7.1.6.1.2 - Partially observable Markov decision process
Section 2.7.1.6.1.3 - Robust optimization
Section 2.7.1.6.1.3.0 - Basic Writeup - Wikipedia - Robust optimization
Section 2.7.1.6.1.3.1 - Wald's maximin model
Section 2.7.1.6.1.4 - Scenario optimization
Section 2.7.1.6.1.5 - Stochastic approximation
Section 2.7.1.6.1.6 - Stochastic optimization
Section 2.7.1.6.1.7 - Stochastic programming
Section 2.7.1.6.1.8 - Stochastic gradient descent
Section 2.7.1.6.2 - Random optimization algorithms
Section 2.7.1.6.2.0 - Basic Writeup - Wikipedia - Random optimization algorithms
Section 2.7.1.6.2.1 - Random search
Section 2.7.1.6.2.2 - Simulated annealing
Section 2.7.1.6.2.2.1 - Basic Writeup - Wikipedia - Simulated annealing
Section 2.7.1.6.2.2.2 - Adaptive simulated annealing
Section 2.7.1.6.2.2.3 - Great Deluge algorithm
Section 2.7.1.6.2.2.4 - Mean field annealing
Section 2.7.1.6.2.3 - Bayesian optimization
Section 2.7.1.6.2.4 - Evolutionary algorithm
Section 2.7.1.6.2.4.0 - Basic Writeup - Wikipedia - Evolutionary algorithm
Section 2.7.1.6.2.4.1 - Differential evolution
Section 2.7.1.6.2.4.2 - Evolutionary programming
Section 2.7.1.6.2.4.3 - Genetic algorithm
Section 2.7.1.6.2.4.4 - Genetic programming
Section 2.7.1.6.2.4.5 - Multiple Coordinated Agents Coevolution Evolutionary Algorithm (MCACEA)
Section 2.7.1.6.2.4.6 - Simultaneous perturbation stochastic approximation (SPSA)
Section 2.7.1.6.2.5 - Luus–Jaakola
Section 2.7.1.6.2.6 - Particle swarm optimization
Section 2.7.1.6.2.7 - Stochastic tunneling
Section 2.7.1.6.2.8 - Harmony search
Section 2.7.1.6.2.9 - Monte Carlo method
Section 2.7.1.6.2.9.0 - Basic Writeup - Wikipedia - Monte Carlo method
Section 2.7.1.6.2.9.1 - Variants of the Monte Carlo method
Section 2.7.1.6.2.9.1.1 - Direct simulation Monte Carlo
Section 2.7.1.6.2.9.1.2 - Quasi-Monte Carlo method
Section 2.7.1.6.2.9.1.3 - Markov chain Monte Carlo
Section 2.7.1.6.2.9.1.3.0 - Basic Writeup - Wikipedia - Markov chain Monte Carlo
Section 2.7.1.6.2.9.1.3.1 - Metropolis–Hastings algorithm
Section 2.7.1.6.2.9.1.3.1.0 - Basic Writeup - Wikipedia - Metropolis–Hastings algorithm
Section 2.7.1.6.2.9.1.3.1.1 - Multiple-try Metropolis
Section 2.7.1.6.2.9.1.3.1.2 - Wang and Landau algorithm
Section 2.7.1.6.2.9.1.3.1.3 - Equation of State Calculations by Fast Computing Machines
Section 2.7.1.6.2.9.1.3.1.4 - Multicanonical ensemble
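A minimal Metropolis–Hastings sketch targeting a standard normal density with a Gaussian random-walk proposal; the step size and chain length are illustrative choices.

    import math, random

    def log_target(x):                               # log of the unnormalized N(0, 1) density
        return -0.5 * x * x

    x, samples = 0.0, []
    for _ in range(50000):
        cand = x + random.gauss(0.0, 1.0)            # symmetric random-walk proposal
        log_alpha = log_target(cand) - log_target(x)
        if log_alpha >= 0.0 or random.random() < math.exp(log_alpha):
            x = cand                                 # accept with prob min(1, pi(cand)/pi(x))
        samples.append(x)

    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    print(mean, var)                                 # roughly 0 and 1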
Section 2.7.1.6.2.9.1.3.2 - Gibbs sampling
Section 2.7.1.6.2.9.1.3.3 - Coupling from the past
Section 2.7.1.6.2.9.1.3.4 - Reversible-jump Markov chain Monte Carlo
Section 2.7.1.6.2.9.1.4 - Dynamic Monte Carlo method
Section 2.7.1.6.2.9.1.4.0 - Basic Writeup - Wikipedia - Dynamic Monte Carlo method
Section 2.7.1.6.2.9.1.4.1 - Kinetic Monte Carlo
Section 2.7.1.6.2.9.1.4.2 - Gillespie algorithm
Section 2.7.1.6.2.9.1.5 - Particle filter
Section 2.7.1.6.2.9.1.5.0 - Basic Writeup - Wikipedia - Particle filter
Section 2.7.1.6.2.9.1.5.1 - Auxiliary particle filter
Section 2.7.1.6.2.9.1.6 - Reverse Monte Carlo
Section 2.7.1.6.2.9.1.7 - Demon algorithm
Section 2.7.1.6.2.9.2 - Pseudo-random number sampling
Section 2.7.1.6.2.9.2.0 - Basic Writeup - Wikipedia - Pseudo-random number sampling
Section 2.7.1.6.2.9.2.1 - Inverse transform sampling
Section 2.7.1.6.2.9.2.2 - Rejection sampling
Section 2.7.1.6.2.9.2.2.0 - Basic Writeup - Wikipedia - Rejection Sampling
Section 2.7.1.6.2.9.2.2.1 - Ziggurat algorithm
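A minimal rejection-sampling sketch drawing from the triangular density f(x) = 2x on [0, 1] with a uniform proposal and envelope constant M = 2; the target is invented for illustration.

    import random

    def sample_triangular():
        # Target f(x) = 2x on [0, 1]; proposal g = Uniform(0, 1); envelope M = 2.
        while True:
            x = random.random()
            if random.random() <= (2.0 * x) / 2.0:   # accept with prob f(x) / (M * g(x))
                return x

    draws = [sample_triangular() for _ in range(100000)]
    print(sum(draws) / len(draws))                   # mean of the target is 2/3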
Section 2.7.1.6.2.9.2.3 - Sampling from a normal distribution
Section 2.7.1.6.2.9.2.3.1 - Box–Muller transform
Section 2.7.1.6.2.9.2.3.2 - Marsaglia polar method
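A minimal sketch of the Box–Muller transform from the list above, mapping two independent uniforms to two independent standard normals.

    import math, random

    def box_muller():
        u1 = 1.0 - random.random()                   # in (0, 1], keeps log() finite
        u2 = random.random()
        r = math.sqrt(-2.0 * math.log(u1))
        return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

    zs = [z for _ in range(50000) for z in box_muller()]
    mean = sum(zs) / len(zs)
    var = sum((z - mean) ** 2 for z in zs) / len(zs)
    print(mean, var)                                 # roughly 0 and 1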
Section 2.7.1.6.2.9.2.4 - Convolution random number generator
Section 2.7.1.6.2.9.2.5 - Indexed search
Section 2.7.1.6.2.9.3 - Variance reduction techniques
Section 2.7.1.6.2.9.3.0 - Basic Writeup - Wikipedia - Variance reduction techniques
Section 2.7.1.6.2.9.3.1 - Antithetic variates
Section 2.7.1.6.2.9.3.2 - Control variates
Section 2.7.1.6.2.9.3.3 - Importance sampling
Section 2.7.1.6.2.9.3.4 - Stratified sampling
Section 2.7.1.6.2.9.3.5 - VEGAS algorithm
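A minimal sketch of one variance-reduction technique from the list above, antithetic variates, estimating E[exp(U)] for U ~ Uniform(0, 1); pairing U with 1 - U induces negative correlation between the two function values and so reduces the estimator's variance.

    import math, random

    n = 100000                                       # illustrative sample size
    plain = [math.exp(random.random()) for _ in range(n)]
    anti = []
    for _ in range(n // 2):
        u = random.random()
        anti.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))   # antithetic pair average

    print(sum(plain) / len(plain), sum(anti) / len(anti))      # both near e - 1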
Section 2.7.1.6.2.9.4 - Low-discrepancy sequence
Section 2.7.1.6.2.9.4.0 - Basic Writeup - Wikipedia - Low-discrepancy sequence
Section 2.7.1.6.2.9.4.1 - Constructions of low-discrepancy sequences
Section 2.7.1.6.2.9.5 - Event generator
Section 2.7.1.6.2.9.6 - Parallel tempering
Section 2.7.1.6.2.9.7 - Umbrella sampling
Section 2.7.1.6.2.9.8 - Hybrid Monte Carlo
Section 2.7.1.6.2.9.9 - Ensemble Kalman filter
Section 2.7.1.6.2.9.10 - Transition path sampling
Section 2.7.1.6.2.9.11 - Walk-on-spheres method
Section 2.7.1.7 - Theoretical Aspects
Section 2.7.1.7.1 - Convex analysis
Section 2.7.1.7.1.0 - Basic Writeup - Wikipedia - Convex analysis
Section 2.7.1.7.1.1 - Pseudoconvex function
Section 2.7.1.7.1.2 - Quasiconvex function
Section 2.7.1.7.1.3 - Subderivative
Section 2.7.1.7.1.4 - Geodesic convexity
Section 2.7.1.7.2 - Duality (optimization)
Section 2.7.1.7.2.0 - Basic Writeup - Wikipedia - Duality (optimization)
Section 2.7.1.7.2.1 - Perturbation function
Section 2.7.1.7.2.2 - Slater's condition
Section 2.7.1.7.2.3 - Duality gap
Section 2.7.1.7.2.4 - Weak duality
Section 2.7.1.7.2.5 - Strong duality
Section 2.7.1.7.2.6 - Fenchel's duality theorem
Section 2.7.1.7.2.7 - Wolfe duality
Section 2.7.1.7.2.8 - Total dual integrality
Section 2.7.1.7.2.9 - Shadow price
Section 2.7.1.7.2.10 - Dual Cone and Polar Cone
Section 2.7.1.7.3 - Farkas' lemma
Section 2.7.1.7.4 - Karush–Kuhn–Tucker conditions (KKT)
Section 2.7.1.7.4.0 - Basic Writeup - Wikipedia - Karush–Kuhn–Tucker conditions (KKT)
Section 2.7.1.7.4.1 - Fritz John conditions (variant of the KKT conditions)
Section 2.7.1.7.5 - Lagrange multiplier
Section 2.7.1.7.5.0 - Basic Writeup - Wikipedia - Lagrange multiplier
Section 2.7.1.7.5.1 - Lagrange multipliers on Banach spaces
Section 2.7.1.7.6 - Semi-continuity
Section 2.7.1.7.7 - Complementarity theory
Section 2.7.1.7.7.0 - Basic Writeup - Wikipedia - Complementarity theory
Section 2.7.1.7.7.1 - Mixed complementarity problem
Section 2.7.1.7.7.1.0 - Basic Writeup - Wikipedia - Mixed complementarity problem
Section 2.7.1.7.7.1.1 - Mixed linear complementarity problem
Section 2.7.1.7.7.1.2 - Lemke's algorithm
Section 2.7.1.7.8 - Danskin's theorem
Section 2.7.1.7.9 - Maximum theorem
Section 2.7.1.7.10 - No free lunch in search and optimization
Section 2.7.1.7.11 - Relaxation (approximation)
Section 2.7.1.7.11.0 - Basic Writeup - Wikipedia - Relaxation (approximation)
Section 2.7.1.7.11.1 - Lagrangian relaxation
Section 2.7.1.7.11.2 - Linear programming relaxation
Section 2.7.1.7.12 - Self-concordant function
Section 2.7.1.7.13 - Reduced cost
Section 2.7.1.7.14 - Hardness of approximation
Section 2.7.1.7.15 - Test Functions for Optimization
Section 2.7.1.7.15.0 - Basic Writeup - Wikipedia - Test Functions for Optimization
Section 2.7.1.7.15.1 - Rosenbrock function
Section 2.7.1.7.15.2 - Himmelblau's function
Section 2.7.1.7.15.3 - Rastrigin function
Section 2.7.1.7.15.4 - Shekel function
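A minimal sketch evaluating two of the benchmark functions above, with a derivative-free Nelder–Mead run on Himmelblau's function; the starting point is an illustrative choice.

    import numpy as np
    from scipy.optimize import minimize

    def himmelblau(p):
        x, y = p
        return (x * x + y - 11.0) ** 2 + (x + y * y - 7.0) ** 2

    def rastrigin(p):
        return 10.0 * p.size + np.sum(p * p - 10.0 * np.cos(2.0 * np.pi * p))

    res = minimize(himmelblau, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
    print(res.x, himmelblau(res.x))                  # near one of the four global minima
    print(rastrigin(np.zeros(3)))                    # global minimum value 0.0 at the origin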
Section 2.7.2.1 - Interpolation
Section 2.7.2.1.0 - Basic Writeup - Wikipedia - Interpolation
Section 2.7.2.1.1 - Nearest-neighbor interpolation
Section 2.7.2.2 - Polynomial interpolation
Section 2.7.2.2.0 - Basic Writeup - Wikipedia - Polynomial interpolation
Section 2.7.2.2.1 - Linear interpolation
Section 2.7.2.2.2 - Runge's phenomenon
Section 2.7.2.2.3 - Vandermonde matrix
Section 2.7.2.2.4 - Chebyshev polynomials
Section 2.7.2.2.5 - Chebyshev nodes
Section 2.7.2.2.6 - Lebesgue constant (interpolation)
Section 2.7.2.2.7 - Different forms for the interpolant
Section 2.7.2.2.7.1 - Newton polynomial
Section 2.7.2.2.7.1.0 - Basic Writeup - Wikipedia - Newton polynomial
Section 2.7.2.2.7.1.1 - Divided differences
Section 2.7.2.2.7.1.2 - Neville's algorithm
Section 2.7.2.2.7.2 - Lagrange polynomial
Section 2.7.2.2.7.3 - Bernstein polynomial
Section 2.7.2.2.7.4 - Brahmagupta's interpolation formula
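For the interpolant forms above, a minimal sketch of the Newton form built from divided differences; the nodes and sample values are invented for illustration.

    import numpy as np

    def divided_differences(x, y):
        n = len(x)
        coef = np.array(y, dtype=float)
        for j in range(1, n):                         # build the table column by column
            coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
        return coef                                   # Newton-form coefficients

    def newton_eval(coef, x_nodes, t):
        result = coef[-1]                             # Horner-like nested evaluation
        for c, xn in zip(coef[-2::-1], x_nodes[-2::-1]):
            result = result * (t - xn) + c
        return result

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = x ** 3 - 2.0 * x                              # samples of a cubic
    coef = divided_differences(x, y)
    print(newton_eval(coef, x, 1.5), 1.5 ** 3 - 2 * 1.5)   # interpolant reproduces the cubic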
Section 2.7.2.2.8 - Extensions to multiple dimensions
Section 2.7.2.2.8.1 - Bilinear interpolation
Section 2.7.2.2.8.2 - Trilinear interpolation
Section 2.7.2.2.8.3 - Bicubic interpolation
Section 2.7.2.2.8.4 - Tricubic interpolation
Section 2.7.2.2.8.5 - Padua points
Section 2.7.2.2.9 - Hermite interpolation
Section 2.7.2.2.10 - Abel–Goncharov interpolation
Section 2.7.2.3 - Spline interpolation
Section 2.7.2.3.0 - Basic Writeups
Section 2.7.2.3.0.1 - Part 1 - Wikipedia - Spline (mathematics)
Section 2.7.2.3.0.2 - Part 2 - Wikipedia - Spline interpolation
Section 2.7.2.3.1 - Perfect spline
Section 2.7.2.3.2 - Cubic Hermite spline
Section 2.7.2.3.2.0 - Basic Writeup - Wikipedia - Cubic Hermite spline
Section 2.7.2.3.2.1 - Centripetal Catmull–Rom spline
Section 2.7.2.3.3 - Monotone cubic interpolation
Section 2.7.2.3.4 - Hermite spline
Section 2.7.2.3.5 - Bézier curve
Section 2.7.2.3.5.0 - Basic Writeup - Wikipedia - Bézier curve
Section 2.7.2.3.5.1 - De Casteljau's algorithm
Section 2.7.2.3.5.2 - Composite Bézier curve
Section 2.7.2.3.5.3 - Generalizations to more dimensions
Section 2.7.2.3.5.3.1 - Bézier triangle
Section 2.7.2.3.5.3.2 - Bézier surface
Section 2.7.2.3.6 - B-spline
Section 2.7.2.3.6.0 - Basic Writeup - Wikipedia - B-spline
Section 2.7.2.3.6.1 - Box spline
Section 2.7.2.3.6.2 - Truncated power function
Section 2.7.2.3.6.3 - De Boor's algorithm
Section 2.7.2.3.7 - Non-uniform rational B-spline (NURBS)
Section 2.7.2.3.7.0 - Basic Writeup - Wikipedia - Non-uniform rational B-spline (NURBS)
Section 2.7.2.3.7.1 - T-spline
Section 2.7.2.3.8 - Kochanek–Bartels spline
Section 2.7.2.3.9 - Coons patch
Section 2.7.2.3.10 - M-spline
Section 2.7.2.3.11 - I-spline
Section 2.7.2.3.12 - Smoothing spline
Section 2.7.2.3.13 - Blossom (functional)
Section 2.7.2.4 - Trigonometric interpolation
Section 2.7.2.4.0 - Basic Writeup - Wikipedia - Trigonometric interpolation
Section 2.7.2.4.1 - Discrete Fourier transform
Section 2.7.2.4.1.0 - Basic Writeup - Wikipedia - Discrete Fourier transform
Section 2.7.2.4.1.1 - Relations between Fourier transforms and Fourier series
Section 2.7.2.4.2 - Fast Fourier transform (FFT)
Section 2.7.2.4.2.0 - Basic Writeup - Wikipedia - Fast Fourier transform (FFT)
Section 2.7.2.4.2.1 - Bluestein's FFT algorithm
Section 2.7.2.4.2.2 - Bruun's FFT algorithm
Section 2.7.2.4.2.3 - Cooley–Tukey FFT algorithm
Section 2.7.2.4.2.4 - Split-radix FFT algorithm
Section 2.7.2.4.2.5 - Goertzel algorithm
Section 2.7.2.4.2.6 - Prime-factor FFT algorithm
Section 2.7.2.4.2.7 - Rader's FFT algorithm
Section 2.7.2.4.2.8 - Bit-reversal permutation
Section 2.7.2.4.2.9 - Butterfly diagram
Section 2.7.2.4.2.10 - Twiddle factor
Section 2.7.2.4.2.11 - Cyclotomic fast Fourier transform
Section 2.7.2.4.2.12 - Methods for computing discrete convolutions with finite impulse response filters using the FFT
Section 2.7.2.4.2.12.1 - Overlap–add method
Section 2.7.2.4.2.12.2 - Overlap–save method
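A minimal sketch of the principle behind the overlap–add and overlap–save methods above: zero-pad, multiply spectra, and invert, so that circular convolution of the padded inputs equals the linear convolution. The signal and filter taps are invented for illustration.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])      # signal (illustrative)
    h = np.array([1.0, -1.0, 0.5])          # FIR filter taps (illustrative)

    n = len(x) + len(h) - 1                 # length of the full linear convolution
    X = np.fft.rfft(x, n)                   # zero-padded spectra
    H = np.fft.rfft(h, n)
    y_fft = np.fft.irfft(X * H, n)          # pointwise product inverts to the convolution
    print(np.allclose(y_fft, np.convolve(x, h)))   # True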
Section 2.7.2.4.3 - Sigma approximation
Section 2.7.2.4.4 - Dirichlet kernel
Section 2.7.2.4.5 - Gibbs phenomenon
Section 2.7.2.5 - Other interpolants
Section 2.7.2.5.1 - Simple rational approximation
Section 2.7.2.5.1.0 - Basic Writeup - Wikipedia - Simple rational approximation
Section 2.7.2.5.1.1 - Polynomial and rational function modeling
Section 2.7.2.5.2 - Wavelet
Section 2.7.2.5.2.0 - Basic Writeup - Wikipedia - Wavelet
Section 2.7.2.5.2.1 - Continuous wavelet
Section 2.7.2.5.2.2 - Transfer matrix
Section 2.7.2.5.2.3 - Discrete wavelet transform (DWT)
Section 2.7.2.5.2.4 - Multiresolution analysis (MRA)
Section 2.7.2.5.2.5 - Lifting scheme
Section 2.7.2.5.2.6 - Binomial QMF (BQMF)
Section 2.7.2.5.2.7 - Fast wavelet transform (FWT)
Section 2.7.2.5.2.8 - Complex wavelet transform
Section 2.7.2.5.2.9 - Non or undecimated wavelet transform
Section 2.7.2.5.2.10 - Newland transform
Section 2.7.2.5.2.11 - Wavelet packet decomposition (WPD)
Section 2.7.2.5.2.12 - Stationary wavelet transform (SWT)
Section 2.7.2.5.2.13 - Second generation wavelet transform (SGWT)
Section 2.7.2.5.2.14 - Dual-tree complex wavelet transform (DTCWT)
Section 2.7.2.5.3 - Inverse distance weighting
Section 2.7.2.5.4 - Radial basis function (RBF)
Section 2.7.2.5.4.0 - Basic Writeup - Wikipedia - Radial basis function (RBF)
Section 2.7.2.5.4.1 - Polyharmonic spline
Section 2.7.2.5.4.2 - Thin plate spline
Section 2.7.2.5.4.3 - Hierarchical RBF
Section 2.7.2.5.5 - Subdivision surface
Section 2.7.2.5.5.0 - Basic Writeup - Wikipedia - Subdivision surface
Section 2.7.2.5.5.1 - Catmull–Clark subdivision surface
Section 2.7.2.5.5.2 - Doo–Sabin subdivision surface
Section 2.7.2.5.5.3 - Loop subdivision surface
Section 2.7.2.5.6 - Slerp
Section 2.7.2.5.7 - Irrational base discrete weighted transform
Section 2.7.2.5.8 - Nevanlinna–Pick interpolation
Section 2.7.2.5.8.0 - Basic Writeup - Wikipedia - Nevanlinna–Pick interpolation
Section 2.7.2.5.8.1 - Pick matrix
Section 2.7.2.5.9 - Multivariate interpolation
Section 2.7.2.5.9.0 - Basic Writeup - Wikipedia - Multivariate interpolation
Section 2.7.2.5.9.1 - Barnes interpolation
Section 2.7.2.5.9.2 - Coons surface
Section 2.7.2.5.9.3 - Lanczos resampling
Section 2.7.2.5.9.4 - Natural neighbor interpolation
Section 2.7.2.5.9.5 - Nearest neighbor value interpolation
Section 2.7.2.5.9.6 - PDE surface
Section 2.7.2.5.9.7 - Transfinite interpolation
Section 2.7.2.5.9.8 - Trend surface analysis
Section 2.7.2.5.9.9 - Polynomial interpolation
Section 2.7.2.6 - Approximation theory
Section 2.7.2.6.0 - Basic Writeup - Wikipedia - Approximation theory
Section 2.7.2.6.1 - Orders of approximation
Section 2.7.2.6.2 - Lebesgue's lemma
Section 2.7.2.6.3 - Curve fitting
Section 2.7.2.6.3.0 - Basic Writeup - Wikipedia - Curve fitting
Section 2.7.2.6.3.1 - Vector field reconstruction
Section 2.7.2.6.4 - Modulus of continuity
Section 2.7.2.6.5 - Least squares (function approximation)
Section 2.7.2.6.6 - Minimax approximation algorithm
Section 2.7.2.6.6.0 - Basic Writeup - Minimax approximation algorithm
Section 2.7.2.6.6.1 - Equioscillation theorem
Section 2.7.2.6.7 - Unisolvent point set
Section 2.7.2.6.8 - Approximation by polynomials
Section 2.7.2.6.8.1 - Linear approximation
Section 2.7.2.6.8.2 - Bernstein polynomial
Section 2.7.2.6.8.3 - Bernstein's constant
Section 2.7.2.6.8.4 - Remez algorithm
Section 2.7.2.6.8.5 - Bernstein's inequality (mathematical analysis)
Section 2.7.2.6.8.6 - Mergelyan's theorem
Section 2.7.2.6.8.7 - Müntz–Szász theorem
Section 2.7.2.6.8.8 - Bramble–Hilbert lemma
Section 2.7.2.6.8.9 - Discrete Chebyshev polynomials
Section 2.7.2.6.8.10 - Favard's theorem
Section 2.7.2.6.9 - Approximation by Fourier series / trigonometric polynomials
Section 2.7.2.6.9.1 - Jackson's inequality
Section 2.7.2.6.9.1.0 - Basic Writeup - Wikipedia - Jackson's inequality
Section 2.7.2.6.9.1.1 - Bernstein's theorem (approximation theory)
Section 2.7.2.6.9.2 - Fejér's theorem
Section 2.7.2.6.9.3 - Erdős–Turán inequality
Section 2.7.2.6.10 - Different approximations
Section 2.7.2.6.10.1 - Moving least squares
Section 2.7.2.6.10.2 - Padé approximant
Section 2.7.2.6.10.2.0 - Basic Writeup - Wikipedia - Padé approximant
Section 2.7.2.6.10.2.1 - Padé table
Section 2.7.2.6.10.3 - Hartogs–Rosenthal theorem
Section 2.7.2.6.10.4 - Szász–Mirakyan operator
Section 2.7.2.6.10.5 - Szász–Mirakjan–Kantorovich operator
Section 2.7.2.6.10.6 - Baskakov operator
Section 2.7.2.6.10.7 - Favard operator
Section 2.7.2.6.11 - Surrogate model
Section 2.7.2.6.12 - Constructive function theory
Section 2.7.2.6.13 - Universal differential equation
Section 2.7.2.6.14 - Fekete problem
Section 2.7.2.6.15 - Carleman's condition
Section 2.7.2.6.16 - Krein's condition
Section 2.7.2.6.17 - Lethargy theorem
Section 2.7.2.6.18 - Wirtinger's representation and projection theorem
Section 2.7.2.7 - Miscellaneous
Section 2.7.2.7.1 - Extrapolation
Section 2.7.2.7.1.0 - Basic Writeup - Wikipedia - Extrapolation
Section 2.7.2.7.1.1 - Linear predictive analysis
Section 2.7.2.7.2 - Unisolvent functions
Section 2.7.2.7.3 - Regression analysis
Section 2.7.2.7.3.0 - Basic Writeup - Wikipedia - Regression analysis
Section 2.7.2.7.3.1 - Isotonic regression
Section 2.7.2.7.4 - Curve-fitting compaction
Section 2.7.3.0 - Basic Writeup - Wikipedia - Error Analysis
Section 2.7.3.1 - Approximation
Section 2.7.3.2 - Types of Errors
Section 2.7.3.2.1 - Approximation Error
Section 2.7.3.2.2 - Discretization error
Section 2.7.3.2.3 - Numerical error
Section 2.7.3.2.4 - Round-off error
Section 2.7.3.2.5 - Truncation error
Section 2.7.3.2.6 - False Precision
Section 2.7.3.3 - Condition number
Section 2.7.3.4 - Loss of significance
Section 2.7.3.5 - Numerical stability
Section 2.7.3.6 - Affine Arithmetic
Section 2.7.3.7 - Relative change and difference
Section 2.7.3.8 - Error propagation
Section 2.7.3.8.1 - Propagation of uncertainty
Section 2.7.3.8.2 - Significance arithmetic
Section 2.7.3.8.3 - Residual (numerical analysis)
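A minimal sketch tying several of the error-analysis entries above together: two algebraically equal formulas for the smaller root of x^2 - 10^8 x + 1 = 0 behave very differently in double precision, because one subtracts two nearly equal numbers (loss of significance) while the rationalized form avoids the cancellation.

    import math

    b, c = 1e8, 1.0
    sq = math.sqrt(b * b - 4.0 * c)
    naive = (b - sq) / 2.0                   # subtracts two nearly equal numbers
    stable = (2.0 * c) / (b + sq)            # rationalized form avoids cancellation
    print(naive, stable)                     # stable ~ 1e-8; naive has lost most digits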