Term 5 lab works: Optimization Methods.
Derivative-free methods: no gradient needed, just increasingly sophisticated search over function values.
- Scalar (one-dimensional) optimization
  - naive (brute-force) section search
  - Fibonacci section search
  - golden section search (a minimal sketch follows this list)
- Hooke-Jeeves pattern search
- Nelder-Mead simplex method
- Evolutionary method(s)
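
As an illustration of the derivative-free group, here is a minimal sketch of golden section search in Python. The function name, tolerance, and the test function are illustrative only and are not taken from the lab code:

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):      # the minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                # the minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Example: the minimum of (x - 2)^2 on [0, 5] is at x = 2.
print(golden_section(lambda x: (x - 2) ** 2, 0.0, 5.0))
```

This version re-evaluates both interior points each iteration for clarity; a tuned implementation would cache one of the two function values, since the golden ratio lets each new interval reuse an old interior point.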
These methods need a gradient and, for Newton-type methods, a Hessian matrix.
- Gradient descent
  - naive (fixed step size)
  - with a dynamic "learning rate" (see the first sketch after this list)
  - the steepest descent algorithm
  - the conjugate gradient method
- Newtonian descent (Newton's method; see the second sketch after this list)
  - naive (full Newton step)
  - with a dynamic "learning rate"
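
A minimal sketch of gradient descent where the "learning rate" is chosen dynamically by a backtracking (Armijo) line search; this is one common way to implement a dynamic step size, not necessarily the one used in the labs, and all names and constants below are illustrative:

```python
import numpy as np

def gradient_descent(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4,
                     tol=1e-8, max_iter=1000):
    """Gradient descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha0
        # Backtracking: shrink the step until f decreases sufficiently.
        while f(x - alpha * g) > f(x) - c * alpha * g.dot(g):
            alpha *= beta
        x = x - alpha * g
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 10 * (y + 2)^2.
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
print(gradient_descent(f, grad, [0.0, 0.0]))  # roughly [1, -2]
```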
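
And a sketch of the naive (full-step) Newtonian descent, again only illustrative: the update solves the Newton system instead of inverting the Hessian explicitly, and the quadratic test problem is a hypothetical example, not from the lab assignments:

```python
import numpy as np

def newton_descent(grad, hess, x0, tol=1e-10, max_iter=100):
    """Plain Newton's method: x <- x - H(x)^(-1) * grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(hess(x), g)  # solve H * step = g
        x = x - step
    return x

# Example: the same quadratic as above converges in a single Newton step.
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])
print(newton_descent(grad, hess, [0.0, 0.0]))  # [1, -2]
```

The variant with a dynamic "learning rate" would scale the Newton step by a factor chosen at each iteration (damping) rather than always taking the full step.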
// to be announced
// to be announced