The first part uses the gradient descent method to solve a minimization problem. We look for the optimal step size, i.e. the one for which the method converges in the fewest iterations.
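As a point of reference, here is a minimal sketch of fixed-step gradient descent, assuming the minimized function is the usual quadratic f(x) = (1/2) x^T A x - b^T x with A symmetric positive definite (so the minimizer solves Ax = b); for this case the optimal fixed step is the classical 2 / (lambda_min + lambda_max). The test matrix and tolerance are illustrative assumptions, not necessarily the report's exact setup.

import numpy as np

def gradient_descent(A, b, step, n_iter=5000, tol=1e-10):
    # Fixed-step gradient descent on f(x) = 0.5 x^T A x - b^T x,
    # whose gradient is A x - b (the minimizer solves A x = b).
    x = np.zeros_like(b)
    for k in range(n_iter):
        grad = A @ x - b
        if np.linalg.norm(grad) < tol:
            break
        x = x - step * grad
    return x, k

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)        # symmetric positive definite test matrix
b = rng.standard_normal(50)

lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
step_opt = 2.0 / (lam[0] + lam[-1])  # optimal fixed step for a quadratic
x, k = gradient_descent(A, b, step_opt)
print(k, np.linalg.norm(A @ x - b))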
When the step size is too small, the algorithm needs more iterations to converge; when it is too large, the algorithm does not converge at all.
The error relative to the solution returned by numpy.linalg.solve decreases as the number of iterations increases, and grows with the dimension of the problem.
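A short experiment, reusing the quadratic test problem above, reproduces both observations: sweeping the step size shows slow convergence for small steps and divergence beyond 2 / lambda_max, and the error is measured against numpy.linalg.solve. The specific step values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x_ref = np.linalg.solve(A, b)        # reference solution

lam = np.linalg.eigvalsh(A)
step_opt = 2.0 / (lam[0] + lam[-1])

# Fixed-step descent converges if and only if 0 < step < 2 / lambda_max.
for step in [0.1 * step_opt, step_opt, 1.9 / lam[-1], 2.1 / lam[-1]]:
    x = np.zeros_like(b)
    for k in range(2000):
        grad = A @ x - b
        if np.linalg.norm(grad) < 1e-10:
            break
        x = x - step * grad
    print(f"step={step:.3e}  iterations={k + 1}  error={np.linalg.norm(x - x_ref):.3e}")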
In this part we explore two penalization methods. We consider a path between an origin and an endpoint, and we try to find the shortest path subject to constraints.
Let x be the path (a collection of points in a square of side length 1), J the function to minimize, and c the constraint.
We define penalization functions so that the path avoids the obstacle(s). Every function is discretized and implemented; a possible discretization is sketched below.
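A minimal sketch of one such discretization, assuming the path is stored as n points between fixed endpoints in the unit square and the functional J is approximated by the sum of squared segment lengths; the report's exact discretization may differ.

import numpy as np

def make_path(n_points, origin=(0.0, 0.0), end=(1.0, 1.0)):
    # Initial guess: n_points equally spaced on the straight segment.
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    return (1.0 - t) * np.asarray(origin) + t * np.asarray(end)

def length_energy(path):
    # Discretized objective J: sum of squared segment lengths.
    # Without constraints its minimizer is the straight line.
    seg = np.diff(path, axis=0)
    return np.sum(seg ** 2)

path = make_path(20)
print(length_energy(path))   # 2/19 for the straight diagonal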
As the penalization parameter ε becomes smaller, the penalization term becomes larger and the constraint is respected. The algorithm goes through the obstacle until the penalization weight is large enough, as illustrated in the sketch below.
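A sketch of a plausible exterior penalty for a single disk obstacle, assuming the penalized objective has the form J(x) + (1/ε) p(x), with p zero outside the disk and quadratic in the violation inside it; evaluating it on a straight path that crosses the obstacle shows the penalty term growing as ε shrinks. The obstacle position and radius are hypothetical.

import numpy as np

def exterior_penalty(path, center, radius):
    # Zero for points outside the disk obstacle,
    # quadratic in the violation for points inside it.
    d2 = np.sum((path - np.asarray(center)) ** 2, axis=1)
    return np.sum(np.maximum(0.0, radius ** 2 - d2) ** 2)

def penalized_objective(path, eps, center=(0.5, 0.5), radius=0.2):
    seg = np.diff(path, axis=0)
    return np.sum(seg ** 2) + exterior_penalty(path, center, radius) / eps

t = np.linspace(0.0, 1.0, 20)[:, None]
path = t * np.array([1.0, 1.0])          # straight line through the obstacle
for eps in [1.0, 0.1, 0.01]:
    print(eps, penalized_objective(path, eps))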
There also seems to be a threshold phenomenon with the second penalization method: the path passes through the obstacle until the penalization parameter is large enough for the constraint to be respected.
The first penalization method is an exterior penalization method: the minimizer of the penalized problem does not necessarily satisfy the constraints, only approximately, and the penalization function is 0 exactly on the admissible set of points. The second method, in contrast, is an interior penalization method: its minimizer satisfies the constraints, and the penalization acts as soon as we approach the constraint boundary, not only when we violate it.
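To make the contrast concrete, here is a sketch of a typical interior (barrier) penalty for the same disk obstacle: it is finite only on the admissible set and blows up as a point approaches the boundary from the feasible side. The inverse-margin barrier is one common choice, assumed here for illustration; the report's exact barrier may differ.

import numpy as np

def interior_penalty(path, center, radius):
    # Finite only when every point lies strictly outside the disk;
    # blows up as a point approaches the obstacle boundary.
    d2 = np.sum((path - np.asarray(center)) ** 2, axis=1)
    margin = d2 - radius ** 2
    if np.any(margin <= 0.0):
        return np.inf        # infeasible paths are rejected outright
    return np.sum(1.0 / margin)

Minimizing J(x) + ε · interior_penalty(x) with ε decreasing to 0 keeps the iterates feasible throughout, which is exactly the interior behavior described above.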
Finally, we observe the behavior of the method when more obstacles are added, as sketched below.
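With several obstacles, the natural extension is to sum one penalty term per obstacle; a sketch under the same assumed exterior penalty (the obstacle list is hypothetical).

import numpy as np

def exterior_penalty(path, center, radius):
    d2 = np.sum((path - np.asarray(center)) ** 2, axis=1)
    return np.sum(np.maximum(0.0, radius ** 2 - d2) ** 2)

# Hypothetical obstacle list: (center, radius) pairs in the unit square
obstacles = [((0.3, 0.3), 0.10), ((0.7, 0.6), 0.15)]

def total_penalty(path, obstacles):
    # The penalties of independent obstacles simply add up.
    return sum(exterior_penalty(path, c, r) for c, r in obstacles)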