Five notebooks are currently available:
- For a brief introduction to the NPZD model see this notebook.
- The EnKF parameter estimation notebook demonstrates parameter optimization using a stochastic Ensemble Kalman Filter (SEnKF), allowing the user to change the parameter estimation setup, the observations, or the EnKF configuration. It also contains a few preconfigured scenarios that highlight common issues in parameter optimization, such as parameter interdependence or underdetermined parameters. (A rough sketch of the filter's analysis step follows this list.)
- The EnKF state estimation notebook focuses on state estimation but is otherwise very similar to the EnKF parameter estimation notebook; it contains no predefined scenarios. A Matlab version of this notebook is also available. It does not run online but otherwise provides the same functionality as its Python equivalent.
- The evolutionary algorithm parameter estimation notebook introduces a different parameter estimation technique, based on a differential evolution algorithm.
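The sketch below is a minimal, illustrative version of a single stochastic EnKF analysis step. It is not code from the notebooks, and all names (`senkf_analysis`, `ensemble`, `obs`, `H`, `obs_err_std`, `rng`) are assumptions chosen for the example. The observation operator `H` selects which variables are observed; for parameter estimation, the "state" vector simply holds the parameters being estimated.

```python
import numpy as np

def senkf_analysis(ensemble, obs, H, obs_err_std, rng):
    """One stochastic EnKF analysis step (illustrative sketch only).

    ensemble    : (n_state, n_members) forecast ensemble
    obs         : (n_obs,) observation vector
    H           : (n_obs, n_state) linear observation operator
    obs_err_std : observation error standard deviation
    rng         : NumPy random generator, e.g. np.random.default_rng()
    """
    n_members = ensemble.shape[1]
    R = (obs_err_std ** 2) * np.eye(len(obs))

    # Ensemble anomalies (deviations from the ensemble mean)
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = H @ A

    # Sample covariances projected into observation space
    P_HT = A @ HA.T / (n_members - 1)     # P H^T
    HPHT = HA @ HA.T / (n_members - 1)    # H P H^T

    # Kalman gain
    K = P_HT @ np.linalg.inv(HPHT + R)

    # Perturbed observations: one noisy copy per member is what makes the filter "stochastic"
    obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, size=(len(obs), n_members))

    # Pull every member towards its perturbed observation
    return ensemble + K @ (obs_pert - H @ ensemble)
```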
Instead of cloning this repository or downloading the code, you can run the notebooks online using Binder; just follow this link.
- Instead of running the notebooks, open up the (static) notebooks on this webpage (linked above).
  - Note the shape of the cost function and the strong interdependence between the phytoplankton growth and mortality parameters in the EnKF parameter estimation notebook.
- Run the notebooks online.
  - Select the EnKF parameter estimation notebook (parameter_estimation_enkf.ipynb) and run a few of the scenarios.
  - Try out one of the other notebooks: the evolutionary algorithm parameter estimation notebook if you are more interested in parameter estimation, or the EnKF state estimation notebook.
    - For state estimation: modify the number of observations and the observed variables and note the effect on the estimation result.
    - For evolutionary algorithm parameter estimation: remove the pseudo-random number generator seed and run the notebook a few times. The variability in the results explains why this type of algorithm is typically run multiple times, often with different initial estimates. Can the variability be reduced by improving the initial parameter estimates? (A differential evolution sketch follows this list.)
- Fork this repository or download the code and run it locally.
  - Modify one of the EnKF notebooks to perform both state and parameter estimation together (a state augmentation sketch follows this list).
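Two quick sketches related to the last activities above. First, the variability of the evolutionary algorithm: the snippet below uses SciPy's `differential_evolution` as a stand-in for the notebook's optimizer, with a made-up cost function and bounds. Fixing the `seed` makes a run reproducible; dropping it reproduces the run-to-run spread discussed above.

```python
import numpy as np
from scipy.optimize import differential_evolution

def cost(params):
    # Placeholder cost: in the notebook this would be the misfit between an
    # NPZD model run with `params` and the observations.
    return float(np.sum((params - np.array([0.5, 0.1])) ** 2))

bounds = [(0.0, 2.0), (0.0, 1.0)]  # illustrative parameter bounds

# With seed=42 the result is reproducible; remove the seed (or vary it) to see
# the variability that motivates running the algorithm multiple times.
result = differential_evolution(cost, bounds, seed=42)
print(result.x, result.fun)
```

Second, joint state and parameter estimation: one common approach is state augmentation, in which the parameters are appended to the state vector so that the EnKF updates both at once. The shapes and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
state_ens = rng.random((4, 50))   # e.g. 4 NPZD state variables, 50 members
param_ens = rng.random((2, 50))   # e.g. 2 parameters being estimated

# Augment the state with the parameters so a single analysis updates everything.
augmented = np.vstack([state_ens, param_ens])

# After the analysis step (e.g. with a function like senkf_analysis above and an
# observation operator defined on the augmented vector), split the ensemble again:
# state_ens, param_ens = updated[:4], updated[4:]
```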
Feel free to use GitHub's discussions feature (link above) to provide feedback, or open an issue to report problems.
This project is licensed under the MIT License - see the LICENSE file for details.