This repository contains quadratic programs (QPs) arising from model predictive control in robotics, in a format suitable for qpbenchmark. Here is the report produced by this benchmarking tool:
- 📊 MPC test set results
The recommended process is to install the benchmark and all solvers in an isolated environment using conda:
conda env create -f environment.yaml
conda activate mpc_qpbenchmark
It is also possible to install the benchmark from PyPI.
HPIPM is not packaged, but it can be installed from source as follows:
- Clone BLASFEO:
git clone https://github.com/giaf/blasfeo.git
- From the BLASFEO directory, run:
make shared_library -j 4
- Check again that you are in your conda environment, then run:
cp -f ./lib/libblasfeo.so ${CONDA_PREFIX}/lib/
cp -f ./include/*.h ${CONDA_PREFIX}/include/
- Clone HPIPM:
git clone https://github.com/giaf/hpipm.git
- From the HPIPM directory, run:
make shared_library -j4 BLASFEO_PATH=${CONDA_PREFIX}
- Check again that you are in your conda environment, then run:
cp -f libhpipm.so ${CONDA_PREFIX}/lib/
cp -f ./include/*.h ${CONDA_PREFIX}/include/
- Go to hpipm/interfaces/python/hpipm_python and run:
pip install .
- Try to import the package in Python:
import hpipm_python.common as hpipm
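If the import fails, one way to diagnose the installation is to try loading the shared libraries directly with ctypes. This is only a sketch, not part of the benchmark: it assumes the copy destinations used in the steps above (`${CONDA_PREFIX}/lib`) and Linux-style `.so` names.

```python
import ctypes
import os


def shared_lib_path(name: str) -> str:
    """Build the expected path of a shared library inside the active conda env."""
    prefix = os.environ.get("CONDA_PREFIX", "/usr/local")
    return os.path.join(prefix, "lib", f"lib{name}.so")


def check_hpipm_install() -> None:
    """Try to load the BLASFEO and HPIPM shared libraries, reporting failures."""
    for name in ("blasfeo", "hpipm"):
        path = shared_lib_path(name)
        try:
            ctypes.CDLL(path)
            print(f"OK: {path}")
        except OSError:
            print(f"Missing or unloadable: {path}")


if __name__ == "__main__":
    check_hpipm_install()
```

If a library is reported missing, re-check the copy steps above from within the activated conda environment.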
Run the test set as follows:
python ./mpc_qpbenchmark.py run
The outcome is a standardized report comparing all available solvers against the different benchmark metrics. You can check out and post your own results in the Results forum.
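For context, qpbenchmark reports aggregate per-problem results using shifted geometric means. As an illustration of that metric (a sketch, not the benchmark's actual code), such a mean can be computed as:

```python
import math
from typing import Sequence


def shifted_geometric_mean(values: Sequence[float], shift: float = 10.0) -> float:
    """Shifted geometric mean: exp(mean(log(v + shift))) - shift.

    The shift damps the influence of near-zero values (e.g. tiny
    runtimes) on the aggregate, making comparisons more robust.
    """
    if not values:
        raise ValueError("need at least one value")
    log_mean = sum(math.log(v + shift) for v in values) / len(values)
    return math.exp(log_mean) - shift
```

For example, `shifted_geometric_mean([2.0, 2.0])` is exactly 2.0, while outliers in a list of runtimes pull the result up less sharply than an arithmetic mean would.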
The problems in this test set have been contributed by:
| Problems | Contributor | Details |
|---|---|---|
| QUADCMPC* | @paLeziart | Proposed in #1, details in this thesis |
| LIPMWALK* | @stephane-caron | Proposed in #3, details in this paper |
| WHLIPBAL* | @stephane-caron | Proposed in #4, details in this paper |
Here are some known areas of improvement for this benchmark:
- Cold start only: we don't evaluate warm-start performance for now.
- CPU thermal throttling: the benchmark currently does not check the status of CPU thermal throttling. Adding this feature is a good way to start contributing to the benchmark.
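As a sketch of what such a check could look like on Linux, the snippet below reads the standard cpufreq sysfs files; the throttling heuristic and the 5% tolerance are assumptions for illustration, not part of the benchmark:

```python
import glob


def is_throttled(cur_khz: int, max_khz: int, tolerance: float = 0.05) -> bool:
    """Heuristic: flag throttling when the current frequency is more than
    `tolerance` below the advertised maximum (tolerance is an assumption)."""
    return cur_khz < (1.0 - tolerance) * max_khz


def check_cpu_throttling() -> None:
    """Read per-CPU frequencies from sysfs (Linux cpufreq) and warn on throttling."""
    for policy in sorted(glob.glob("/sys/devices/system/cpu/cpufreq/policy*")):
        try:
            with open(f"{policy}/scaling_cur_freq") as f:
                cur_khz = int(f.read())
            with open(f"{policy}/cpuinfo_max_freq") as f:
                max_khz = int(f.read())
        except OSError:
            continue  # cpufreq interface not available on this machine
        if is_throttled(cur_khz, max_khz):
            print(f"warning: {policy} running at {cur_khz} kHz (max {max_khz} kHz)")
```

Running such a check before and after the benchmark would flag runs whose timings may have been skewed by a slowed-down CPU.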
Note that this test set was spun off to benefit from the availability of qpbenchmark and readily-available MPC QPs, but it does not fully reflect how QP solvers are used for MPC in production, notably because of the cold-start-only limitation.
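To illustrate why this matters, here is a self-contained sketch (a toy projected-gradient solver on a box-constrained QP, not any solver from the benchmark) showing that warm-starting from the previous MPC solution typically takes fewer iterations than a cold start when consecutive problems are similar:

```python
def solve_box_qp(P, q, lo, hi, x0, step=0.1, tol=1e-8, max_iters=10_000):
    """Projected gradient descent for min. 0.5 x'Px + q'x s.t. lo <= x <= hi.

    Toy solver for illustration only; returns (solution, iterations used).
    """
    n = len(q)
    x = list(x0)
    for it in range(max_iters):
        grad = [sum(P[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
        x_new = [min(max(x[i] - step * grad[i], lo[i]), hi[i]) for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, it + 1
        x = x_new
    return x, max_iters


if __name__ == "__main__":
    P = [[2.0, 0.0], [0.0, 2.0]]
    lo, hi = [0.0, 0.0], [1.0, 1.0]
    # Solve the QP at time step t, then a slightly perturbed QP at t + 1,
    # as happens along a receding-horizon MPC loop
    x_prev, _ = solve_box_qp(P, [-1.0, -1.0], lo, hi, [0.0, 0.0])
    _, cold_iters = solve_box_qp(P, [-1.1, -1.1], lo, hi, [0.0, 0.0])
    _, warm_iters = solve_box_qp(P, [-1.1, -1.1], lo, hi, x_prev)
    # Warm start converges in fewer iterations than cold start
    print(f"cold start: {cold_iters} iterations, warm start: {warm_iters}")
```

A benchmark that only measures cold starts therefore misses part of the picture for production MPC, where solvers are warm-started at every control cycle.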
If you use qpbenchmark in your work, please cite all its contributors as follows:
@software{qpbenchmark2024,
title = {{qpbenchmark: Benchmark for quadratic programming solvers available in Python}},
author = {Caron, Stéphane and Zaki, Akram and Otta, Pavel and Arnström, Daniel and Carpentier, Justin and Yang, Fengyu and Leziart, Pierre-Alexandre},
url = {https://github.com/qpsolvers/qpbenchmark},
license = {Apache-2.0},
version = {2.3.0},
year = {2024}
}
If you contribute to this repository, don't forget to add yourself to the BibTeX above and to CITATION.cff.
Related test sets that may be relevant to your use cases:
- Free-for-all: community-built test set, new problems welcome!
- Maros-Meszaros test set: a standard test set with problems designed to be difficult.