Saving results to a pickle file becomes slow for a large number of time steps
Opened this issue · 1 comment
nicolasdlss commented
When the number of time steps becomes large, e.g. 10 000, saving to a pickle file becomes slow and requires a lot of memory:
Current approach
- CoCoNuT stores data in memory, including NumPy arrays with the displacement, pressure and traction for every time step
- CoCoNuT writes the data to a pickle file (binary), controlled by the save_results parameter
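The current approach can be sketched roughly as follows (a minimal illustration with hypothetical names, not CoCoNuT's actual code): all time steps are accumulated in memory and pickled in a single dump at the end.

```python
import pickle
import numpy as np

# Hypothetical sketch of the approach described above: accumulate
# everything in memory, then write one large pickle file at the end.
results = {'displacement': [], 'pressure': [], 'traction': []}

for n in range(3):  # stand-in for the time-step loop
    results['displacement'].append(np.full(4, float(n)))
    results['pressure'].append(np.full(4, 2.0 * n))
    results['traction'].append(np.full(4, 3.0 * n))

# A single dump at the end: both memory use and write time grow with
# the total number of time steps.
with open('results.pickle', 'wb') as f:
    pickle.dump(results, f)

with open('results.pickle', 'rb') as f:
    loaded = pickle.load(f)
print(len(loaded['pressure']))  # 3
```

This is why the cost grows with the simulation length: the whole history is rewritten (and held in memory) rather than only the newest time step.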
For a large number of time steps:
- A large amount of memory is required
- Writing the pickle file takes a long time
- Appending to large NumPy arrays/lists takes a long time
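One possible mitigation, sketched below under assumed names (this is not CoCoNuT's implementation), is to append one pickle record per time step to an open file instead of rewriting one large object: each write is then proportional to a single time step, and reading back simply unpickles records until EOF.

```python
import pickle
import numpy as np

def write_step(file_handle, displacement, pressure, traction):
    # Append one independent pickle record for this time step.
    pickle.dump({'displacement': displacement,
                 'pressure': pressure,
                 'traction': traction}, file_handle)

def read_steps(path):
    # Recover all time steps by unpickling records until end-of-file.
    steps = []
    with open(path, 'rb') as f:
        while True:
            try:
                steps.append(pickle.load(f))
            except EOFError:
                break
    return steps

with open('results.pickle', 'wb') as f:
    for n in range(3):  # stand-in for the time-step loop
        write_step(f,
                   displacement=np.full(4, float(n)),
                   pressure=np.full(4, 2.0 * n),
                   traction=np.full(4, 3.0 * n))

steps = read_steps('results.pickle')
print(len(steps))  # 3
```

With this pattern only the current time step needs to live in memory before being written, which addresses both the memory growth and the per-step write cost listed above.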
nicolasdlss commented
Additionally, the parameter save_results could be renamed to write_results to better reflect its function.