generate_sample_info.py
Create a /scenarios directory and a subdirectory for each sample and realization:
mkdir scenarios
mkdir xdd_parquet
cd scenarios
for i in {1..100}; do for j in {1..10}; do mkdir S${i}_${j}; done; done
Create symbolic links to all required input files by executing link_to_inputs.sh.
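The contents of link_to_inputs.sh are not shown here, so the following is only a hypothetical sketch of the linking step: the input directory, file names, and scenario path are all invented for illustration.

```python
# Hypothetical sketch of the symbolic-linking step; the real script's
# file list and directory layout are assumptions.
from pathlib import Path

inputs = Path("inputs")
inputs.mkdir(exist_ok=True)
(inputs / "model.ctl").touch()          # placeholder shared input file

scen = Path("scenarios/S1_1")            # one sample/realization directory
scen.mkdir(parents=True, exist_ok=True)

for f in inputs.iterdir():
    link = scen / f.name
    if not link.exists():
        # each scenario directory points back at the shared inputs,
        # avoiding one physical copy per scenario
        link.symlink_to(f.resolve())
```

Linking (rather than copying) keeps the 1,000 scenario directories lightweight, since every run shares the same static input files.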
realization_flows.py
Generates the flows used to trigger adaptive demands, creating one set for every state of the world and every realization. It can be run in parallel using realization_flows.sh (note: this script has been edited and needs to be checked).
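Since realization_flows.sh has not yet been checked, the batching pattern it is meant to implement can be sketched as follows. The worker function below is a stand-in for launching realization_flows.py, whose actual interface is not shown in this document.

```python
# Sketch of running one flow-generation job per (state of the world,
# realization) pair in parallel; make_flows is a placeholder for
# invoking realization_flows.py.
from concurrent.futures import ThreadPoolExecutor
import itertools

def make_flows(sow, realization):
    # stand-in for: python realization_flows.py <sow> <realization>
    return f"S{sow}_{realization}"

# 3 states of the world x 2 realizations (the full run would be 100 x 10)
pairs = list(itertools.product(range(1, 4), range(1, 3)))

with ThreadPoolExecutor(max_workers=2) as pool:
    # max_workers caps concurrency, mirroring batched execution
    results = list(pool.map(lambda p: make_flows(*p), pairs))
```

Capping `max_workers` gives the same effect as running the shell script in batches: only a fixed number of jobs run at once, regardless of how many (SOW, realization) pairs are queued.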
This is handled by curtailment_scaling.py, which can be run in parallel (in batches) using curtailment_scaling_expanse.sh. Each run's .xdd output file is converted into a compressed .parquet file.
Scripts in data_extraction.py can be used to:
- create combined files for each user for each realization with all 600 rules together
- create combined files for each user for all realizations and all rules (these files end up being very large and inconvenient to work with)