The results are separated into several experiments, each discussing a particular aspect of the evaluation. They can be browsed directly on GitHub:
| # | script/folder | description |
|---|---|---|
| 1 | 📁datasets | Datasets containing trajectories |
| 2 | 📁metrics | Metrics to compare two trajectories |
| 3 | 📁methods | Trajectory inference methods |
| 4 | 📁method_testing | Quick testing of methods using small datasets |
| 5 | 📁scaling | Scalability with increasing number of cells and features |
| 6 | 📁benchmark | Accuracy of TI methods on real and synthetic data |
| 7 | 📁stability | Stability of the inferred trajectory |
| 8 | 📁summary | Summarising the results into funky heatmaps |
| 9 | 📁guidelines | Guidelines for method users |
| 10 | 📁benchmark_interpretation | Benchmark interpretation |
| 11 | 📁example_predictions | Example trajectories |
| 12 | 📁manuscript | Manuscript |
The code used to generate these results can be found in the scripts folder.