This is a Snakemake workflow for propagating hippocampal segmentations from Jordan DeKraker's AutoTop (@jordandekraker): https://github.com/jordandekraker/Hippocampal_AutoTop. It requires pre-processed T2SPACE data and makes use of transforms from ANTs buildtemplate on the SNSX32 dataset.
- Jonathan C. Lau (@jclauneuro)
Install Snakemake using conda:
conda create -c bioconda -c conda-forge -n snakemake snakemake
For installation details, see the instructions in the Snakemake documentation.
Configure the workflow according to your needs by editing the files config.yaml and participants.tsv.
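As a rough illustration, a minimal participants.tsv can follow the BIDS convention of one subject ID per row; the column name and subject labels below are assumptions, so check the template shipped with this repository for the exact columns the workflow expects:

```bash
# Hypothetical sketch: the participant_id column follows the BIDS convention,
# and sub-001 / sub-002 are placeholder subject labels -- match them to the
# labels used in your pre-processed T2SPACE data.
cat > participants.tsv <<'EOF'
participant_id
sub-001
sub-002
EOF
```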
Test your configuration by performing a dry-run via
snakemake -np
There are a few different ways to execute the workflow:
- Execute the workflow locally using an interactive job
- Execute the workflow using the `cc-slurm` profile
Execute the workflow locally using an interactive job:
salloc --time=3:00:00 --gres=gpu:t4:1 --cpus-per-task=8 --ntasks=1 --mem=32000 --account=YOUR_CC_ACCT srun snakemake --use-singularity --cores 8 --resources gpus=1 mem_mb=32000
The `cc-slurm` profile sets up default options for running on Compute Canada systems. More info is available in its README: https://github.com/khanlab/cc-slurm
If you haven't used it before, deploy the cc-slurm profile using:
cookiecutter gh:khanlab/cc-slurm -o ~/.config/snakemake -f
Note: you must have cookiecutter installed (e.g. `pip install cookiecutter`).
Then to execute the workflow for all subjects, submitting a job for each rule group, use:
snakemake --profile cc-slurm
In summary, a useful series of commands leading up to submission on a cluster:
snakemake -np
snakemake --dag | dot -Tpdf > dag.pdf
snakemake --profile cc-slurm --use-singularity
To export files to Dropbox, use: `snakemake -s export_dropbox.smk`
See the Snakemake documentation for further details.
After successful execution, you can create a self-contained interactive HTML report with all results via:
snakemake --report report.html
This report can, e.g., be forwarded to your collaborators.
The following recipe provides established best practices for running and extending this workflow in a reproducible way.
- Fork the repo to a personal or lab account (a git command sketch for these first steps follows this list).
- Clone the fork to the desired working directory for the concrete project/run on your machine.
- Create a new branch (the project-branch) within the clone and switch to it. The branch will contain any project-specific modifications (e.g. to configuration, but also to code).
- Modify the config, and any necessary sheets (and probably the workflow) as needed.
- Commit any changes and push the project-branch to your fork on GitHub.
- Run the analysis.
- Optional: Merge back any valuable and generalizable changes to the upstream repo via a pull request. This would be greatly appreciated.
- Optional: Push results (plots/tables) to the remote branch on your fork.
- Optional: Create a self-contained workflow archive for publication along with the paper (`snakemake --archive`).
- Optional: Delete the local clone/workdir to free space.
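The fork-and-branch steps above map onto a short series of git commands. This is only a sketch: the fork URL, directory, and branch name are placeholders, not names used by this workflow.

```bash
# Placeholder values: YOUR_GITHUB_USER, YOUR_FORK, and my-project-branch
# are examples only; substitute your own fork and branch names.
git clone https://github.com/YOUR_GITHUB_USER/YOUR_FORK.git
cd YOUR_FORK
git checkout -b my-project-branch        # project-specific branch
# ...edit config.yaml, participants.tsv, and the workflow as needed...
git add -u                               # stage the modified tracked files
git commit -m "Project-specific configuration"
git push -u origin my-project-branch     # push the project-branch to your fork
```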
No test cases yet.