A lot of documentation exists, but it is mainly concerned with contributing new modules to the nf-core/modules repository:

- https://nf-co.re/developers/tutorials/dsl2_modules_tutorial
- https://github.com/nf-core/modules
I think the best resource for learning about the structure of modules and how they are incorporated into workflows is this video, starting at 19:30: https://youtu.be/ggGGhTMgyHI?t=1172

It also looks like an additional helpful video will be released on September 7, 2021: https://nf-co.re/events/2021/bytesize-19-dsl2-pipeline-starter
To add a new local module:

1. Clone this repo and create a new branch to work on.

2. Install nf-core, e.g. using conda (the command below installs both nf-core and nextflow):

   ```bash
   conda create --name nf-core python=3.7 nf-core nextflow
   ```

3. Navigate to the metaBenchmarks directory.

4. Check whether a module already exists in nf-core:

   ```bash
   nf-core modules list remote
   ```

5. If it exists, you can install it using `nf-core modules install`, e.g. for samtools/sort:

   ```bash
   nf-core modules install samtools/sort
   ```

6. If a module doesn't exist, create your own using:

   ```bash
   nf-core modules create
   ```

7. When prompted for `Name of tool/subtool:`, enter the tool name; if, for example, you're creating a DIAMOND BLASTp module you would enter `diamond`.

8. Set the `Process resource label` of all benchmarking processes to `process_high`. This gives every benchmark process the same resources.

9. When prompted `Will the module require a meta map of sample information? (yes/no) [y/n] (y):`, enter `y`. (A sketch of the resulting module file is shown after these steps.)
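For orientation, a freshly created local module ends up looking roughly like the sketch below. This is a hypothetical example, not a file from this repository: the module name (`DIAMOND_BLASTP`), file path, container tags and the `diamond` command line are placeholder assumptions, while the `process_high` label and the `meta` map tuple reflect the choices made in the prompts above.

```nextflow
// Hypothetical sketch of modules/local/diamond_blastp.nf -- tool, versions
// and paths are placeholders; `nf-core modules create` generates the real file.
process DIAMOND_BLASTP {
    tag "$meta.id"
    label 'process_high'   // same resource label for all benchmarking processes

    conda "bioconda::diamond=2.0.15"                               // placeholder
    container "quay.io/biocontainers/diamond:2.0.15--hb97b32f_1"   // placeholder

    input:
    tuple val(meta), path(fasta)   // meta map of sample info + query fasta
    path db                        // DIAMOND database

    output:
    tuple val(meta), path("*.tsv"), emit: tsv
    path "versions.yml"           , emit: versions

    script:
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
    diamond blastp \\
        --threads ${task.cpus} \\
        --db ${db} \\
        --query ${fasta} \\
        --out ${prefix}.tsv

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        diamond: \$(diamond version | sed 's/^.*version //')
    END_VERSIONS
    """
}
```

The module can then be pulled into a workflow with an include statement such as `include { DIAMOND_BLASTP } from './modules/local/diamond_blastp'`.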
Benchmarking results are aggregated using the `autometa-benchmark` entrypoint, which accepts taxon-profiling and binning results. It requires a standardized output format for each process, detailed below:
For taxon-profiling results: a tab-delimited text file containing at least `contig` and `taxid` columns. The taxids should correspond to NCBI's taxids, located in their taxdump tarball.

NOTE: contigs that could not be classified should be assigned the root taxid (`taxid = 1`).

Example file contents of `taxon_profiling_tool_standardized_output.tsv`:

```
contig       taxid   <some_unnecessary_metadata_column>
contig_id_1  12345   ...
contig_id_2  5555    ...
...
```
This file can then be supplied to `autometa-benchmark` like so:

```bash
autometa-benchmark \
    --benchmark classification \
    --predictions taxon_profiling_tool_standardized_output.tsv \
    --reference <reference_file> \
    --ncbi <path/to/ncbi/taxdump/databases/directory>
```
NOTE: The taxdump files from NCBI are only required for benchmarking with `autometa-benchmark --benchmark classification`.
For binning results: a tab-delimited text file containing at least `contig` and `cluster` columns.

NOTE: contigs that were left unclustered should have no value in the `cluster` column.

Example file contents of `binning_tool_standardized_output.tsv`:

```
contig       cluster   <some_unnecessary_metadata_column>
contig_id_1  bin_0001  ...
contig_id_2  bin_0002  ...
contig_id_3            ...
...
```
This file can then be supplied to `autometa-benchmark` like so:

```bash
autometa-benchmark \
    --benchmark binning-classification \
    --predictions binning_tool_standardized_output.tsv \
    --reference <reference_file>
```
Autometa

- Type: Binning of Contigs
- Website: https://github.com/KwanLab/Autometa/releases/tag/1.0.2
- Inputs:
  - Nucleotide contigs
- Outputs:
- Code to run individual module:
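The test command for this module isn't recorded here; assuming it follows the same test-script convention as MaxBin2 below, it would presumably be:

```bash
# hypothetical -- mirrors the MaxBin2 test invocation below; check modules/local/tests/
nextflow run modules/local/tests/autometa_test.nf
```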
MaxBin2

- Type: Binning of Contigs
- Website: https://sourceforge.net/projects/maxbin2/
- Inputs:
  - Nucleotide contigs
  - Reads files (forward and reverse, used to calculate abundance data)
- Outputs:
  - maxbin2_output.001.fasta
  - maxbin2_output.002.fasta
  - maxbin2_output.003.fasta
  - maxbin2_output.004.fasta
  - maxbin2_output.abund1
  - maxbin2_output.abund2
  - maxbin2_output.abundance
  - maxbin2_output.log
  - maxbin2_output.marker
  - maxbin2_output.marker_of_each_bin.tar.gz
  - maxbin2_output.noclass
  - maxbin2_output.summary
  - maxbin2_output.tooshort
- Code to run individual module (to run a module test script, cd to metaBenchmarks/ and run the test nextflow file from there):

```bash
nextflow run modules/local/tests/maxbin2_test.nf
```
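Such a test script is typically just a tiny DSL2 workflow that includes the module and feeds it test inputs. A minimal sketch, assuming the include path, channel shape and test data locations (all placeholders here) match the module's declared inputs:

```nextflow
#!/usr/bin/env nextflow
// Hypothetical sketch of a module test harness -- the include path, channel
// shape and test data locations are placeholders, not this repo's real files.
nextflow.enable.dsl = 2

include { MAXBIN2 } from '../maxbin2'

workflow {
    // meta map + nucleotide contigs + reads used for abundance estimation
    input_ch = Channel.of([
        [ id: 'test_sample' ],
        file('test_data/contigs.fasta'),
        file('test_data/reads.fastq.gz')
    ])
    MAXBIN2(input_ch)
}
```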
nf-core/benchmark is a bioinformatics best-practice analysis pipeline for benchmarking taxonomic profilers and metagenomic binners.
The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.
1. Install Nextflow (`>=21.04.0`).

2. Install any of Docker, Singularity, Podman, Shifter or Charliecloud for full pipeline reproducibility (please only use Conda as a last resort; see docs).

3. Download the pipeline and test it on a minimal dataset with a single command:

   ```bash
   nextflow run nf-core/benchmark -profile test,<docker/singularity/podman/shifter/charliecloud/conda/institute>
   ```

   - Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use `-profile <institute>` in your command. This will enable either `docker` or `singularity` and set the appropriate execution settings for your local compute environment.
   - If you are using `singularity` then the pipeline will auto-detect this and attempt to download the Singularity images directly as opposed to performing a conversion from Docker images. If you are persistently observing issues downloading Singularity images directly due to timeout or network issues then please use the `--singularity_pull_docker_container` parameter to pull and convert the Docker image instead. Alternatively, it is highly recommended to use the `nf-core download` command to pre-download all of the required containers before running the pipeline and to set the `NXF_SINGULARITY_CACHEDIR` or `singularity.cacheDir` Nextflow options so the images can be stored and re-used from a central location for future pipeline runs.
   - If you are using `conda`, it is highly recommended to use the `NXF_CONDA_CACHEDIR` or `conda.cacheDir` settings to store the environments in a central location for future pipeline runs (see the config sketch after these steps).
4. Start running your own analysis!

   ```bash
   nextflow run nf-core/benchmark -profile <docker/singularity/podman/shifter/charliecloud/conda/institute> --input samplesheet.csv --genome GRCh37
   ```
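As a sketch of the cache settings mentioned above (directory paths are placeholders), the two `cacheDir` options can be set once in a personal Nextflow configuration file such as `~/.nextflow/config`:

```nextflow
// Hypothetical snippet for ~/.nextflow/config -- paths are placeholders.
// Caches Singularity images and conda environments in one central location
// so later pipeline runs re-use them instead of re-downloading/re-building.
singularity.cacheDir = '/shared/cache/singularity'
conda.cacheDir       = '/shared/cache/conda'
```

The equivalent environment variables (`NXF_SINGULARITY_CACHEDIR`, `NXF_CONDA_CACHEDIR`) can be exported in your shell profile instead.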
The nf-core/benchmark pipeline comes with documentation about the pipeline usage, parameters and output.
nf-core/benchmark was originally written by .
We thank the following people for their extensive assistance in the development of this pipeline:
If you would like to contribute to this pipeline, please see the contributing guidelines.
For further information or help, don't hesitate to get in touch on the Slack `#benchmark` channel (you can join with this invite).
An extensive list of references for the tools used by the pipeline can be found in the `CITATIONS.md` file.
You can cite the `nf-core` publication as follows:
The nf-core framework for community-curated bioinformatics pipelines.
Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.