wgs_somatic_cnv_sv_viper

Workflow to call structural and copy number variants in somatic whole genome data

💬 Introduction

This Snakemake workflow takes .bam files prepared according to GATK best practices and calls CNVs and SVs. The workflow can process tumor samples paired with normals or be run as a tumor-only analysis.

CNVkit

CNVkit performs best with a panel of normals (PoN), which should be generated according to the docs.

CNVnator

CNVnator is run in tumor-only mode, following the docs in the repo.

Manta

Manta can be run in tumor only or tumor/normal mode. Please refer to the docs.

TIDDIT

TIDDIT is run in tumor-only mode, as described in the repo.

❗ Dependencies

To run this workflow, the following tools need to be available:

  • python
  • snakemake
  • singularity

🎒 Preparations

Sample data

  1. Add all sample ids to samples.tsv in the column sample.
  2. Add sample type information, normal or tumor, to units.tsv.
  3. Use the analysis_output folder from wgs_std_viper as input.
  4. If a PoN was not created earlier, use the analysis_output folder from wgs_somatic_pon as input.
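As an illustrative sketch of the steps above, samples.tsv and units.tsv could look like the following (sample ids and column layout are hypothetical; check the workflow's schema files for the exact required columns):

```
# samples.tsv (illustrative)
sample
patient1

# units.tsv (illustrative; "type" is normal or tumor)
sample    type    bam
patient1  tumor   analysis_output/patient1_T.bam
patient1  normal  analysis_output/patient1_N.bam
```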

Reference data

  1. You need a reference .fasta file representing the genome used for mapping. In addition, an index file is required.
  • The required files for the human reference genome GRCh38 can be downloaded from Google Cloud. The download can be done manually in the browser or via gsutil on the command line:
gsutil cp gs://genomics-public-data/resources/broad/hg38/v0/Homo_sapiens_assembly38.fasta /path/to/download/dir/
  • If those resources are not available for your reference you may generate them yourself:
samtools faidx /path/to/reference.fasta
  2. CNVkit requires a panel of normals (PoN), which should be supplied. If you do not have a PoN, leave the value as "" to link the workflow to the output from wgs_somatic_pon.
  3. This workflow is set up to filter the resulting .vcf files from CNVnator, Manta and TIDDIT. If this is undesired, use an empty .bed file for filtering. Otherwise, the SweGen database is a great resource, as it contains tool-specific .bed files with normal variants for each of the three tools.
  4. Add the paths of the different files to config.yaml. The index file should be in the same directory as the reference .fasta.
  5. Make sure that the docker container versions are correct.
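A hedged sketch of what the corresponding config.yaml entries could look like (all key names and paths below are illustrative assumptions; use the keys defined in the repository's actual config.yaml):

```yaml
# config.yaml (illustrative; key names are assumptions)
reference:
  fasta: /path/to/Homo_sapiens_assembly38.fasta  # .fasta.fai index must sit in the same directory
cnvkit:
  pon: ""  # "" links the workflow to the wgs_somatic_pon output
filter_beds:
  cnvnator: /path/to/swegen_cnvnator.bed
  manta: /path/to/swegen_manta.bed
  tiddit: /path/to/swegen_tiddit.bed
```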

✅ Testing

The workflow repository contains a small test dataset in .tests/integration, which can be run like so:

cd .tests/integration
snakemake -s ../../Snakefile -j1 --use-singularity

🚀 Usage

The workflow is designed for WGS data, meaning large datasets that require substantial compute resources. For HPC clusters, it is recommended to use a cluster profile and run something like:

snakemake -s /path/to/Snakefile --profile my-awesome-profile
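A Snakemake profile is just a directory containing a config.yaml of default command-line options. As a minimal sketch, assuming a SLURM cluster (the submission flags and resource names below are illustrative):

```yaml
# my-awesome-profile/config.yaml (illustrative SLURM example)
jobs: 100
use-singularity: true
cluster: "sbatch --cpus-per-task={threads} --mem={resources.mem_mb}"
```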

🧑‍⚖️ Rule Graph

rule_graph