ADAM
Introduction
ADAM is a library and command line tool that enables the use of Apache Spark to parallelize genomic data analysis across cluster/cloud computing environments. ADAM uses a set of schemas to describe genomic sequences, reads, variants/genotypes, and features, and can be used with data in legacy genomic file formats such as SAM/BAM/CRAM, BED/GFF3/GTF, and VCF, as well as data stored in the columnar Apache Parquet format. On a single node, ADAM provides performance competitive with optimized multi-threaded tools, while enabling scale out to clusters with more than a thousand cores. ADAM's APIs can be used from Scala, Java, Python, R, and SQL.
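For example, loading aligned reads from an interactive Spark shell might look like the following sketch. The path is illustrative, and exact package and method names vary between ADAM releases, so check the API documentation for your version:

```scala
// Sketch: importing the ADAMContext implicits adds genomics load methods
// to the existing SparkContext (sc) in the shell.
import org.bdgenomics.adam.rdd.ADAMContext._

// The input format (SAM/BAM/CRAM or Parquet) is detected from the path.
val reads = sc.loadAlignments("sample.bam")

// The loaded reads are backed by a Spark RDD, so standard Spark
// operations apply.
println(reads.rdd.count())
```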
Why ADAM?
Over the last decade, DNA and RNA sequencing has evolved from an expensive, labor-intensive method into a cheap commodity, and the consequence is the generation of massive amounts of genomic and transcriptomic data. Tools to process and interpret these data are typically developed with a focus on the quality of the results generated, not on scalability and interoperability. A typical sequencing workflow consists of a suite of tools spanning quality control, mapping, and mapped-read preprocessing through to variant calling or quantification, depending on the application at hand. In practice, such a workflow is implemented as tools glued together by scripts or workflow descriptions, with data written to files at each step. This approach entails three main bottlenecks:
- scaling the workflow comes down to scaling each of the individual tools,
- the stability of the workflow heavily depends on the consistency of intermediate file formats, and
- writing to and reading from disk is a major slow-down.
We propose a transformative solution to these problems: replacing ad hoc workflows with the ADAM framework, developed in the Apache Spark ecosystem.
ADAM enables the high performance in-memory cluster computing functionality of Apache Spark on genomic data, ensuring efficient and fault-tolerant distribution based on data parallelism, without the intermediate disk operations required in traditional distributed approaches.
Furthermore, the ADAM and Apache Spark approach comes with an additional benefit. Typically, the endpoint of a sequencing pipeline is a file with processed data for a single sample: e.g., variants for DNA sequencing, read counts for RNA sequencing, etc. However, the real endpoint of a sequencing experiment initiated by an investigator is the interpretation of these data in a given context. This usually translates into (statistical) analysis of multiple samples, connection with (clinical) metadata, and interactive visualization, using data science tools such as R, Python, Tableau, and Spotfire. In addition to scalable distributed processing, Apache Spark also allows interactive data analysis in the form of analysis notebooks (Spark Notebook, Jupyter, or Zeppelin), or direct connection to the data from R and Python.
Getting Started
Building from Source
You will need to have Apache Maven version 3.1.1 or later installed in order to build ADAM.
Note: The default configuration is for Hadoop 2.7.3. If building against a different version of Hadoop, please pass
-Dhadoop.version=<HADOOP_VERSION>
to the Maven command. ADAM will cross-build for both Spark 1.x and 2.x, but builds by default against Spark 1.6.3 and Scala 2.10. To build for Spark 2, run the ./scripts/move_to_spark_2.sh script. To build for Scala 2.11, run the ./scripts/move_to_scala_2.11.sh script.
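Putting these steps together, a Spark 2.x / Scala 2.11 build could be produced as follows (a sketch, run from the ADAM source root; the scripts rewrite the POMs in place, so they only need to be run once per checkout):

```shell
# Rewrite the build for Spark 2.x, then for Scala 2.11, then package.
./scripts/move_to_spark_2.sh
./scripts/move_to_scala_2.11.sh
mvn clean package -DskipTests
```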
$ git clone https://github.com/bigdatagenomics/adam.git
$ cd adam
$ mvn clean package -DskipTests
Outputs
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.647s
[INFO] Finished at: Thu May 23 15:50:42 PDT 2013
[INFO] Final Memory: 19M/81M
[INFO] ------------------------------------------------------------------------
You might want to take a peek at the scripts/jenkins-test
script and give it a run. It will fetch a mouse chromosome, encode it to ADAM
reads and pileups, run flagstat, etc. We use this script to test that ADAM is working correctly.
Installing Spark
You'll need to have a Spark release on your system and the $SPARK_HOME
environment variable pointing at it; prebuilt binaries can be downloaded from the
Spark website. Currently, our continuous builds default to
Spark 1.6.1 built against Hadoop 2.6, but any more recent Spark distribution should also work.
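As an example, a prebuilt release could be installed as follows (a sketch; the version, Hadoop build, and download mirror are illustrative, so adjust them to your environment):

```shell
# Download and unpack a prebuilt Spark release, then point SPARK_HOME at it.
wget https://archive.apache.org/dist/spark/spark-1.6.3/spark-1.6.3-bin-hadoop2.6.tgz
tar xzf spark-1.6.3-bin-hadoop2.6.tgz
export SPARK_HOME="$PWD/spark-1.6.3-bin-hadoop2.6"
```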
Documentation
ADAM's documentation is hosted at readthedocs.
The ADAM/Big Data Genomics Ecosystem
ADAM builds upon the open source Apache Spark, Apache Avro, and Apache Parquet projects. Additionally, ADAM can be deployed for both interactive and production workflows using a variety of platforms.
There are a number of tools built using ADAM's core APIs:
- Avocado is a variant caller built on top of ADAM for germline and somatic calling
- Cannoli uses ADAM's pipe API to parallelize common single-node genomics tools (e.g., BWA, bowtie2, FreeBayes)
- DECA is a reimplementation of the XHMM copy number variant caller on top of ADAM/Apache Spark
- Gnocchi provides primitives for running GWAS/eQTL tests on large genotype/phenotype datasets using ADAM
- Lime provides a parallel implementation of genomic set theoretic primitives using the region join API
- Mango is a library for visualizing large scale genomics data with interactive latencies and serving data using the GA4GH schemas
Connecting with the ADAM team
The best way to reach the ADAM team is to post in our Gitter channel or to open an issue on our GitHub repository. For more contact methods, please see our support page.
License
ADAM is released under the Apache License, Version 2.0.
Citing ADAM
ADAM has been described in two manuscripts. The first, a tech report, came out in 2013 and described the rationale behind using schemas for genomics, and presented an early implementation of some of the preprocessing algorithms. To cite this paper, please use:
@techreport{massie13,
title={{ADAM}: Genomics Formats and Processing Patterns for Cloud Scale Computing},
author={Massie, Matt and Nothaft, Frank and Hartl, Christopher and Kozanitis, Christos and Schumacher, Andr{\'e} and Joseph, Anthony D and Patterson, David A},
year={2013},
number={UCB/EECS-2013-207},
institution={EECS Department, University of California, Berkeley}
}
The second, a conference paper, appeared in the SIGMOD 2015 Industrial Track. This paper described how ADAM's design was influenced by database systems, expanded upon the concept of a stack architecture for scientific analyses, presented more results comparing ADAM to state-of-the-art single node genomics tools, and demonstrated how the architecture generalized beyond genomics. To cite this paper, please use:
@inproceedings{nothaft15,
title={Rethinking Data-Intensive Science Using Scalable Analytics Systems},
author={Nothaft, Frank A and Massie, Matt and Danford, Timothy and Zhang, Zhao and Laserson, Uri and Yeksigian, Carl and Kottalam, Jey and Ahuja, Arun and Hammerbacher, Jeff and Linderman, Michael and Franklin, Michael and Joseph, Anthony D. and Patterson, David A.},
booktitle={Proceedings of the 2015 International Conference on Management of Data (SIGMOD '15)},
year={2015},
organization={ACM}
}
We prefer that you cite both papers, but if you can only cite one paper, we prefer that you cite the SIGMOD 2015 manuscript.