This repository contains the code, notebooks, and links to the datasets accompanying:
- The article Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics
- The blog entry Machine Learning Pipelines for High Energy Physics Using Apache Spark with BigDL and Analytics Zoo
Authors and credits:
- Principal author of the notebooks: Matteo.Migliorini@cern.ch
- Authors and contacts: Luca.Canali@cern.ch; Riccardo.Castellotti@cern.ch; Matteo.Migliorini@cern.ch
- Original research article, raw data and neural network models by: T.Q. Nguyen et al., Comput Softw Big Sci (2019) 3: 12
- Acknowledgements: Viktor Khristenko, Thong Nguyen, Maurizio Pierini, Maria Girone, Marco Zanetti, members of the Hadoop and Spark service at CERN, CMS Bigdata project, Intel team for BigDL and Analytics Zoo consultancy: Jiao (Jennie) Wang and Sajan Govindan.
Data and code to reproduce this work are made available via this repository.
- Datasets: links to download and a short description
- Notebooks with data preparation code using Apache Spark
- Notebooks with machine learning training
- Distributed DL training with Apache Spark and BigDL/Analytics Zoo (a minimal training sketch is shown below)
- Training DL models with TensorFlow (tf.keras)
- Other ML training, using Spark ML
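For illustration, here is a minimal sketch of distributed training with Analytics Zoo's NNEstimator on a Spark DataFrame. The model architecture, the column names (`HLF_input`, `encoded_label`), the file paths and the parameter values are placeholders for the sake of the example, not necessarily those used in the notebooks.

```python
from pyspark.sql import SparkSession
from zoo.common.nncontext import init_nncontext
from zoo.pipeline.api.keras.models import Sequential
from zoo.pipeline.api.keras.layers import Dense
from zoo.pipeline.api.keras.objectives import CategoricalCrossEntropy
from zoo.pipeline.nnframes import NNEstimator
from bigdl.optim.optimizer import Adam

# Spark context with BigDL/Analytics Zoo initialized
sc = init_nncontext("HLF classifier training")
spark = SparkSession(sc)

# Prepared datasets (hypothetical paths, produced by the data preparation notebooks)
train_df = spark.read.parquet("path/to/train.parquet")
test_df = spark.read.parquet("path/to/test.parquet")

# Placeholder fully connected classifier, defined with the Analytics Zoo Keras-style API
model = Sequential()
model.add(Dense(50, input_shape=(14,), activation="relu"))
model.add(Dense(3, activation="softmax"))

# NNEstimator wraps the model as a Spark ML Estimator: fit() runs
# data-parallel training on the executors, reading from the DataFrame
estimator = (NNEstimator(model, CategoricalCrossEntropy())
             .setOptimMethod(Adam())
             .setBatchSize(4096)              # placeholder hyperparameters
             .setMaxEpoch(12)
             .setFeaturesCol("HLF_input")     # hypothetical column names
             .setLabelCol("encoded_label"))

nn_model = estimator.fit(train_df)            # distributed training
predictions = nn_model.transform(test_df)     # inference on the test set
```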
Event data collected from the particle detector (CMS experiment) contains different types
of event topologies of interest.
A particle classifier built with neural networks can be used as an event filter,
improving on the state of the art in accuracy.
This work reproduces the findings of the paper
Topology classification with deep learning to improve real-time event selection at the LHC
using tools from the Big Data ecosystem, notably Apache Spark and BigDL/Analytics Zoo.
Data pipelines are of paramount importance to the success of machine learning projects: they integrate the multiple components and APIs used for data processing across the entire data chain. A good data pipeline implementation can accelerate and improve the productivity of the work around the core machine learning tasks. The pipeline we built consists of four steps:
- Data Ingestion: where we read the events in ROOT format from the CERN EOS storage system into a Spark DataFrame and save the result as a table stored in Apache Parquet files (a minimal sketch of this step is shown after this list)
- Feature Engineering and Event Selection: where the Parquet files containing all the event details produced in Data Ingestion are filtered, and datasets with new features are produced
- Parameter Tuning: where the best set of hyperparameters for each model architecture is found by performing a grid search (see the grid-search sketch after this list)
- Training: where the best models found in the previous step are trained on the entire dataset.
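For illustration, a minimal sketch of the Data Ingestion step is shown below: it reads ROOT files into a Spark DataFrame with the spark-root data source and writes the result as Parquet. The Maven coordinate, the data source name and the file paths are assumptions for the sake of the example and may differ from those used in the notebooks.

```python
from pyspark.sql import SparkSession

# Spark session with the spark-root data source on the classpath
# (package coordinate and version are assumptions)
spark = (SparkSession.builder
         .appName("data-ingestion")
         .config("spark.jars.packages", "org.diana-hep:spark-root_2.11:0.1.16")
         .getOrCreate())

# Read the ROOT files from EOS into a Spark DataFrame (hypothetical path)
events = (spark.read
          .format("org.dianahep.sparkroot")
          .load("root://eospublic.cern.ch//eos/path/to/data/*.root"))

# Save the events as a Parquet table for the downstream pipeline steps
events.write.mode("overwrite").parquet("path/to/events.parquet")
```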
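The Parameter Tuning step can be parallelized with Spark by training one configuration per task. The sketch below only shows the mechanics of distributing a hyperparameter grid; the grid values are placeholders and `train_and_evaluate` is a stub standing in for the actual model training and validation code.

```python
from itertools import product

# Hypothetical hyperparameter grid; the actual grid is defined in the notebooks
param_grid = {
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [128, 1024],
    "hidden_units": [50, 200],
}

def train_and_evaluate(params):
    # Stub: the real function would build a model with these hyperparameters,
    # train it on a sample of the training set and return the validation loss
    validation_loss = 0.0
    return params, validation_loss

# All combinations of the grid, one dict per configuration
configs = [dict(zip(param_grid, values)) for values in product(*param_grid.values())]

# Distribute the configurations over the cluster, one Spark task each
# (sc is an existing SparkContext, e.g. the one available in the notebooks)
results = (sc.parallelize(configs, numSlices=len(configs))
           .map(train_and_evaluate)
           .collect())

best_params, best_loss = min(results, key=lambda r: r[1])
```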
The results of the DL model training are satisfactory and match the results of the original research paper.
- Article "Machine Learning Pipelines with Modern Big DataTools for High Energy Physics"
- Blog post "Machine Learning Pipelines for High Energy Physics Using Apache Spark with BigDL and Analytics Zoo"
- Poster at the CERN openlab technical workshop 2019
- Presentation at Spark Summit SF 2019
- Presentation at CERN EP-IT Data science seminar