jbalma
This is my personal GitHub account; as such, the contributions, views, and opinions expressed here do not represent those of my employer.
Maxwell Labs · Minnesota
Pinned Repositories
BoltzmannBaby
BoltzmannBaby is a C/C++ OpenMP 4.0, RBM-based deep learning research code used to understand the underlying thermodynamic properties of deep networks in terms of temperature, pressure, and volume, as well as energy density and entropy currents. Almost any 2-D data can be used as input: a binning mechanism converts (mostly) arbitrary 2-D data structures into binary input structures. Two binning subroutines are currently available: one for time-series or spatial-location data (e.g. images or f(x)-style data), and another for character-based data (raw text), in which characters are indexed along the y-axis of the matrix and position within a word or sentence along the x-axis. The current benchmark problems use either the character/text-based data or a noisy sinusoidal function. The learning rate is adjusted automatically with a simple AdaGrad scheme, and the bias neurons are either updated each epoch or held fixed, depending on user preference. A number of shifted sub-samples can be used to enlarge the data set (useful for text-based learning). The default setup trains each RBM layer-wise on a set of early-childhood reading samples, fables, and Kafka short stories. Arbitrary numbers of additional RBMs can be stacked in the chain; the default configuration uses eight. Full documentation is in the works.
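As a rough illustration of the character-binning scheme described above, here is a minimal Python sketch (BoltzmannBaby itself is written in C/C++; the alphabet, function names, and window parameters below are hypothetical and not taken from the repository):

```python
# Minimal sketch (not BoltzmannBaby's actual code) of binning raw text into a
# binary 2-D matrix: rows index characters in a fixed alphabet (y-axis),
# columns index position within the sample (x-axis).
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz .,!?"   # hypothetical character bins
CHAR_TO_BIN = {c: i for i, c in enumerate(ALPHABET)}

def bin_text(sample: str, width: int) -> np.ndarray:
    """Return a (len(ALPHABET), width) binary matrix for one text sample."""
    grid = np.zeros((len(ALPHABET), width), dtype=np.int8)
    for x, ch in enumerate(sample[:width].lower()):
        y = CHAR_TO_BIN.get(ch)
        if y is not None:                       # characters outside the alphabet are dropped
            grid[y, x] = 1
    return grid

def shifted_subsamples(text: str, width: int, shift: int):
    """Enlarge the data set by sliding a fixed-width window over the text in shifts."""
    for start in range(0, max(1, len(text) - width), shift):
        yield bin_text(text[start:start + width], width)

# Usage: each binary matrix is flattened into one visible-layer vector for the first RBM.
visible = [g.flatten() for g in shifted_subsamples("the quick brown fox jumps", width=16, shift=4)]
```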
ck
The Collective Knowledge framework helps convert ad-hoc code, data, and scripts into portable, customizable, and reusable components with a simple Python API and an integrated package manager for Linux, macOS, Windows, and Android; assemble automated workflows; crowdsource complex experiments; generate interactive papers; and more.
CNTK
Computational Network Toolkit (CNTK)
cosmoflow-benchmark
Benchmark implementation of CosmoFlow in TensorFlow Keras
deep500
A Deep Learning Meta-Framework and HPC Benchmarking Library
dmlc-core
A common bricks library for building scalable and portable distributed machine learning.
graph_nets
Build Graph Nets in Tensorflow
mesh-transformer-mpi
MLPerf HPC Working Group implementation of Mesh-TensorFlow, plus build scripts for TensorFlow with MPI.
pharml
PharML is a framework for predicting compound affinity for protein structures. It uses a novel Molecular-Highway Graph Neural Network (MH-GNN) architecture based on state-of-the-art deep learning techniques. This repository contains the visualization, preprocessing, training, and inference code, written in Python and C. In addition, we provide an ensemble of pre-trained models that can readily be used to generate rank-ordered predictions of compound affinity relative to a given target. DISCLAIMER: Compounds predicted by PharML.Bind should not be used without consulting a doctor or pharmacist; all results should be considered unverified and used only as a starting point for further investigation. Use at your own risk!
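As a rough sketch of the rank-ordering step described above (the model interface and function names here are hypothetical assumptions for illustration, not PharML's actual API):

```python
# Minimal sketch (hypothetical interface, not PharML's API) of ensembling per-model
# affinity scores for a set of compounds against one target, then rank-ordering them.
import numpy as np

def rank_compounds(models, target, compounds):
    """Average each compound's predicted affinity over an ensemble of models and
    return (compound, mean_score) pairs sorted from highest to lowest score."""
    # scores[i, j] = model i's predicted affinity of compound j for `target`
    scores = np.array([[m.predict(target, c) for c in compounds] for m in models])
    mean_scores = scores.mean(axis=0)
    order = np.argsort(mean_scores)[::-1]       # descending: strongest predicted binders first
    return [(compounds[i], float(mean_scores[i])) for i in order]
```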
resnet101_tencent_distributed
ResNet-101 training on the Tencent ML-Images dataset using various distributed frameworks.
jbalma's Repositories
jbalma/pharml
jbalma/mesh-transformer-mpi
jbalma/BoltzmannBaby
jbalma/resnet101_tencent_distributed
jbalma/ck
jbalma/CNTK
jbalma/cosmoflow-benchmark
jbalma/deep500
jbalma/dmlc-core
jbalma/graph_nets
jbalma/heptrkx-gnn-tracking
jbalma/QE-GPU
GPU-accelerated Quantum ESPRESSO
jbalma/results
Public results
jbalma/smclocalize
jbalma/string-transport
Solver for the cosmic string transport equation.
jbalma/tencent-ml-images
The largest multi-label image database; a ResNet-101 model; 80.73% top-1 accuracy on ImageNet.
jbalma/training
Reference implementations of training benchmarks