Pinned Repositories
annotated-transformer
http://nlp.seas.harvard.edu/2018/04/03/attention.html
apex
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
asitop
Perf monitoring CLI tool for Apple Silicon
benchmarks
A benchmark framework for Tensorflow
cnn-explainer
Learning Convolutional Neural Networks with Interactive Visualization.
code-samples
Source code examples from the Parallel Forall Blog
configuration
Like some files bro
cuda-python
CUDA Python Low-level Bindings
CUDALibrarySamples
CUDA Library Samples
cudf
cuDF - GPU DataFrame Library
dominicshanshan's Repositories
dominicshanshan/annotated-transformer
http://nlp.seas.harvard.edu/2018/04/03/attention.html
dominicshanshan/apex
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
dominicshanshan/asitop
Perf monitoring CLI tool for Apple Silicon
dominicshanshan/cnn-explainer
Learning Convolutional Neural Networks with Interactive Visualization.
dominicshanshan/code-samples
Source code examples from the Parallel Forall Blog
dominicshanshan/cuda-python
CUDA Python Low-level Bindings
dominicshanshan/CUDALibrarySamples
CUDA Library Samples
dominicshanshan/cudf
cuDF - GPU DataFrame Library
dominicshanshan/cupy
NumPy & SciPy for GPU
dominicshanshan/cuscipy
dominicshanshan/cvxpy
A Python-embedded modeling language for convex optimization problems.
dominicshanshan/DeepLOB-Deep-Convolutional-Neural-Networks-for-Limit-Order-Books
This Jupyter notebook demonstrates our recent work, "DeepLOB: Deep Convolutional Neural Networks for Limit Order Books", published in IEEE Transactions on Signal Processing. We use the FI-2010 dataset and show how the model architecture is constructed. FI-2010 is publicly available; interested readers can check out the original paper.
dominicshanshan/distributed_training
dominicshanshan/fil_backend
FIL backend for the Triton Inference Server
dominicshanshan/former
Simple transformer implementation from scratch in PyTorch.
dominicshanshan/legate.numpy
An Aspiring Drop-In Replacement for NumPy at Scale
dominicshanshan/machine-learning-for-trading
Code for Machine Learning for Algorithmic Trading, 2nd edition.
dominicshanshan/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
dominicshanshan/nccl
Optimized primitives for collective multi-GPU communication
dominicshanshan/NeMo
NeMo: a toolkit for conversational AI
dominicshanshan/ocropus4
dominicshanshan/P-tuning
A novel method to tune language models. Codes and datasets for paper ``GPT understands, too''.
dominicshanshan/scipy
SciPy library main repository
dominicshanshan/TabFormer
Code & Data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP, 2021)
dominicshanshan/tensorflow_macos
TensorFlow for macOS 11.0+ accelerated using Apple's ML Compute framework.
dominicshanshan/tf-quant-finance
High-performance TensorFlow library for quantitative finance.
dominicshanshan/transformer
Implementation of "Attention Is All You Need" using PyTorch
dominicshanshan/transformer_develop
dominicshanshan/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.
dominicshanshan/tutorial-multi-gpu
Efficient Distributed GPU Programming for Exascale, an SC/ISC Tutorial