Pinned Repositories
farm
Family of AutoRegressive Models
Battlecode2019
Battlecode2020
Battlecode2021
Battlecode2022
FPointNet
Image-Processing-Java
MessengerMod
JRMOT_ROS
Source code for JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset
tjmachinelearning
Official TJHSST Machine Learning Club Website Repository
mvpatel2000's Repositories
mvpatel2000/Battlecode2019
mvpatel2000/Battlecode2022
mvpatel2000/FPointNet
mvpatel2000/Image-Processing-Java
mvpatel2000/MessengerMod
mvpatel2000/AITemplate
AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
mvpatel2000/assassins-assigner
mvpatel2000/composer
Library of algorithms to speed up neural network training
mvpatel2000/diffusion-benchmark
mvpatel2000/DynamicEnvRL
mvpatel2000/examples
Fast and flexible reference benchmarks
mvpatel2000/ExploraVision
mvpatel2000/ffcv
FFCV: Fast Forward Computer Vision (and other ML workloads!)
mvpatel2000/flash-attention
Fast and memory-efficient exact attention
mvpatel2000/frustum-pointnets
Frustum PointNets for 3D Object Detection from RGB-D Data
mvpatel2000/GovML
mvpatel2000/grouped_gemm
PyTorch bindings for CUTLASS grouped GEMM.
mvpatel2000/keras-frcnn
mvpatel2000/llm-analysis
Latency and Memory Analysis of Transformer Models for Training and Inference
mvpatel2000/lm-evaluation-harness
A framework for few-shot evaluation of language models.
mvpatel2000/Lux-Design-S2
Repository for the Lux AI Challenge, season 2
mvpatel2000/megablocks
mvpatel2000/pytest-codeblocks
📄 Test code blocks in your READMEs
mvpatel2000/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
mvpatel2000/search-stack
Appleseed Search Stack Docker composition. Uses Solr, Elasticsearch, MongoDB, Mono, DotNet, ASPNet, NGINX, MySQL, and PostgreSQL.
mvpatel2000/stk
mvpatel2000/tensorflow
Computation using data flow graphs for scalable machine learning
mvpatel2000/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
mvpatel2000/transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
mvpatel2000/triton
Development repository for the Triton language and compiler