t-vi
Principal Research Engineer at @Lightning-AI. Mathematics and Inference at @MathInf. I do a lot of @PyTorch work.
MathInf · https://mathinf.eu/ · Munich
Pinned Repositories
acdl2020
AICamera
Demonstration of using Caffe2 inside an Android application.
candlegp
Gaussian Processes in PyTorch
interesting-rates
Economic models and things in PyTorch
lod2021
PyTorch Tutorial at the LOD2021 conference
maskrcnn-benchmark
Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
pytorch-tvmisc
Totally Versatile Miscellanea for PyTorch
warp-ctc
PyTorch bindings for Warp-CTC
t-vi's Repositories
t-vi/pytorch-tvmisc
Totally Versatile Miscellanea for PyTorch
t-vi/candlegp
Gaussian Processes in PyTorch
t-vi/acdl2020
t-vi/lod2021
PyTorch Tutorial at the LOD2021 conference
t-vi/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
t-vi/lit_torchdrift
Lightning module for TorchDrift
t-vi/kornia
Open Source Differentiable Computer Vision Library for PyTorch
t-vi/lightning-bolts
Toolbox of models, callbacks, and datasets for AI/ML researchers.
t-vi/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, quantization, LoRA fine-tuning, pre-training. Apache 2.0-licensed.
t-vi/minGPT
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
t-vi/pytorch-lightning
The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
t-vi/bitsandbytes
8-bit CUDA functions for PyTorch
t-vi/ghstack
Submit stacked diffs to GitHub on the command line
t-vi/gptq
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
t-vi/incubator-tvm-site
Repository for the TVM project website
t-vi/koalitionsvertrag2021
Coalition agreement between the SPD, Green Party, and FDP as clean PDF file, .docx file, and .txt file
t-vi/lightning-flash
Collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning.
t-vi/litgpt
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
t-vi/nbsphinx
:ledger: Sphinx source parser for Jupyter notebooks
t-vi/nvfuser
A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser")
t-vi/Pillow
The friendly PIL fork (Python Imaging Library)
t-vi/pytorch-image-models
PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, EfficientNetV2, NFNet, Vision Transformer, MixNet, MobileNet-V3/V2, RegNet, DPN, CSPNet, and more
t-vi/resource-stream
CUDA related news and material links
t-vi/Tensile
Stretching GPU performance for GEMMs and tensor contractions.
t-vi/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.
t-vi/tutorials
PyTorch tutorials.
t-vi/tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
t-vi/tvm-distro
Official autotvm log distro
t-vi/unsloth
5× faster, 60% less memory QLoRA fine-tuning
t-vi/zulip
Zulip server and web app: powerful open source team chat
Zulip server and web app—powerful open source team chat