Pinned Repositories
fastai-Experiments-and-tips
My experiments and progress across various fastai applications
fastai2-Tabular-Baselines
A few baselines with a standard tabular model
fastinference
A collection of inference modules for fastai2
minimal-trainer-zoo
Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines
nbagile
Making nbdev compatible with agile frameworks and development
nbquarto
A small Python library solely for quick Quarto extensions
Practical-Deep-Learning-For-Coders
Material for my run of Fast.AI
Practical-Deep-Learning-for-Coders-2.0
Notebooks for the "A walk with fastai2" Study Group and Lecture Series
presentations
Research posters I have taken to conferences
Walk-with-fastai-revisited
Source notebook code for the course, stripped of all information. Please consider purchasing the course at https://store.walkwithfastai.com
muellerzr's Repositories
muellerzr/minimal-trainer-zoo
Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines
muellerzr/nbdistributed
Seamless interface for using PyTorch distributed with Jupyter notebooks
muellerzr/import-timer
Pragmatic approach to parsing import profiles for CI
muellerzr/presentations
Research posters I have taken to conferences
muellerzr/swift-weights
Speeding up the loading of ML models by efficiently splitting the weights across multiple disks, then reading them in parallel during model initialization
muellerzr/llama-3-8b-self-align
StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation applied to llama 3 8b
muellerzr/gated-discord-bot
Creating a gated course Discord for Maven courses
muellerzr/fastai-2019-pt2-notes
Notes as I speed-run fastai's 2019 part 2 course
muellerzr/MS-AMP
Microsoft Automatic Mixed Precision Library
muellerzr/timesheet-writer
API that updates my Google Sheet to log my time from a CLI
muellerzr/fine-tuning-llms
muellerzr/hf-model-downloader
Easy way to download model backends from HF
muellerzr/llm-cmd-comp-qwen
Shell completion using LLM for Qwen models
muellerzr/torchtune
A Native-PyTorch Library for LLM Fine-tuning
muellerzr/trl
Train transformer language models with reinforcement learning.
muellerzr/accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
muellerzr/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
muellerzr/chat-ui
Open source codebase powering the HuggingChat app
muellerzr/d2e
All of the JSON data files and card images that the D2eCV utilizes for Descent 2nd Edition
muellerzr/ddp-annotated-notes
muellerzr/deepseek-coder-self-instruct
muellerzr/hf-raid
Loading safetensors weights off a Thunderbolt RAID at home
muellerzr/llm_context_benchmarks
muellerzr/muellerzr.github.io
Blog
muellerzr/nano-accelerate
muellerzr/nvidia-speed-benchmarks
Benchmarks of my GPUs using Stas's benchmark script
muellerzr/PiPPy
Pipeline Parallelism for PyTorch
muellerzr/qwen3-cli
Trying to make SmolQwen better at CLI translation
muellerzr/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
muellerzr/whisper-transcribe
Fork of Whisper Writer, but minimal