Pinned Repositories
alignment-handbook
Robust recipes to align language models with human and AI preferences
AlpaCare
axolotl
Go ahead and axolotl questions
babyagi
deeplearning-nlp-models
A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs. Models: word2vec, CNNs, transformer, gpt.
microstructure-plotter
visualize financial microstructure 📈 & debug trading bots 🤖
mistral_7b_lora_example
A simple example illustrating how to fine-tune Mistral-7B via (q)LoRA
tldr-transformers
The "tl;dr" on a few notable transformer papers (pre-2022).
will-thompson-k's Repositories
will-thompson-k/mistral_7b_lora_example
A simple example illustrating how to fine-tune Mistral-7B via (q)LoRA
will-thompson-k/alignment-handbook
Robust recipes to align language models with human and AI preferences
will-thompson-k/AlpaCare
will-thompson-k/axolotl
Go ahead and axolotl questions
will-thompson-k/chain-of-verification
This repository implements the chain of verification paper by Meta AI
will-thompson-k/deepchem
Democratizing Deep-Learning for Drug Discovery, Quantum Chemistry, Materials Science and Biology
will-thompson-k/DNA-Diffusion
🧬 Understanding the code of life: Generative modeling of regulatory DNA sequences with diffusion probabilistic models 💨
will-thompson-k/flash-attention
Fast and memory-efficient exact attention
will-thompson-k/generative_agents
Generative Agents: Interactive Simulacra of Human Behavior
will-thompson-k/jax-triton
jax-triton contains integrations between JAX and OpenAI Triton
will-thompson-k/langchain
⚡ Building applications with LLMs through composability ⚡
will-thompson-k/cv
Print-friendly, minimalist CV page
will-thompson-k/dont_know_jax
learning jax
will-thompson-k/langgraph
Build resilient language agents as graphs.
will-thompson-k/lit-gpt
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
will-thompson-k/LLM-Benchmark-Logs
Just a bunch of benchmark logs for different LLMs
will-thompson-k/llm-swarm
Manage scalable open LLM inference endpoints in Slurm clusters
will-thompson-k/llm_steer
Steer LLM outputs toward a certain topic or subject and enhance response capabilities using activation engineering (adding steering vectors)
will-thompson-k/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
will-thompson-k/micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
will-thompson-k/mistral-src
Reference implementation of Mistral AI 7B v0.1 model.
will-thompson-k/nano-llama31
nanoGPT style version of Llama 3.1
will-thompson-k/open_spiel
OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
will-thompson-k/paxml
Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimentation and parallelization, and has demonstrated industry-leading model FLOP utilization rates.
will-thompson-k/torchtitan
A native PyTorch Library for large model training
will-thompson-k/transformer-debugger
will-thompson-k/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
will-thompson-k/vocode-python
🤖 Build voice-based LLM agents. Modular + open source.
will-thompson-k/weak-to-strong
will-thompson-k/xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.