katanallama's Stars
tecosaur/org-pandoc-import
Mirror of https://git.tecosaur.net/tec/org-pandoc-import
alekseysidorov/nixpkgs-cross-overlay
tvlfyi/tvix
Tvix - A Rust implementation of Nix. Read-only mirror of https://cs.tvl.fyi/depot/-/tree/tvix
willbush/system
Trying to build the perfect system
stm32-rs/stm32-rs
Embedded Rust device crates for STM32 microcontrollers
karpathy/llama2.c
Inference Llama 2 in one file of pure C
DeterminateSystems/magic-nix-cache-action
Save 30-50%+ of CI time without any effort or cost. Use Magic Nix Cache, a totally free and zero-configuration binary cache for Nix on GitHub Actions.
numtide/nix-gl-host
Run OpenGL/CUDA programs built with Nix on all Linux distributions.
nix-community/nixGL
A wrapper tool for Nix OpenGL applications [maintainer=@guibou]
tuhhosg/reupnix
Reconfigurable and Updateable Embedded Systems
artidoro/qlora
QLoRA: Efficient Finetuning of Quantized LLMs
zylon-ai/private-gpt
Interact with your documents using the power of GPT, 100% privately, no data leaks
lucidrains/MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch
qwopqwop200/GPTQ-for-LLaMa
4-bit quantization of LLaMA using GPTQ
UX-Decoder/Segment-Everything-Everywhere-All-At-Once
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
ggerganov/ggml
Tensor library for machine learning
yoheinakajima/babyagi
microsoft/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
huggingface/trl
Train transformer language models with reinforcement learning.
nixified-ai/flake
A Nix flake for many AI projects
tysam-code/hlb-gpt
Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wikitext-103 on a single A100 in <100 seconds. Scales to larger models with one parameter change (feature currently in alpha).
michaelgutmann/ml-pen-and-paper-exercises
Pen and paper exercises in machine learning
IST-DASLab/sparsegpt
Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
openai/chatgpt-retrieval-plugin
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
ggerganov/llama.cpp
LLM inference in C/C++
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
antimatter15/alpaca.cpp
Locally run an Instruction-Tuned Chat-Style LLM
bitsandbytes-foundation/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
Dao-AILab/flash-attention
Fast and memory-efficient exact attention