madhavatreplit's Stars
tiangolo/fastapi
FastAPI framework, high performance, easy to learn, fast to code, ready for production
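A minimal usage sketch of the kind of app FastAPI targets (the route and parameter names are illustrative):

```python
# Minimal FastAPI app; run with: uvicorn main:app --reload
from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str | None = None):
    # Path and query parameters are parsed and validated from the type hints.
    return {"item_id": item_id, "q": q}
```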
jmorganca/ollama
Get up and running with Llama 2, Mistral, and other large language models locally.
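A small sketch of talking to a locally running Ollama server over its HTTP API (assumes `ollama serve` is listening on the default port 11434 and the model has already been pulled):

```python
import requests

# Non-streaming generation request against the local Ollama HTTP API.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])
```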
sebastianruder/NLP-progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
VikParuchuri/marker
Convert PDF to markdown + JSON quickly with high accuracy
karpathy/llama2.c
Inference Llama 2 in one file of pure C
redis/redis-py
Redis Python client
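A minimal sketch, assuming a Redis server on the default localhost:6379:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
r.set("greeting", "hello")
print(r.get("greeting"))  # -> "hello"
```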
openai/triton
Development repository for the Triton language and compiler
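A small Triton kernel sketch, close to the project's introductory vector-add example (requires a CUDA GPU and PyTorch):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```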
abetlen/llama-cpp-python
Python bindings for llama.cpp
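A usage sketch (the GGUF model path is a placeholder; any local model file works):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```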
highlight/highlight
highlight.io: The open source, full-stack monitoring platform. Error monitoring, session replay, logging, distributed tracing, and more.
litestar-org/litestar
Production-ready, Light, Flexible and Extensible ASGI API framework | Effortlessly Build Performant APIs
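A minimal Litestar app sketch (served with any ASGI server, e.g. `uvicorn app:app`):

```python
from litestar import Litestar, get

@get("/")
async def index() -> dict[str, str]:
    # Return values are serialized to JSON automatically.
    return {"hello": "world"}

app = Litestar(route_handlers=[index])
```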
PrefectHQ/marvin
✨ Build AI interfaces that spark joy
mit-han-lab/llm-awq
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
aio-libs-abandoned/aioredis-py
asyncio (PEP 3156) Redis support
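A sketch against the aioredis 2.x API, assuming a local Redis server (the project is archived and this functionality now lives in redis-py's asyncio support):

```python
import asyncio
import aioredis

async def main() -> None:
    redis = aioredis.from_url("redis://localhost", decode_responses=True)
    await redis.set("greeting", "hello")
    print(await redis.get("greeting"))

asyncio.run(main())
```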
smallcloudai/refact
WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
eschluntz/compress
Text compression for generating keyboard expansions
tysam-code/hlb-CIFAR10
Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!)
bugen/pypipe
Python pipe command line tool
Ber666/llm-reasoners
A library for advanced large language model reasoning
executablebooks/markdown-it-py
Markdown parser, done right. 100% CommonMark support, extensions, syntax plugins & high speed. Now in Python!
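A minimal rendering sketch:

```python
from markdown_it import MarkdownIt

md = MarkdownIt("commonmark")
print(md.render("# Hello\n\nSome *emphasis* and a [link](https://example.com)."))
```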
codefuse-ai/MFTCoder
High-accuracy, high-efficiency multi-task fine-tuning framework for Code LLMs. This work was accepted at KDD 2024.
ChenghaoMou/text-dedup
All-in-one text de-duplication
apoorvumang/prompt-lookup-decoding
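The repo carries no description; the name refers to prompt lookup decoding, where speculative-decoding draft tokens are taken from n-gram matches already present in the prompt rather than from a separate draft model. A hypothetical sketch of that candidate-generation step (names are illustrative, not the repo's API):

```python
def prompt_lookup_candidates(input_ids: list[int], ngram_size: int = 3,
                             num_draft: int = 10) -> list[int]:
    """Return up to num_draft tokens that followed the most recent n-gram
    earlier in the sequence; the full model then verifies them in one pass."""
    if len(input_ids) <= ngram_size:
        return []
    tail = input_ids[-ngram_size:]
    # Scan earlier occurrences of the trailing n-gram, most recent first.
    for start in range(len(input_ids) - ngram_size - 1, -1, -1):
        if input_ids[start:start + ngram_size] == tail:
            continuation = input_ids[start + ngram_size:start + ngram_size + num_draft]
            if continuation:
                return continuation
    return []
```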
huggingface/datablations
Scaling Data-Constrained Language Models
KernelTuner/kernel_tuner
Kernel Tuner
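Kernel Tuner auto-tunes GPU kernel parameters; a hedged sketch based on its documented tune_kernel interface (the CUDA kernel and tunable values are illustrative):

```python
import numpy as np
from kernel_tuner import tune_kernel

# block_size_x is both substituted into the kernel source and used as the
# thread-block size for each configuration that gets benchmarked.
kernel_string = """
__global__ void vector_add(float *c, const float *a, const float *b, int n) {
    int i = blockIdx.x * block_size_x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}
"""

n = np.int32(1_000_000)
a = np.random.randn(n).astype(np.float32)
b = np.random.randn(n).astype(np.float32)
c = np.zeros_like(a)

tune_params = {"block_size_x": [64, 128, 256, 512]}
results, env = tune_kernel("vector_add", kernel_string, n, [c, a, b, n], tune_params)
```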
IBM/ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
nickrosh/evol-teacher
Open Source WizardCoder Dataset
jackyzha0/tabspace
✍️ A scratchspace for your new Tab page
void-main/FasterTransformer
Transformer related optimization, including BERT, GPT
JPTIZ/libgba-cpp
C++ Library for Game Boy Advance Development
guillaumeBellec/multitask