dandelin's Stars
laurent22/joplin
Joplin - the secure note-taking and to-do app with synchronisation capabilities for Windows, macOS, Linux, Android and iOS.
BerriAI/litellm
Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
karpathy/minbpe
Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.
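The core BPE training loop that minbpe packages can be sketched in a few lines of plain Python (an illustrative sketch only, not minbpe's actual API; the function names here are my own): count adjacent token pairs, then repeatedly merge the most frequent pair into a fresh token id.

```python
# Minimal BPE training sketch: start from raw UTF-8 bytes and
# repeatedly merge the most frequent adjacent pair into a new token id.
from collections import Counter

def get_pair_counts(ids):
    """Count occurrences of each adjacent token-id pair."""
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

# Tokenize to bytes, then perform three merges; new ids start at 256
# since single bytes occupy ids 0-255.
ids = list("aaabdaaabac".encode("utf-8"))
next_id = 256
for _ in range(3):
    pair = get_pair_counts(ids).most_common(1)[0][0]
    ids = merge(ids, pair, next_id)
    next_id += 1
```

On the classic example string "aaabdaaabac", three merges compress the 11-byte input down to 5 tokens, mirroring the textbook BPE walkthrough.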
leptonai/search_with_lepton
Building a quick conversation-based search demo with Lepton AI.
jzhang38/TinyLlama
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
OpenBMB/MiniCPM
MiniCPM-2B: An end-side LLM outperforming Llama2-13B.
FoundationVision/VAR
[GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!
OpenGVLab/Ask-Anything
[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
dvlab-research/MiniGemini
Official implementation for Mini-Gemini
PixArt-alpha/PixArt-alpha
PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
InternLM/InternLM-XComposer
InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) excelling in free-form text-image composition and comprehension.
facebookresearch/schedule_free
Schedule-Free Optimization in PyTorch
lifan0127/ai-research-assistant
Aria is your AI research assistant, powered by GPT large language models.
epfml/landmark-attention
Landmark Attention: Random-Access Infinite Context Length for Transformers
beichenzbc/Long-CLIP
hanatos/vkdt
raw photography workflow that sucks less
Meituan-AutoML/VisionLLaMA
VisionLLaMA: A Unified LLaMA Interface for Vision Tasks
gstoica27/ZipIt
A framework for merging models trained on different tasks, from different initializations, into one multi-task model without any additional training
HazyResearch/aisys-building-blocks
Building blocks for foundation models.
TRI-ML/prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
runpod/runpodctl
🧰 | RunPod CLI for pod management
apple/ml-veclip
The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions"
Unispac/Visual-Adversarial-Examples-Jailbreak-Large-Language-Models
Repository for the AAAI 2024 (Oral) paper "Visual Adversarial Examples Jailbreak Large Language Models"
bit-trade-one/ADUSBCIM-USBCableChecker2
ADUSBCIM
UCSC-VLAA/vllm-safety-benchmark
Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs"
pfnet-research/hyperbolic_wrapped_distribution
naver-ai/chacha-chatbot
korawat-tanwisuth/POUF
JegZheng/CT-pytorch
Repository for conditional transport
Netflix/clove