yguo33's Stars
ollama/ollama
Get up and running with Llama 3.1, Mistral, Gemma 2, and other large language models.
coder/code-server
VS Code in the browser
JushBJJ/Mr.-Ranedeer-AI-Tutor
A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.
meta-llama/llama3
The official Meta Llama 3 GitHub site
facefusion/facefusion
Industry-leading face manipulation platform
ml-explore/mlx
MLX: An array framework for Apple silicon
unslothai/unsloth
Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
OpenBMB/MiniCPM-V
MiniCPM-V 2.6: A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone
PKU-YuanGroup/Open-Sora-Plan
This project aims to reproduce Sora (OpenAI's text-to-video model); we hope the open-source community will contribute to this project.
outlines-dev/outlines
Structured Text Generation
SJTU-IPADS/PowerInfer
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
arcee-ai/mergekit
Tools for merging pretrained large language models.
OpenBMB/AgentVerse
🤖 AgentVerse 🪐 is designed to facilitate the deployment of multiple LLM-based agents in various applications. It primarily provides two frameworks: task solving and simulation.
dvlab-research/MGM
Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"
Alpha-VLLM/LLaMA2-Accessory
An Open-source Toolkit for LLM Development
togethercomputer/MoA
Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models
huggingface/blog
Public repo for HF blog posts
predibase/lorax
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
brevdev/notebooks
Collection of notebook guides created by the Brev.dev team!
SkyworkAI/Skywork
Skywork series models are pre-trained on 3.2 TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sourced the model weights, training data, evaluation data, and evaluation methods.
allenai/open-instruct
open-compass/MixtralKit
A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI
princeton-nlp/SimPO
SimPO: Simple Preference Optimization with a Reference-Free Reward
HIT-SCIR/Chinese-Mixtral-8x7B
Chinese-Mixtral-8x7B (a Chinese-language version of Mixtral-8x7B)
01-ai/Yi-1.5
Yi-1.5 is an upgraded version of Yi, delivering stronger performance in coding, math, reasoning, and instruction-following capability.
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
hpcaitech/SwiftInfer
Efficient AI Inference & Serving
fanqiwan/FuseAI
FuseAI Project
neelsjain/NEFTune
Official repository for NEFTune: Noisy Embeddings Improve Instruction Finetuning
swj0419/detect-pretrain-code-contamination