kangreen0210's Stars
kangreen0210/LIME
Accelerating the development of large multimodal models (LMMs) with lmms-eval
multimodal-art-projection/DailyPaper
KOR-Bench/KOR-Bench
Open-Source-O1/o1_Reasoning_Patterns_Study
Open-Source-O1/Open-O1
kangreen0210/LIME-rebuttal
Wusiwei0410/MMRA
multimodal-art-projection/MAP-NEO
TIGER-AI-Lab/Mantis
Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024)
01-ai/Yi
A series of large language models trained from scratch by developers @01-ai
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
allenai/unified-io-2
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use Colab.
salesforce/BLIP
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
togethercomputer/RedPajama-Data
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
TIGER-AI-Lab/UniIR
Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024)
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
liguodongiot/llm-action
This project shares the technical principles behind large language models along with hands-on experience (LLM engineering and real-world LLM application deployment).
google-research/bert
TensorFlow code and pre-trained models for BERT
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
tencent-ailab/tleague_projpage
Farama-Foundation/MicroRTS-Py
A simple and highly efficient RTS-game-inspired environment for reinforcement learning (formerly Gym-MicroRTS)
vwxyzjn/gym-microrts-paper
The source code for the gym-microrts paper.
wuhuikai/MSC
MSC: A Dataset for Macro-Management in StarCraft II
oxwhirl/smac
SMAC: The StarCraft Multi-Agent Challenge
ClausewitzCPU0/SC2AI
A Chinese-language StarCraft II AI tutorial using the python-sc2/pysc2 API