jay2012-lin's Stars
jiahaoli57/Call-for-Reviewers
This project aims to collect the latest "call for reviewers" links from various top CS/ML/AI conferences/journals
stanleylsx/llms_tool
An LLM training and evaluation tool built on HuggingFace. Supports web UI and terminal inference for various models, parameter-efficient and full-parameter training (pre-training, SFT, RM, PPO, DPO), as well as model merging and quantization.
shibing624/MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical LLMs, implementing continued pre-training (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
lonePatient/awesome-pretrained-chinese-nlp-models
Awesome Pretrained Chinese NLP Models: a curated collection of high-quality Chinese pre-trained models, large models, multimodal models, and large language models.
aceliuchanghong/FAQ_Of_LLM_Interview
LLM algorithm-role interview questions (with answers): common questions and concept explanations, covering "LLM interview questions", "algorithm-role interviews", "frequently asked interview questions", "LLM algorithm interviews", and "LLM application fundamentals".
nju-websoft/DIFT
Finetuning Generative Large Language Models with Discrimination Instructions for Knowledge Graph Completion, ISWC 2024
xiaoman-zhang/KAD
DeepReasoning/aihealth
YangLing0818/VQGraph
[ICLR 2024] VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs
OpenRL-Lab/Wandb_Tutorial
How to use wandb?
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use Colab.
FoundationVision/LlamaGen
Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
dome272/VQGAN-pytorch
PyTorch implementation of VQGAN (Taming Transformers for High-Resolution Image Synthesis) (https://arxiv.org/pdf/2012.09841.pdf)
taokz/BiomedGPT
BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks
OFA-Sys/OFA
Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
naklecha/llama3-from-scratch
llama3 implementation one matrix multiplication at a time
allenai/open-instruct
yuanzhoulvpi2017/zero_nlp
Chinese NLP solutions (LLMs, data, models, training, inference).
hyn2028/llm-cxr
Official code for "LLM-CXR: Instruction-Finetuned LLM for CXR Image Understanding and Generation"
lucidrains/autoregressive-diffusion-pytorch
Implementation of Autoregressive Diffusion in PyTorch
OpenGVLab/Multi-Modality-Arena
Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
xufangzhi/ENVISIONS
A Neural-Symbolic Self-Training Framework
ljwztc/CLIP-Driven-Universal-Model
[ICCV 2023] CLIP-Driven Universal Model; Rank first in MSD Competition.
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
OpenBMB/MiniCPM-V
MiniCPM-V 2.6: A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone
yeerwen/MedCoSS
CVPR 2024 (Highlight)
BradyFU/Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles: Latest Advances on Multimodal Large Language Models
LlamaFamily/Llama-Chinese
Llama Chinese community: the online Llama3 demo and fine-tuned models are now available, with the latest Llama3 learning resources aggregated in real time; all code has been updated for Llama3, aiming to build the best Chinese Llama models, fully open source and commercially usable.
richard-peng-xia/awesome-multimodal-in-medical-imaging
A collection of resources on applications of multi-modal learning in medical imaging.
mlfoundations/open_clip
An open source implementation of CLIP.