LangWY's Stars
synlp/R2-LLM
The official GitHub repository of the AAAI-2024 paper "Bootstrapping Large Language Models for Radiology Report Generation".
ttanida/rgrg
Code for the CVPR paper "Interactive and Explainable Region-guided Radiology Report Generation"
WangRongsheng/XrayGLM
🩺 The first Chinese multimodal medical large model that can read chest X-rays and summarize chest radiographs.
cambridgeltl/visual-med-alpaca
Visual Med-Alpaca is an open-source, multimodal foundation model designed specifically for the biomedical domain, built on LLaMA-7B.
mlii0117/DCL
Official code for "Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation" (CVPR 2023)
jbdel/vilmedic
ViLMedic (Vision-and-Language medical research) is a modular framework for multimodal vision-and-language research in the medical field.
WissingChen/VLCI
Visual-Linguistic Causal Intervention for Radiology Report Generation
LX-doctorAI1/DeltaNet
synlp/R2GenRL
The code for our ACL-2022 paper titled "Reinforced Cross-modal Alignment for Radiology Report Generation"
cuhksz-nlp/R2Gen
cuhksz-nlp/R2GenCMN
Markin-Wang/XProNet
[ECCV2022] The official implementation of Cross-modal Prototype Driven Network for Radiology Report Generation