Pinned Repositories
LoRA-ViT
Low-rank adaptation for Vision Transformer
DoctorGLM
A Chinese medical consultation model based on ChatGLM-6B
Awesome-CLIP-in-Medical-Imaging
A Survey on CLIP in Medical Imaging
Awesome-Healthcare-Foundation-Models
ChatCAD
[COMMSENG'24, TMI'24] Interactive Computer-Aided Diagnosis using LLMs
DenseCLIP
JetsonMonitor
A simple monitor based on the Jetson Nano 2GB
LLM_CMP
Source code for "Evaluating Large Language Models for Radiology Natural Language Processing"
McGIP
[AAAI'24] Mining Gaze for Contrastive Learning toward Computer-assisted Diagnosis
zhaozh10's Repositories
zhaozh10/Awesome-CLIP-in-Medical-Imaging
A Survey on CLIP in Medical Imaging
zhaozh10/ChatCAD
[COMMSENG'24, TMI'24] Interactive Computer-Aided Diagnosis using LLMs
zhaozh10/McGIP
[AAAI'24] Mining Gaze for Contrastive Learning toward Computer-assisted Diagnosis
zhaozh10/DenseCLIP
zhaozh10/LLM_CMP
Source code for "Evaluating Large Language Models for Radiology Natural Language Processing"
zhaozh10/DoctorGLM
A Chinese medical consultation model based on ChatGLM-6B
zhaozh10/Awesome-Healthcare-Foundation-Models
zhaozh10/awesome-multimodal-in-medical-imaging
A collection of resources on applications of multi-modal learning in medical imaging.
zhaozh10/med-flamingo
zhaozh10/attmask
What to Hide from Your Students: Attention-Guided Masked Image Modeling
zhaozh10/continuum
A clean and simple data loading library for Continual Learning
zhaozh10/DeepMIM
zhaozh10/DropPos
[NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions
zhaozh10/github-readme-stats
:zap: Dynamically generated stats for your GitHub READMEs
zhaozh10/GroundingDINO
The official implementation of "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
zhaozh10/HPM
Hard Patches Mining for Masked Image Modeling
zhaozh10/LAVIS
LAVIS - A One-stop Library for Language-Vision Intelligence
zhaozh10/LocalMIM
Masked Image Modeling with Local Multi-Scale Reconstruction
zhaozh10/LoRA-ViT
Low-rank adaptation for Vision Transformer
zhaozh10/mae
PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377
zhaozh10/maskalign
[CVPR 2023] Official repository for paper "Stare at What You See: Masked Image Modeling without Reconstruction"
zhaozh10/MedFM
Official Repository of NeurIPS 2023 - MedFM Challenge
zhaozh10/mmdetection
OpenMMLab Detection Toolbox and Benchmark
zhaozh10/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
zhaozh10/PMC-VQA
PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs over 149k images, covering various modalities and diseases.
zhaozh10/PointBLIP
PointBLIP: A Point Cloud Multi-modal model Embracing Diverse Data without Reliance on Image Domain
zhaozh10/SemAIM
Official implementation of "Semantics-Consistent Feature Search for Self-Supervised Visual Representation Learning" (AAAI 2024)
zhaozh10/SemMAE
SemMAE: Semantic-guided masking for learning masked autoencoders
zhaozh10/Tutorial-on-PhD-Application
Tutorial on PhD Application
zhaozh10/zhaozh10.github.io