[2024.7.14] Our AI Platform MedPodGPT is publicly available. It is an online platform for deploying our latest multimodal foundation models for medical and clinical applications. Please try it out if you are interested!
[2024.7.12] Our preprint is available online! Please check it out!
[2024.7.12] We are releasing a new benchmark encompassing the latest USMLE Step 1, Step 2, Step 3, and Ethics questions to further advance the field. Check our database here.
[2024.7.11] We open-sourced the source code of our MedPodGPT: medical LLMs in your pocket and benchmarking for multilingual medical LLMs.
- Installation
- Quick Start
- Performance Evaluation
- Dataset Description
- Benchmarks and Results
- Real-world Deployment
- Automatic Speech Recognition
- Dataset Builder
- Upload and Download Models
- Structure of the Code
- Citation
- Contact
- Contribution
- Acknowledgement
pip install -r requirements.txt
For lightweight models (2B, 7B, and 8B), we optimize the entire model. Please check and set up the hyper-parameters in config_small.yml.
python main_small.py
For larger models (>8B), we optimize a Low-Rank Adapter (LoRA). Please check and set up the hyper-parameters in config_large.yml.
python main_large.py
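For reference, the snippet below is a minimal sketch of how a LoRA adapter can be attached with the Hugging Face PEFT library; the model name and hyper-parameter values are illustrative placeholders, not the settings in config_large.yml.

```python
# Minimal LoRA sketch with Hugging Face PEFT. All values below are
# illustrative; the actual hyper-parameters live in config_large.yml.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```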
We also provide support for quantizing larger models, e.g., the LLaMA 3 70B model, using the GPTQ algorithm and then optimizing the LoRA. After quantization, the larger models can be deployed on consumer GPUs.
We can directly use the Hugging Face transformers package to conduct quantization.
python quantization_HF.py --repo "meta-llama/Meta-Llama-3-70B-Instruct" --bits 4 --group_size 128
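This corresponds roughly to the standard transformers GPTQ integration; the sketch below is a rough outline of that flow under stated assumptions (the calibration dataset and device mapping are placeholders, not necessarily what quantization_HF.py uses internally).

```python
# Sketch of GPTQ quantization with the standard transformers integration.
# The calibration dataset ("c4") and device mapping are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

repo = "meta-llama/Meta-Llama-3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
gptq_config = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tokenizer)

# Weights are quantized while loading; calibration requires GPU memory.
model = AutoModelForCausalLM.from_pretrained(
    repo, quantization_config=gptq_config, device_map="auto"
)
model.save_pretrained("./gptq_model")
```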
Alternatively, we provide a quantization script based on the Python AutoGPTQ package.
python quantization.py "meta-llama/Meta-Llama-3-70B-Instruct" "./gptq_model" "medical" --bits 4 --group_size 128 --desc_act 1 --dtype float16 --seqlen 2048 --damp 0.01
Then, we need to upload the model to Hugging Face,
python upload_quantized_model.py --repo "shuyuej/MedLLaMA3-70B-BASE-MODEL-QUANT" --folder_path "./gptq_model"
Lastly, we optimize the LoRA module,
python main_quantization.py
All inference is conducted using the vLLM engine. We use inference_pretrain.py and inference_single_model.py for larger models (>8B) and inference_sequential.py for smaller models (2B/7B/8B). Please check here for more information.
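As a rough illustration, a vLLM generation call looks like the sketch below; the model path and sampling values are placeholders (the repetition_penalty mirrors the note that follows).

```python
# Minimal vLLM inference sketch; the model path and sampling values
# are placeholders, not the project's actual configuration.
from vllm import LLM, SamplingParams

llm = LLM(model="shuyuej/MedLLaMA3-70B-BASE-MODEL-QUANT", tensor_parallel_size=1)
params = SamplingParams(temperature=0.0, max_tokens=256, repetition_penalty=1.2)
outputs = llm.generate(["Directly answer the best option: ..."], params)
for output in outputs:
    print(output.outputs[0].text)
```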
Note
Mistral 7B on Hindi MMLU Benchmarks:
Please uncomment this line.
To address the issue of repeated content in some responses, we applied a repetition_penalty during inference.
We simply use `Directly answer the best option:` instead of `Answer:` to better guide LLMs to generate the best option and to make the best option easier to extract from the responses. Please modify these lines if you want to try other prompts, and see the extraction sketch below.
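To illustrate why this prompt makes extraction easier, a hypothetical helper along these lines could pull the option letter out of a response (the project's actual extraction logic lives in utils/answer_utils.py and is more thorough):

```python
import re

def extract_best_option(response: str):
    """Hypothetical sketch: return the first standalone option letter (A-E)
    found in a model response, or None. See utils/answer_utils.py for the
    project's actual extraction logic."""
    match = re.search(r"\b([A-E])\b", response)
    return match.group(1) if match else None

print(extract_best_option("The best option: B. Insulin is produced..."))  # -> "B"
```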
Note
LLaMA 3 8B on Hindi MMLU Benchmarks:
Please modify these lines.
Because most responses are in mixed English-Hindi or English, we used "कृपया प्रश्न का उत्तर हिंदी में दें और सीधे सबसे अच्छे विकल्प के साथ जवाब दें:" (Please answer the question in Hindi and directly answer the best option:) to guide the model.
english_prompt = "Directly answer the best option:"
english_prompt_pubmedqa = "Directly answer yes/no/maybe:"
hindi_prompt = "सीधे सबसे अच्छे विकल्प के साथ जवाब दें:"
french_prompt = "Rรฉpondez directement avec la meilleure option:"
spanish_prompt = "Responde directamente con la mejor opciรณn:"
chinese_prompt = "็ดๆฅๅ็ญๆไผ้้กน:"
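As a hypothetical illustration (the benchmark scripts may format questions differently), one of these suffixes would be appended to a multiple-choice question like so:

```python
# Hypothetical prompt assembly; the exact formatting used by the
# benchmark scripts may differ.
question = "Which organ produces insulin?"
options = {"A": "Liver", "B": "Pancreas", "C": "Kidney", "D": "Spleen"}
formatted_options = "\n".join(f"{k}. {v}" for k, v in options.items())
prompt = f"{question}\n{formatted_options}\n{english_prompt}"
```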
Important
Please note that if you want to conduct model inference using multiple GPUs, the GPU memory cannot be successfully released between runs. Please modify these lines and make use of this .sh file.
Sequentially evaluate the performance of multiple checkpoints (models).
Please note that we use --eval_pretrain to indicate whether to evaluate the original pre-trained model.
python inference_sequential.py --eval_pretrain True --id 35166 52749 70332 87915
Sequentially evaluate the performance of the original pre-trained model and all the checkpoints.
Special Notice: Please change the checkpoint IDs and CUDA_VISIBLE_DEVICES in the inference_large.sh file.
sh inference_large.sh
Only evaluate the performance of the original pre-trained model.
python inference_pretrain.py
Only evaluate the performance of a single checkpoint (model).
Please note that --id is the checkpoint ID.
python inference_single_model.py --id 35166
We also offer support for running OpenAI ChatGPT inference via its API. Please enter your OpenAI API Key here.
Warning
Please note that the OpenAI ChatGPT API is extremely expensive. Please only use it if you have a budget for it!
python inference_chatgpt.py
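For reference, a ChatGPT call with the official openai Python client (v1 interface) looks roughly like the sketch below; the model name and message layout are assumptions, not necessarily what inference_chatgpt.py uses.

```python
# Sketch of a ChatGPT API call with the official openai client (v1).
# The model name and message content are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # replace with your OpenAI API Key
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Directly answer the best option: ..."}],
)
print(response.choices[0].message.content)
```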
For now, we have released a demo dataset so you can run the code. Please follow our instructions to transcribe your own podcasts and build your own dataset.
The podcast data used for the continual pre-training of MedPodGPT:
We utilized a comprehensive set of medical benchmarks from the most widely spoken languages in the world, including English, Mandarin, French, Spanish, and Hindi.
| Language | Dataset | # test examples | # of choices | Link | Ref |
|---|---|---|---|---|---|
| English | MedExpQA | 125 | 5 | Link | Paper |
| | MedQA | 1273 | 4 | Link | Paper |
| | MedMCQA | 4183 | 4 | Link | Paper |
| | PubMedQA | 1000 | 3 | Link | Paper |
| | MMLU - Anatomy | 135 | 4 | Link | Paper |
| | MMLU - Clinical Knowledge | 265 | 4 | Link | Paper |
| | MMLU - College Biology | 144 | 4 | Link | Paper |
| | MMLU - College Medicine | 173 | 4 | Link | Paper |
| | MMLU - Medical Genetics | 100 | 4 | Link | Paper |
| | MMLU - Professional Medicine | 272 | 4 | Link | Paper |
| French | MedExpQA | 125 | 5 | Link | Paper |
| | MedMCQA | 622 | 5 | Link | Paper |
| | MMLU - Anatomy | 135 | 4 | Link | Paper |
| | MMLU - Clinical Knowledge | 265 | 4 | Link | Paper |
| | MMLU - College Biology | 144 | 4 | Link | Paper |
| | MMLU - College Medicine | 173 | 4 | Link | Paper |
| | MMLU - Medical Genetics | 100 | 4 | Link | Paper |
| | MMLU - Professional Medicine | 272 | 4 | Link | Paper |
| Spanish | HEAD-QA | 2742 | 4 | Link | Paper |
| | MedExpQA | 125 | 5 | Link | Paper |
| | MMLU - Anatomy | 135 | 4 | Link | Paper |
| | MMLU - Clinical Knowledge | 265 | 4 | Link | Paper |
| | MMLU - College Biology | 144 | 4 | Link | Paper |
| | MMLU - College Medicine | 173 | 4 | Link | Paper |
| | MMLU - Medical Genetics | 100 | 4 | Link | Paper |
| | MMLU - Professional Medicine | 272 | 4 | Link | Paper |
| Chinese | MedQA-MCMLE | 3426 | 4 | Link | Paper |
| | CMMLU - Anatomy | 148 | 4 | Link | Paper |
| | CMMLU - Clinical Knowledge | 237 | 4 | Link | Paper |
| | CMMLU - College Medicine | 273 | 4 | Link | Paper |
| | CMMLU - Medical Genetics | 176 | 4 | Link | Paper |
| | CMMLU - Traditional Chinese Medicine | 185 | 4 | Link | Paper |
| | CMMLU - Virology | 169 | 4 | Link | Paper |
| Hindi | MMLU - Anatomy | 135 | 4 | Link | Paper |
| | MMLU - Clinical Knowledge | 265 | 4 | Link | Paper |
| | MMLU - College Biology | 144 | 4 | Link | Paper |
| | MMLU - College Medicine | 173 | 4 | Link | Paper |
| | MMLU - Medical Genetics | 100 | 4 | Link | Paper |
| | MMLU - Professional Medicine | 272 | 4 | Link | Paper |
For real-world deployment, please refer to the vLLM Distributed Inference and Serving and OpenAI Compatible Server.
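For example, depending on your vLLM version, an OpenAI-compatible server can be launched with vLLM's built-in entrypoint; the model name and tensor-parallel size below are placeholders.

python -m vllm.entrypoints.openai.api_server --model "shuyuej/MedLLaMA3-70B-BASE-MODEL-QUANT" --tensor-parallel-size 4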
In the scripts folder, we provide an Automatic Speech Recognition (ASR) service.
python audio2text.py
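As a minimal sketch, transcription with the open-source whisper package looks like the snippet below; the model size and file name are placeholders, and audio2text.py may use a different ASR backend or settings.

```python
# Minimal ASR sketch with the open-source whisper package; the model
# size and file name are placeholders.
import whisper

model = whisper.load_model("large-v2")
result = model.transcribe("podcast_episode.mp3")
print(result["text"])
```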
We used the following scripts to pre-process our transcripts and generate the training dataset. Please check these lines for support of different languages.
python database_builder.py
python merge_database.py
In the scripts folder, we offer support for both uploading and downloading models.
To upload your checkpoints to a Hugging Face model repo,
python upload_model.py --repo "shuyuej/DrGemma2B" --id 35166 52749 70332 87915
To download your model or files from a Hugging Face repo,
python download_model.py --repo "shuyuej/DrGemma2B" --repo_type "model" --save_dir "./save_folder"
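Equivalently, these operations can be done with the huggingface_hub client; the sketch below mirrors the two commands above.

```python
# Upload a local folder to, and download a snapshot from, the Hugging
# Face Hub; repo names mirror the commands above.
from huggingface_hub import HfApi, snapshot_download

api = HfApi()
api.upload_folder(folder_path="./save_folder", repo_id="shuyuej/DrGemma2B")
snapshot_download(repo_id="shuyuej/DrGemma2B", repo_type="model",
                  local_dir="./save_folder")
```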
At the root of the project, you will see:
โโโ requirements.txt
โโโ main_small.py
โโโ main_large.py
โโโ main_quantization.py
โโโ config_small.yml
โโโ config_large.yml
โโโ config_quantization.yml
โโโ config_chatgpt.yml
โโโ lib
โ โโโ data_manager.py
โ โโโ model_loader_small.py
โ โโโ model_loader_large.py
โ โโโ model_loader_quantization.py
โ โโโ evaluation_small.py
โ โโโ evaluation_large.py
โ โโโ evaluation_chatgpt.py
โโโ inference
โ โโโ inference_large.sh
โ โโโ inference_chatgpt.py
โ โโโ inference_pretrain.py
โ โโโ inference_sequential.py
โ โโโ inference_single_model.py
โโโ download_files
โ โโโ download_model_from_hf.py
โ โโโ download_model_to_local.py
โโโ quantization
โ โโโ quantization.py
โ โโโ upload_quantized_model.py
โโโ scripts
โ โโโ audio2text.py
โ โโโ download_model.py
โ โโโ upload_model.py
โ โโโ database_builder.py
โ โโโ merge_database.py
โโโ benchmark
โ โโโ chinese_cmmlu
โ โโโ chinese_mcmle
โ โโโ english_medexpqa
โ โโโ english_medmcqa
โ โโโ english_medqa
โ โโโ english_mmlu
โ โโโ english_pubmedqa
โ โโโ english_usmle
โ โโโ french_medexpqa
โ โโโ french_medmcqa
โ โโโ french_mmlu
โ โโโ hindi_mmlu
โ โโโ spanish_headqa
โ โโโ spanish_medexpqa
โ โโโ spanish_mmlu
โโโ utils
โโโ answer_utils.py
โโโ benchmark_utils.py
โโโ eval_chatgpt_utils.py
โโโ eval_large_utils.py
โโโ eval_small_utils.py
โโโ test_extraction_chinese.py
โโโ test_extraction_english.py
โโโ test_extraction_french.py
โโโ test_extraction_hindi.py
โโโ test_extraction_spanish.py
โโโ utils.py
If you find our work useful in your research, please consider citing it in your publications. We provide a BibTeX entry below.
@article{Jia2024medpodgpt,
author = {Jia, Shuyue and Bit, Subhrangshu and Searls, Edward and Claus, Lindsey and Fan, Pengrui and Jasodanand, Varuna H. and Lauber, Meagan V. and Veerapaneni, Divya and Wang, William M. and Au, Rhoda and Kolachalama, Vijaya B.},
title = {{MedPodGPT}: A multilingual audio-augmented large language model for medical research and education},
elocation-id = {2024.07.11.24310304},
year = {2024},
doi = {10.1101/2024.07.11.24310304},
publisher = {Cold Spring Harbor Laboratory Press},
abstract = {The proliferation of medical podcasts has generated an extensive repository of audio content, rich in specialized terminology, diverse medical topics, and expert dialogues. Here we introduce a computational framework designed to enhance large language models (LLMs) by leveraging the informational content of publicly accessible medical podcast data. This dataset, comprising over 4,300 hours of audio content, was transcribed to generate over 39 million text tokens. Our model, MedPodGPT, integrates the varied dialogue found in medical podcasts to improve understanding of natural language nuances, cultural contexts, and medical knowledge. Evaluated across multiple benchmarks, MedPodGPT demonstrated an average improvement of 2.31\% over standard open-source benchmarks and showcased an improvement of 2.58\% in its zero-shot multilingual transfer ability, effectively generalizing to different linguistic contexts. By harnessing the untapped potential of podcast content, MedPodGPT advances natural language processing, offering enhanced capabilities for various applications in medical research and education.},
URL = {https://www.medrxiv.org/content/early/2024/07/12/2024.07.11.24310304},
eprint = {https://www.medrxiv.org/content/early/2024/07/12/2024.07.11.24310304.full.pdf},
journal = {medRxiv}
}
Core Contributor and Maintainer:
Database Contributor and Maintainer:
If you have any questions, please drop us an email at brucejia@bu.edu, sbit@bu.edu, and nsearls@bu.edu.
We always welcome contributions to help make the MedPodGPT library better. If you would like to contribute, please submit a pull request.
The MedPodGPT Library is created and maintained by the Kolachalama Laboratory.