master_thesis

Timeline of the thesis:

  1. Tested Llama 2 on 0-shot text generation for our task; for comparison we also tested another LLM, T5 (question_generation_no_finetune). A sketch of the 0-shot setup is shown after this list.
  2. Finetuned Llama 2 on the dataset and repeated the experiment (question_generation_finetune). A finetuning sketch also follows the list.
  3. Tested and finetuned Llama 2 on the trigger detection task (triggers_detection_finetune)
  4. Prepared the auxiliary database (auxiliary_data_prep)
  5. Processed the auxiliary data and the EHRs to generate the final questions (ehr_processing_and_final_question_generation)
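
A minimal sketch of the 0-shot setup from step 1, assuming the Hugging Face chat checkpoint of Llama 2 is used via the `transformers` pipeline; the prompt below is illustrative only, not the exact prompt or data from the thesis.

```python
from transformers import pipeline

# Gated model: requires an approved Hugging Face access token for Llama 2.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",
)

# Illustrative 0-shot prompt: generate one question from a short clinical note.
prompt = (
    "Read the following clinical note and write one follow-up question "
    "a physician might ask the patient.\n\n"
    "Note: The patient reports chest pain on exertion that resolves with rest.\n\n"
    "Question:"
)

out = generator(prompt, max_new_tokens=64, do_sample=False)
print(out[0]["generated_text"])
```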
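
For step 2, a hedged sketch of parameter-efficient (LoRA) finetuning of Llama 2 with `peft` and the `transformers` Trainer; the dataset file `train.jsonl`, its `text` field, and all hyperparameters are placeholder assumptions, not the thesis configuration.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"        # gated model, requires HF access
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token      # Llama 2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Wrap the base model with LoRA adapters so only a small set of weights is trained.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical training file: one example per line with a single "text" field.
data = load_dataset("json", data_files="train.jsonl", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-qg-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           fp16=True,
                           logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama2-qg-lora")        # saves only the adapter weights
```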