XAI-Key-Point-Analysis

This is joint work by two of my teammates and me. The main content is based on the KPA Shared Task 2021: we first developed a Siamese neural network (SNN) to improve on the work published for the shared task, and then introduced explainable AI methods to explain its predictions.


Key-Point-Analysis-and-Explanations-for-Quantitative-Text-Analysis

This work was done by my two teammates (Daniel Schroter and Hannes Schroter) and me from Oct 2021 to Feb 2022. Please email me if you would like to use any of the code here.

The following notebooks in the repository contain the final code of our work (minimal sketches of the main techniques follow the list):

  1. FineTuningSentenceBert: unsupervised fine-tuning of Sentence-BERT with TSDAE, SimCSE, and CT
  2. SiameseNNContrastiveLossclean: the development of our Siamese models trained with contrastive loss
  3. Explainability_LeaveOneOutLIMEShap: the Leave-One-Out, LIME, and SHAP explanations and their visualizations
  4. bertviz_visualization_of_BERT_internals: visualizations of the attention layers in the transformer models
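
For orientation, here is a minimal sketch of the TSDAE-style unsupervised fine-tuning covered in FineTuningSentenceBert, using the sentence-transformers API; the base checkpoint, example sentences, and hyperparameters are placeholders, not necessarily the ones we used:

```python
# Minimal TSDAE fine-tuning sketch with sentence-transformers (assumed base model and hyperparameters).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, datasets, losses

sentences = ["Homeschooling should be banned.", "Children need social interaction."]  # unlabelled sentences

word_embedding_model = models.Transformer("bert-base-uncased")
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# TSDAE: reconstruct the original sentence from a noised (word-deleted) version of it.
train_dataset = datasets.DenoisingAutoEncoderDataset(sentences)
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path="bert-base-uncased",
                                             tie_encoder_decoder=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="constantlr",
    optimizer_params={"lr": 3e-5},
    show_progress_bar=True,
)
model.save("output/tsdae-finetuned")
```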
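
Similarly, a minimal sketch of a Siamese setup trained with contrastive loss, as in SiameseNNContrastiveLossclean, assuming (argument, key point) pairs labelled 1 for a match and 0 otherwise; the base model and hyperparameters are illustrative only:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer("output/tsdae-finetuned")  # or any Sentence-BERT checkpoint

# (argument, key point) pairs; label 1 = the key point matches the argument, 0 = it does not.
train_examples = [
    InputExample(texts=["Children who are homeschooled rarely meet peers.",
                        "Homeschooling lacks social interaction."], label=1),
    InputExample(texts=["Children who are homeschooled rarely meet peers.",
                        "Homeschooling is too expensive."], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.ContrastiveLoss(model)  # pulls matching pairs together, pushes others apart

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)

# At inference, match each argument to its most similar key point via cosine similarity.
arg_emb = model.encode("Children who are homeschooled rarely meet peers.", convert_to_tensor=True)
kp_emb = model.encode("Homeschooling lacks social interaction.", convert_to_tensor=True)
print(util.cos_sim(arg_emb, kp_emb))
```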
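
For the explainability notebook, a minimal LIME sketch that explains the matching score of one argument against a fixed key point; the model path, example texts, and the cosine-to-probability rescaling are assumptions made for illustration:

```python
import numpy as np
from lime.lime_text import LimeTextExplainer
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("output/siamese-kpa")      # hypothetical path to a trained Siamese model
key_point = "Homeschooling lacks social interaction."  # fixed key point to explain against
argument = "Children who are homeschooled rarely meet peers their own age."

def predict_proba(texts):
    # Score each perturbed argument against the fixed key point; map cosine similarity to [0, 1].
    arg_emb = model.encode(list(texts), convert_to_tensor=True)
    kp_emb = model.encode([key_point], convert_to_tensor=True)
    sims = util.cos_sim(arg_emb, kp_emb).cpu().numpy().reshape(-1)
    probs = (sims + 1) / 2
    return np.column_stack([1 - probs, probs])

explainer = LimeTextExplainer(class_names=["no match", "match"])
explanation = explainer.explain_instance(argument, predict_proba, num_features=8, num_samples=500)
explanation.show_in_notebook()  # highlights which words drive the matching score
```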
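
And a minimal sketch of the bertviz head view used in bertviz_visualization_of_BERT_internals to inspect the attention layers; the checkpoint and example sentence are placeholders:

```python
from transformers import AutoTokenizer, AutoModel
from bertviz import head_view

model_name = "bert-base-uncased"  # any BERT-style checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

inputs = tokenizer.encode("Homeschooling lacks social interaction.", return_tensors="pt")
outputs = model(inputs)
attention = outputs.attentions                      # one attention tensor per layer
tokens = tokenizer.convert_ids_to_tokens(inputs[0])
head_view(attention, tokens)                        # interactive attention visualization (run in a notebook)
```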

To get the code running, download the original data from the KPA Shared Task 2021 and store all files (dev, train, test) in one data folder.
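
As a rough sketch, assuming the folder is named data/ and the standard KPA Shared Task 2021 file and column names (adjust if your copy differs), the training split can be loaded and joined into (argument, key point, label) pairs like this:

```python
import pandas as pd

DATA_DIR = "data"  # folder containing the downloaded dev / train / test files

# Assumed standard KPA Shared Task 2021 file names; adjust if your copy differs.
arguments = pd.read_csv(f"{DATA_DIR}/arguments_train.csv")    # arg_id, argument, topic, stance
key_points = pd.read_csv(f"{DATA_DIR}/key_points_train.csv")  # key_point_id, key_point, topic, stance
labels = pd.read_csv(f"{DATA_DIR}/labels_train.csv")          # arg_id, key_point_id, label (0/1)

# Join into (argument, key point, label) pairs for training the Siamese model.
pairs = (labels
         .merge(arguments[["arg_id", "argument"]], on="arg_id")
         .merge(key_points[["key_point_id", "key_point"]], on="key_point_id"))
print(pairs.head())
```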

For further experiments, see Success_train_RoBERTa and Transformers_training_on_STS_or_Args30k_dataset_and_then_on_main_dataset.

  • Final Report: key-point-analysis-and-explanations-for-quantitative-text-analysis.pdf
  • Final Presentation: Final_presentation.pptx