************ README ************

Repository for the paper "Toward Inclusive Online Environments: Counterfactual-Inspired XAI for Detecting and Interpreting Hateful and Offensive Tweets", accepted for publication at the 1st World Conference on Explainable Artificial Intelligence 2023, Lisbon, Portugal.

1. The data used in the paper to obtain the reported results is also uploaded, with due credit to the "Hate Speech and Offensive Language Dataset".
2. The trained models used to obtain the evaluation results reported in the paper are in the "trained_models" folder.
3. The code for data analysis is in the "Data Analysis and Exploration" subfolder of the "src" folder.
4. The code used to train the models is in the "model_training" subfolder of the "src" folder.
5. The "src" folder also contains the "results and explainability" subfolder. All results reported in the paper can be reproduced from there, using the pretrained models already uploaded in the repository (see 2).
6. The directory paths may need to be adjusted to the user's local setup for the code to run correctly (see the sketch after this list).
7. The AutoML notebook requires a long runtime.
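A minimal sketch of how the directory paths (item 6) and the dataset load (item 1) might look in the notebooks, assuming the data is a CSV in the layout of the public "Hate Speech and Offensive Language Dataset" release; the base directory, file name, and data subfolder below are illustrative assumptions, not paths taken from this repository:

```python
import os
import pandas as pd

# Assumption: point BASE_DIR at wherever this repository is cloned locally
# (item 6 above); "labeled_data.csv" is the file name used by the public
# "Hate Speech and Offensive Language Dataset" release and may differ here.
BASE_DIR = "/path/to/XAI-Counterfactual-Hate-Speech"
DATA_PATH = os.path.join(BASE_DIR, "data", "labeled_data.csv")

# In the public release, the "class" column labels each tweet as
# 0 = hate speech, 1 = offensive language, 2 = neither.
df = pd.read_csv(DATA_PATH)
print(df[["class", "tweet"]].head())
```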