Pinned Repositories
ancient-language-models
CoCo-Ex
CoCo-Ex extracts meaningful concepts from natural language texts and maps them to conjunct concept nodes in ConceptNet, making maximal use of the relational information stored in the ConceptNet knowledge graph.
COINS
Code accompanying our ACL 2021 paper "COINS: Dynamically Generating COntextualized Inference Rules for Narrative Story Completion". Do not hesitate to open an issue if you run into any trouble!
discourse-aware-semantic-self-attention
Repository for code and data from the EMNLP-IJCNLP 2019 paper "Discourse-aware Semantic Self-Attention for Narrative Reading Comprehension"
GraphLanguageModels
Code for our paper "Graph Language Models"
LMs4Implicit-Knowledge-Generation
Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge to generate implicit knowledge statements between two sentences, by (i) fine-tuning the models on corpora enriched with implicit information and (ii) constraining the models with key concepts and commonsense knowledge paths connecting them.
MM-SHAP
This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks"
SRL-S2S
Encoder-Decoder model for Semantic Role Labeling
VALSE
Data repository for the VALSE benchmark.
xsrl_mbert_aligner
X-SRL dataset, including code for the SRL annotation projection tool and an out-of-the-box word alignment tool based on Multilingual BERT embeddings.
Heidelberg-NLP's Repositories
Heidelberg-NLP/GraphLanguageModels
Code for our paper "Graph Language Models"
Heidelberg-NLP/CoCo-Ex
CoCo-Ex extracts meaningful concepts from natural language texts and maps them to conjunct concept nodes in ConceptNet, making maximal use of the relational information stored in the ConceptNet knowledge graph.
Heidelberg-NLP/VALSE
Data repository for the VALSE benchmark.
Heidelberg-NLP/ancient-language-models
Heidelberg-NLP/MM-SHAP
This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks"
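The MM-SHAP metric is built on Shapley values, which attribute a model's prediction to its input tokens; the textual (or visual) contribution is then the share of total attribution mass falling on that modality's tokens. A minimal self-contained sketch of that idea, using exact Shapley enumeration over a toy value function (the token names `t1`, `t2`, `i1` and the scoring function are illustrative stand-ins, not taken from the repository):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    """Exact Shapley values for a small player set by enumerating coalitions."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                s = frozenset(subset)
                phi[p] += weight * (value_fn(s | {p}) - value_fn(s))
    return phi

# Toy "model score" over subsets of two text tokens (t1, t2) and one image
# patch (i1); in MM-SHAP this role is played by the model's output logit.
def value_fn(coalition):
    score = 0.0
    if "t1" in coalition:
        score += 0.5
    if "i1" in coalition:
        score += 0.3
    if "t1" in coalition and "t2" in coalition:
        score += 0.2  # interaction between the two text tokens
    return score

phi = shapley_values(["t1", "t2", "i1"], value_fn)
# Textual share in the MM-SHAP spirit: text attribution mass over total mass.
text_share = (abs(phi["t1"]) + abs(phi["t2"])) / sum(abs(v) for v in phi.values())
print(phi, text_share)
```

By the efficiency property, the Shapley values sum to the full-coalition score; the modality share then reads off how much of that score the text tokens account for. The real metric approximates Shapley values by sampling, since exact enumeration is exponential in the number of tokens.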
Heidelberg-NLP/COINS
Code accompanying our ACL 2021 paper "COINS: Dynamically Generating COntextualized Inference Rules for Narrative Story Completion". Do not hesitate to open an issue if you run into any trouble!
Heidelberg-NLP/discourse-aware-semantic-self-attention
Repository for code and data from the EMNLP-IJCNLP 2019 paper "Discourse-aware Semantic Self-Attention for Narrative Reading Comprehension"
Heidelberg-NLP/LMs4Implicit-Knowledge-Generation
Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge to generate implicit knowledge statements between two sentences, by (i) fine-tuning the models on corpora enriched with implicit information and (ii) constraining the models with key concepts and commonsense knowledge paths connecting them.
Heidelberg-NLP/SRL-S2S
Encoder-Decoder model for Semantic Role Labeling
Heidelberg-NLP/xsrl_mbert_aligner
X-SRL dataset, including code for the SRL annotation projection tool and an out-of-the-box word alignment tool based on Multilingual BERT embeddings.
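Embedding-based word alignment of the kind this tool performs typically links a source word to the target word whose contextual embedding is most similar, keeping only mutually best matches. A minimal sketch of that mutual-argmax step, with toy vectors standing in for Multilingual BERT outputs (the function name, threshold, and vectors are illustrative, not the repository's API):

```python
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def mutual_argmax_align(src_vecs, tgt_vecs, threshold=0.5):
    """Link source word i to target word j when each is the other's most
    similar word (mutual argmax) and the similarity clears a threshold."""
    sim = [[cosine(u, v) for v in tgt_vecs] for u in src_vecs]
    links = []
    for i, row in enumerate(sim):
        j = max(range(len(row)), key=row.__getitem__)
        best_i = max(range(len(sim)), key=lambda k: sim[k][j])
        if best_i == i and sim[i][j] >= threshold:
            links.append((i, j))
    return links

# Toy reordered sentence pair: src word 0 matches tgt word 1 and vice versa.
src = [(1.0, 0.1), (0.1, 1.0)]
tgt = [(0.1, 1.0), (1.0, 0.1)]
print(mutual_argmax_align(src, tgt))
```

The mutual-argmax filter is what keeps such aligners precise across word-order differences: a link survives only if it is the best choice from both sides, which discards one-directional near-misses.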
Heidelberg-NLP/CC-SHAP
Code for "On Measuring Faithfulness of Natural Language Explanations"
Heidelberg-NLP/CCKG
Repository to create CCKGs from the paper "Similarity-weighted Construction of Contextualized Commonsense Knowledge Graphs for Knowledge-intense Argumentation Tasks"
Heidelberg-NLP/MHKA
Code accompanying our EMNLP 2020 paper "Social Commonsense Reasoning with Multi-Head Knowledge Attention". Do not hesitate to open an issue if you run into any trouble!
Heidelberg-NLP/amr-metric-suite
This project collects methods that enhance the comparison between AMR graphs.
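AMR comparison metrics such as Smatch score two graphs by decomposing each into (node, relation, node) triples and computing F1 over the matched triples. A minimal sketch of that scoring step, assuming variable names have already been mapped between the graphs (real Smatch additionally searches for the variable mapping that maximizes the match; the toy triples below are illustrative):

```python
def triple_f1(pred_triples, gold_triples):
    """Precision, recall, and F1 over AMR triples under a fixed
    variable mapping; the mapping search of full Smatch is omitted."""
    pred, gold = set(pred_triples), set(gold_triples)
    matched = len(pred & gold)
    p = matched / len(pred) if pred else 0.0
    r = matched / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy AMR for "the boy wants to go", as instance and role triples.
gold = [("w", "instance", "want-01"), ("b", "instance", "boy"),
        ("g", "instance", "go-02"), ("w", "ARG0", "b"),
        ("w", "ARG1", "g"), ("g", "ARG0", "b")]
pred = gold[:-1] + [("g", "ARG1", "b")]  # one mislabeled role edge
print(triple_f1(pred, gold))
```

Enhanced comparison methods of the kind this suite collects refine this core idea, e.g. by weighting triples differently or by scoring graded (rather than exact) triple matches.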
Heidelberg-NLP/CO-NNECT
This repository contains our path generation framework Co-NNECT, which combines two models for establishing knowledge relations and paths between concepts from sentences, as a form of making implicit knowledge explicit: COREC-LM (COmmonsense knowledge RElation Classification using Language Models), a relation classification system we developed for classifying commonsense knowledge relations, and COMET, a target prediction system developed by Bosselut et al. (2019).
Heidelberg-NLP/AMRParseEval
Code and data for the paper *Better Smatch = Better Parser? AMR evaluation is not so simple anymore*
Heidelberg-NLP/counting-probe
Counting dataset for Vision & Language models. Introduced in the paper "Seeing Past Words: Testing the Cross-Modal Capabilities of Pretrained V&L Models". https://arxiv.org/abs/2012.12352
Heidelberg-NLP/CC-SHAP-VLM
Official code implementation for the paper "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?"
Heidelberg-NLP/amr-argument-sim
Heidelberg-NLP/simple-xamr
A strong cross-lingual AMR (X-AMR) baseline built from an NMT and AMR parser pipeline.
Heidelberg-NLP/C2Gen
Heidelberg-NLP/HYPEVENTS
Code accompanying our *SEM 2021 paper "Generating Hypothetical Events for Abductive Inference". Do not hesitate to open an issue if you run into any trouble!
Heidelberg-NLP/MFscore
Heidelberg-NLP/CGGC
Heidelberg-NLP/Multi-Hop-Knowledge-Paths-Human-Needs
Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs
Heidelberg-NLP/renji_abarai
This repository contains the code and data for our submission to the "Argument Retrieval for Controversial Questions" task at Touché 2023.
Heidelberg-NLP/IKAT-DE
German version of IKAT: A corpus consisting of high-quality human annotations of missing and implied information in argumentative texts. The data is further annotated with semantic clause types and commonsense knowledge relations.
Heidelberg-NLP/IKAT-EN
English version of IKAT: A corpus consisting of high-quality human annotations of missing and implied information in argumentative texts. The data is further annotated with semantic clause types and commonsense knowledge relations.
Heidelberg-NLP/NLG-CHECKLIST
Heidelberg-NLP/weisfeiler-leman-bamboo