SemEval-Triplet-data

Aspect Sentiment Triplet Extraction (ASTE) datasets for the AAAI 2020 and EMNLP 2020 papers.

Aspect Sentiment Triplet Extraction Task

[ACL 2021] Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction

  • Please refer to this site for the source code.

[EMNLP 2020] Position-Aware Tagging for Aspect Sentiment Triplet Extraction

  • Please refer to this site for the source code.

[AAAI 2020] Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis

Task Description

Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting triplets of target entities, their associated sentiments, and the opinion spans explaining the reasons for those sentiments. The task was first proposed by Peng et al. (2020) in the AAAI 2020 paper Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis.

For example:

Given the sentence:

The screen is very large and crystal clear with amazing colors and resolution .

The objective of the Aspect Sentiment Triplet Extraction (ASTE) task is to predict the triplets:

[('screen', 'large', 'Positive'), ('screen', 'clear', 'Positive'), ('colors', 'amazing', 'Positive'), ('resolution', 'amazing', 'Positive')]

where a triplet consists of (target, opinion, sentiment).

Data Description

The files in the ASTE-Data-V2-EMNLP2020 folder contain the refined data. We remove triplets that have conflicting sentiments (as marked by SemEval) from the training, validation, and test sets, and we append the gold triplets at the end of each sentence to ease triplet evaluation. We also remove the tagged sentences from the previous ASTE-Data-V1 data released with the AAAI 2020 paper, as that tagging format results in incomplete aspect sentiment triplets.
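Because the gold triplets are stored alongside each sentence, evaluation reduces to comparing predicted and gold triplet sets. Below is a minimal sketch of the exact-match precision/recall/F1 commonly reported for ASTE, where a predicted triplet counts only if the target, opinion, and sentiment all match a gold triplet. This is our assumption of the standard metric, not the official evaluation script, and triplet_f1 is a hypothetical helper name:

```python
def triplet_f1(pred, gold):
    """Exact-match triplet scoring: a predicted (target, opinion, sentiment)
    triplet is correct only if all three elements match a gold triplet."""
    pred_set, gold_set = set(pred), set(gold)
    tp = len(pred_set & gold_set)  # true positives
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [('screen', 'large', 'Positive'), ('screen', 'clear', 'Positive')]
pred = [('screen', 'large', 'Positive'), ('screen', 'amazing', 'Positive')]
print(triplet_f1(pred, gold))  # (0.5, 0.5, 0.5)
```

The same comparison can equally be done on the index-based spans stored in the files rather than on the joined words.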

The data has the following format:

sentence####[(target position, opinion position, sentiment)]

If there are multiple triplets in the same sentence:

sentence####[(target position, opinion position, sentiment), ..., (target position, opinion position, sentiment)]

For example:

The screen is very large and crystal clear with amazing colors and resolution .####[([1], [4], 'POS'), ([1], [7], 'POS'), ([10], [9], 'POS'), ([12], [9], 'POS')]
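A line in this format can be loaded with a few lines of Python. The sketch below is ours, not part of the released code: the parse_line helper and the POS/NEU/NEG-to-label mapping are assumptions for illustration. Positions are 0-indexed word offsets into the whitespace-tokenized sentence, and multi-word spans appear to list each word index:

```python
import ast

# Assumed mapping from the sentiment tags in the files to readable labels.
SENTIMENT = {'POS': 'Positive', 'NEU': 'Neutral', 'NEG': 'Negative'}

def parse_line(line):
    """Split a 'sentence####[triplets]' line into tokens and word-level triplets."""
    sentence, raw_triplets = line.strip().split('####')
    tokens = sentence.split()  # sentences are pre-tokenized and space-separated
    triplets = []
    for target_idx, opinion_idx, polarity in ast.literal_eval(raw_triplets):
        target = ' '.join(tokens[i] for i in target_idx)
        opinion = ' '.join(tokens[i] for i in opinion_idx)
        triplets.append((target, opinion, SENTIMENT[polarity]))
    return tokens, triplets

line = ("The screen is very large and crystal clear with amazing colors and resolution ."
        "####[([1], [4], 'POS'), ([1], [7], 'POS'), ([10], [9], 'POS'), ([12], [9], 'POS')]")
print(parse_line(line)[1])
# [('screen', 'large', 'Positive'), ('screen', 'clear', 'Positive'),
#  ('colors', 'amazing', 'Positive'), ('resolution', 'amazing', 'Positive')]
```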

Acknowledgements

Our datasets originate from the SemEval Challenges (Pontiki et al., 2014; 2015; 2016). The opinion annotations are derived from Fan et al. (2019), who annotated the opinion terms.