Relationship extraction is the task of extracting semantic relationships from text. Extracted relationships usually occur between two or more entities of a certain type (e.g. Person, Organisation, Location) and fall into a number of semantic categories (e.g. married to, employed by, lives in).
Capturing discriminative attributes (SemEval 2018 Task 10) is a binary classification task where participants were asked to identify whether an attribute could help discriminate between two concepts. Unlike other word similarity prediction tasks, this task focuses on the semantic differences between words.
e.g. red (attribute) can be used to discriminate apple (concept1) from banana (concept2) -> label 1
More examples:
concept1 | concept2 | attribute | label |
---|---|---|---|
bookcase | fridge | wood | 1 |
bucket | mug | round | 0 |
angle | curve | sharp | 1 |
pelican | turtle | water | 0 |
wire | coil | metal | 0 |
Task paper: https://www.aclweb.org/anthology/S18-1117
Task Codalab: https://competitions.codalab.org/competitions/17326
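A simple embedding-similarity baseline illustrates the task (a sketch, not any participant's actual system): predict 1 when the attribute is noticeably closer to concept1 than to concept2. The `vec` lookup and the `threshold` value below are illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def is_discriminative(vec, concept1, concept2, attribute, threshold=0.05):
    """Predict 1 if the attribute is noticeably more similar to
    concept1 than to concept2, else 0.

    `vec` maps a word to its embedding; `threshold` is a
    hyperparameter that would be tuned on the validation split
    (the value here is purely illustrative).
    """
    margin = (cosine(vec[attribute], vec[concept1])
              - cosine(vec[attribute], vec[concept2]))
    return int(margin > threshold)

# e.g. is_discriminative(vec, "apple", "banana", "red") -> 1 when
# "red" is closer to "apple" than to "banana" by more than the threshold
```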
The Few-Shot Relation Classification Dataset (FewRel) is a different setting from the previous datasets. This dataset consists of 70K sentences expressing 100 relations, derived from Wikipedia and annotated by crowdworkers. The few-shot learning task follows the N-way K-shot meta-learning setting: a model receives K labelled supporting instances for each of N candidate relations and must classify query instances into one of those N relations.
The public leaderboard is available on the FewRel website.
FewRel 2 extends FewRel in two directions: (1) adaptability to a new domain with only a handful of instances and (2) the ability to detect none-of-the-above relations. The paper is available in the ACL Anthology.
The public leaderboard is available on the FewRel 2 website.
SemEval-2010 introduced 'Task 8 - Multi-Way Classification of Semantic Relations Between Pairs of Nominals'. The task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation. The dataset contains nine general semantic relations together with a tenth 'OTHER' relation.
Example:
There were apples, pears and oranges in the bowl.
(content-container, pears, bowl)
The main evaluation metric used is macro-averaged F1, averaged across the nine proper relations (i.e. excluding the OTHER relation), taking the directionality of the relation into account.
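The following is a simplified sketch of this metric, not the official Perl scorer: a prediction counts as correct only when both relation and direction match, while counts are pooled per undirected relation type before macro-averaging.

```python
from collections import Counter

def macro_f1(gold, pred):
    """Macro-averaged F1 over the nine proper relations, excluding
    'Other'. Labels are directed strings such as 'Cause-Effect(e1,e2)'.
    A simplified sketch of the official scorer, not a drop-in replacement.
    """
    def rel_type(label):
        return label.split('(')[0]          # strip the direction

    correct, gold_count, pred_count = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        gold_count[rel_type(g)] += 1
        pred_count[rel_type(p)] += 1
        if g == p:                          # relation and direction both match
            correct[rel_type(g)] += 1

    f1s = []
    for rel in gold_count:
        if rel == 'Other':
            continue
        p = correct[rel] / pred_count[rel] if pred_count[rel] else 0.0
        r = correct[rel] / gold_count[rel]
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s)
```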
Several papers have used additional data (e.g. pre-trained word embeddings, WordNet) to improve performance. The figures reported here are the highest achieved by the model using any external resources.
*: It uses external lexical resources, such as WordNet, part-of-speech tags, dependency tags, and named entity tags.
Model | F1 | Paper / Source | Code |
---|---|---|---|
BRCNN (Cai et al., 2016) | 86.3 | Bidirectional Recurrent Convolutional Neural Network for Relation Classification | |
DRNNs (Xu et al., 2016) | 86.1 | Improved Relation Classification by Deep Recurrent Neural Networks with Data Augmentation | |
depLCNN + NS (Xu et al., 2015a) | 85.6 | Semantic Relation Classification via Convolutional Neural Networks with Simple Negative Sampling | |
SDP-LSTM (Xu et al., 2015b) | 83.7 | Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Paths | Sshanu's Reimplementation |
DepNN (Liu et al., 2015) | 83.6 | A Dependency-Based Neural Network for Relation Classification | |
FCM (Yu et al., 2014) | 83.0 | Factor-based Compositional Embedding Models | |
MVRNN (Socher et al., 2012) | 82.4 | Semantic Compositionality through Recursive Matrix-Vector Spaces | pratapbhanu's Reimplementation |
The standard corpus for distantly supervised relationship extraction is the New York Times (NYT) corpus, published in Riedel et al., 2010.
This contains text from the New York Times Annotated Corpus with named entities extracted from the text using the Stanford NER system and automatically linked to entities in the Freebase knowledge base. Pairs of named entities are labelled with relationship types by aligning them against facts in the Freebase knowledge base. (The process of using a separate database to provide labels is known as 'distant supervision'.) A minimal sketch of this labelling step follows the example below.
Example:
Elevation Partners, the $1.9 billion private equity group that was founded by Roger McNamee
(founded_by, Elevation_Partners, Roger_McNamee)
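The sketch below illustrates the distant supervision labelling step, assuming entities have already been recognised and linked to Freebase identifiers; the names and data structures are illustrative, not from the original pipeline.

```python
def distant_label(sentences, kb_facts):
    """Label entity pairs in sentences by aligning them against
    knowledge-base facts (a minimal sketch of distant supervision;
    the real pipeline first runs NER and entity linking).

    `sentences` is a list of (sentence_text, entity1, entity2) triples
    with entities already linked to KB identifiers; `kb_facts` maps
    (entity1, entity2) pairs to relation names.
    """
    labelled = []
    for text, e1, e2 in sentences:
        # Distant supervision assumption: any sentence mentioning a
        # related pair is taken to express that relation (noisy!).
        relation = kb_facts.get((e1, e2), 'NA')
        labelled.append((text, e1, e2, relation))
    return labelled

# e.g. kb_facts = {('Elevation_Partners', 'Roger_McNamee'): 'founded_by'}
```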
Different papers have reported various metrics since the release of the dataset, making it difficult to compare systems directly. The main metrics used are either precision at N results or precision-recall curves. The range of recall reported has increased over the years as systems improve, with earlier systems having very low precision at 30% recall.
(+) Obtained from results in the paper "Neural Relation Extraction with Selective Attention over Instances"
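Both metrics can be read off a prediction list ranked by model confidence. The sketch below (names and input format are our assumptions) computes precision-recall points, from which precision at N is the precision after the top N predictions:

```python
def precision_recall_points(scored_predictions, n_gold):
    """Compute precision-recall points from relation predictions ranked
    by confidence (a sketch, assuming `scored_predictions` is a list of
    (score, is_correct) pairs and `n_gold` is the total number of true
    relation facts).
    """
    ranked = sorted(scored_predictions, key=lambda x: -x[0])
    pr_points, correct = [], 0
    for i, (_, is_correct) in enumerate(ranked, start=1):
        correct += is_correct
        pr_points.append((correct / n_gold, correct / i))  # (recall, precision)
    return pr_points  # precision at N is pr_points[N - 1][1]
```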
TACRED is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended and org:members) or are labeled as no_relation if no defined relation holds. These examples are created by combining available human annotations from the TAC KBP challenges and crowdsourcing.
Example:
Billy Mays, the bearded, boisterous pitchman who, as the undisputed king of TV yell and sell, became an unlikely pop culture icon, died at his home in Tampa, Fla., on Sunday.
(per:city_of_death, Billy Mays, Tampa)
The main evaluation metric used is micro-averaged F1 over instances with proper relationships (i.e. excluding the no_relation type).
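A minimal sketch of this metric, following the convention used by the TACRED scorer (the implementation details here are ours): no_relation predictions count neither as predicted positives nor as correct matches.

```python
def tacred_micro_f1(gold, pred):
    """Micro-averaged F1 over instances with a proper relation,
    excluding the no_relation type (a simplified sketch, not the
    official scorer)."""
    correct = sum(1 for g, p in zip(gold, pred)
                  if g == p and g != 'no_relation')
    n_pred = sum(1 for p in pred if p != 'no_relation')
    n_gold = sum(1 for g in gold if g != 'no_relation')
    precision = correct / n_pred if n_pred else 0.0
    recall = correct / n_gold if n_gold else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```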
Model | F1 | Paper / Source | Code |
---|---|---|---|
Matching-the-Blanks (Baldini Soares et al., 2019) | 71.5 | Matching the Blanks: Distributional Similarity for Relation Learning | |
C-GCN + PA-LSTM (Zhang et al., 2018) | 68.2 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | Official |
PA-LSTM (Zhang et al., 2017) | 65.1 | Position-aware Attention and Supervised Data Improve Slot Filling | Official |