Relation Extraction (RE) refers to extracting relation triples from the input text. Existing neural-network-based systems for RE rely heavily on manually labeled training data, but many domains still lack sufficient labeled data. Inspired by distance-based few-shot named entity recognition methods, we formulate the few-shot RE task based on sequence-tagging joint extraction approaches, and propose a few-shot RE framework for the task. Besides, we apply two concrete sequence tagging models to our framework (called Few-shot TPLinker and Few-shot BiTT), and achieve solid results on two few-shot RE tasks constructed from a public dataset.
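The distance-based few-shot idea mentioned above can be sketched as nearest-prototype token tagging: average the support-set embeddings of each tag into a prototype, then label each query token with the tag of its closest prototype. This is an illustrative sketch with toy 2-d "embeddings", not the authors' implementation; all names and vectors are invented.

```python
# Illustrative sketch (not the paper's code): nearest-prototype token tagging,
# the core idea behind distance-based few-shot sequence labeling.
from collections import defaultdict
import math

def prototypes(support):
    """Average the support embeddings for each tag into one prototype per tag."""
    sums, counts = {}, defaultdict(int)
    for vec, tag in support:
        if tag not in sums:
            sums[tag] = list(vec)
        else:
            sums[tag] = [a + b for a, b in zip(sums[tag], vec)]
        counts[tag] += 1
    return {t: [x / counts[t] for x in s] for t, s in sums.items()}

def tag_tokens(query_vecs, protos):
    """Assign each query token the tag of its nearest prototype (Euclidean)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return [min(protos, key=lambda t: dist(vec, protos[t])) for vec in query_vecs]

# Toy 2-d "embeddings": B-PER tokens cluster near (1, 0), O tokens near (0, 1).
support = [([1.0, 0.1], "B-PER"), ([0.9, 0.0], "B-PER"),
           ([0.0, 1.0], "O"), ([0.1, 0.9], "O")]
protos = prototypes(support)
print(tag_tokens([[0.95, 0.05], [0.05, 0.95]], protos))  # ['B-PER', 'O']
```

In an actual few-shot system the toy vectors would be contextual token embeddings from an encoder, and the distance metric and prototype construction are design choices that vary between methods.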
Joint extraction refers to extracting triples, composed of entities and relations, simultaneously from text with a single model. However, most existing methods fail to extract all triples accurately and efficiently from sentences with the overlapping issue, i.e., where the same entity is included in multiple triples. In this paper, we propose a novel scheme called Bidirectional Tree Tagging (BiTT) to label overlapping triples in text. In BiTT, the triples with the same relation category in a sentence are represented as two binary trees, each of which is converted into a word-level tag sequence to label each word. Based on the BiTT scheme, we develop an end-to-end extraction framework to predict the BiTT tags and further extract triples efficiently. We adopt Bi-LSTM and BERT as the encoder in our framework, respectively, and obtain promising results on public English and Chinese datasets.
Joint extraction refers to extracting triples, composed of entities and relations, simultaneously from text with a single model, but existing methods rarely work well on sentences with the overlapping issue, i.e., where the same entity is included in multiple triples. In this paper, we propose a novel Bidirectional Tree Tagging (BiTT) scheme to label overlapping triples in text. Within a sentence, the triples with the same relation category are represented as two binary trees, each of which is converted into a word-level tag sequence to label each word. Based on our BiTT scheme, we develop an end-to-end classification framework to predict the BiTT tags. We adopt Bi-LSTM layers and a pre-trained BERT encoder, respectively, as its encoder module, and obtain promising results on a public English dataset as well as a Chinese one. The source code is publicly available at https://anonymous/for/review.
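The overlapping issue described above can be made concrete with a small example: when one entity participates in several triples, a flat scheme that assigns a single tag per word cannot encode all of them at once. The sentence and triples below are invented for illustration and are not from the paper's datasets.

```python
# Illustrative example (not the paper's code) of the overlapping issue:
# one entity appears in multiple gold triples, so any scheme with exactly
# one flat tag per word cannot represent all triples simultaneously.
from collections import Counter

sentence = "Paris is the capital of France and hosts the Louvre".split()

# Gold triples as (subject, relation, object); "Paris" appears in two of them.
triples = [
    ("Paris", "capital_of", "France"),
    ("Louvre", "located_in", "Paris"),
]

# Count how many triples each entity participates in.
participation = Counter()
for subj, _, obj in triples:
    participation[subj] += 1
    participation[obj] += 1

# Entities in more than one triple are exactly the overlapping ones.
overlapping = [e for e, n in participation.items() if n > 1]
print(overlapping)  # ['Paris']
```

Tagging schemes such as BiTT sidestep this by emitting richer per-word structures (here, tag sequences derived from per-relation binary trees) rather than a single flat label per word.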