Label informed hierarchical transformers for sequential sentence classification in scientific abstracts
Takola Y.S.S.S., Aluru S.S., Vallabhajosyula A., Sanyal D.K., Das P.P.
Published by John Wiley and Sons Inc
Year: 2023
Volume: 40
Issue: 6
Abstract
Segmenting scientific abstracts into discourse categories like background, objective, method, result, and conclusion is useful in many downstream tasks like search, recommendation, and summarization. This task of classifying each sentence in the abstract into one of a given set of discourse categories is called sequential sentence classification. Existing machine learning-based approaches to this problem consider the content of only the abstract to obtain the neural representation of each sentence, which is then labelled with a discourse category. However, this ignores the semantic information offered by the discourse labels themselves. In this paper, we propose Label Informed Hierarchical Transformers (LIHT), a method for sequential sentence classification that explicitly and hierarchically exploits the semantic information in the labels to learn label-aware neural sentence representations. The hierarchical model helps to capture not only the fine-grained interactions between the discourse labels and the words in the abstract at the sentence level but also the potential dependencies that may exist in the label sequence. Thus, LIHT generates label-aware contextual sentence representations that are then labelled with a conditional random field. We evaluate LIHT on three publicly available datasets, namely, PUBMED-RCT, NICTA-PIBOSO, and CSAbstract. LIHT achieves an incremental gain in F1-score over the respective state-of-the-art approach in all three cases. Though the gains are modest, LIHT establishes a new performance benchmark for this task and is a novel technique of independent interest. We also perform an ablation study to identify the contribution of each component of LIHT to the observed performance, and a case study to visualize the roles of the different components of our model. © 2023 John Wiley & Sons Ltd.
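The pipeline the abstract describes (label-aware sentence encoding at the word level, an abstract-level encoder over the sentence sequence, then sequence labelling) can be sketched in code. The following is a minimal illustration in PyTorch under stated assumptions: trainable label embeddings, cross-attention from a sentence's words to the label embeddings, and a transformer over the resulting sentence representations. All module names, hyperparameters, and the framework choice are illustrative assumptions, not the authors' implementation, which per the abstract also decodes the emission scores with a conditional random field.

```python
# A minimal sketch of a label-informed hierarchical model for sequential
# sentence classification. Sizes and layer choices are assumptions.
import torch
import torch.nn as nn

class LabelInformedHierarchicalModel(nn.Module):
    def __init__(self, vocab_size, num_labels, d_model=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Trainable embeddings for the discourse labels themselves
        # (e.g., background, objective, method, result, conclusion).
        self.label_emb = nn.Embedding(num_labels, d_model)
        # Sentence-level encoder: words attend within a sentence.
        sent_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(sent_layer, num_layers=2)
        # Cross-attention from words to label embeddings, yielding
        # label-aware word (and hence sentence) representations.
        self.label_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Abstract-level encoder: sentences attend to each other, capturing
        # dependencies across the label sequence.
        abs_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.abs_encoder = nn.TransformerEncoder(abs_layer, num_layers=2)
        # Per-sentence emission scores; the paper decodes these with a CRF,
        # which is omitted here for brevity.
        self.emit = nn.Linear(d_model, num_labels)

    def forward(self, token_ids):
        # token_ids: (num_sentences, max_words) for one abstract.
        words = self.sent_encoder(self.word_emb(token_ids))
        labels = self.label_emb.weight.unsqueeze(0).expand(words.size(0), -1, -1)
        # Fine-grained label-word interaction at the sentence level.
        label_aware, _ = self.label_attn(words, labels, labels)
        sent_repr = label_aware.mean(dim=1)               # pool words -> sentence
        context = self.abs_encoder(sent_repr.unsqueeze(0)).squeeze(0)
        return self.emit(context)                          # (num_sentences, num_labels)

model = LabelInformedHierarchicalModel(vocab_size=30000, num_labels=5)
scores = model(torch.randint(0, 30000, (7, 20)))  # 7 sentences, 20 tokens each
print(scores.shape)                                # torch.Size([7, 5])
```

In this sketch the label embeddings serve as keys and values for the cross-attention, so each word representation is re-expressed in terms of its affinity to the candidate discourse labels before pooling, which is one plausible reading of "label-aware" sentence representations.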
About the journal
Journal: Expert Systems
Publisher: John Wiley and Sons Inc
ISSN: 0266-4720
Open Access: No