KDSTM: Neural Semi-supervised Topic Modeling with Knowledge Distillation

Jul 04, 2023
Weijie Xu, Xiaoyu Jiang, Jay Desai, Bin Han, Fuqin Yan, Francis Iannacci


In text classification tasks, fine-tuning pretrained language models like BERT and GPT-3 yields competitive accuracy; however, both methods require pretraining on large text datasets. In contrast, general topic modeling methods have the advantage of analyzing documents to extract meaningful patterns of words without the need for pretraining. To leverage topic modeling's unsupervised insight extraction for text classification tasks, we develop Knowledge Distillation Semi-supervised Topic Modeling (KDSTM). KDSTM requires no pretrained embeddings and only a few labeled documents, and it is efficient to train, making it ideal in resource-constrained settings. Across a variety of datasets, our method outperforms existing supervised topic modeling methods in classification accuracy, robustness, and efficiency, and achieves performance comparable to state-of-the-art weakly supervised text classification methods.
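The abstract does not spell out the paper's exact objective, but the name implies a standard knowledge-distillation setup: a student model is trained to match a teacher's temperature-softened output distribution via a KL-divergence term. The sketch below is a minimal, hypothetical illustration of that loss (function names and the choice of temperature are assumptions, not the paper's implementation), here imagined over per-topic logits:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Hypothetical sketch of the generic distillation term; the paper's
    actual objective presumably combines a term like this with a
    supervised loss on the few labeled documents.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    # Scale by T^2, the usual convention so gradients are comparable
    # across temperatures.
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )
```

When the student's logits match the teacher's, the loss is zero; the further the student's soft distribution drifts from the teacher's, the larger the penalty, which is what lets a small unlabeled-data student inherit structure from a stronger teacher.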

* ICLR 2022 Workshop PML4DC  
* 12 pages, 4 figures, ICLR 2022 Workshop 