Ching-Hsun Tseng

A Novel Method of Fuzzy Topic Modeling based on Transformer Processing

Sep 18, 2023
Ching-Hsun Tseng, Shin-Jye Lee, Po-Wei Cheng, Chien Lee, Chih-Chieh Hung

Topic modeling is a convenient way to monitor market trends. Conventionally, Latent Dirichlet Allocation (LDA) is the default model for this task: because it derives keywords from token conditional probabilities, it can surface the most likely or essential topics. However, the results are often unintuitive, since the induced topics rarely align fully with human knowledge. LDA returns the most probable keywords for each topic, which raises a further question of whether those keyword-topic connections are reliable when they rest solely on statistical likelihood, and the number of topics must still be chosen manually in advance. Motivated by the growing use of fuzzy membership for clustering and of transformers for embedding text, this work presents a fuzzy topic modeling method based on soft clustering of document embeddings produced by a state-of-the-art transformer-based model. In a practical press-release monitoring application, the fuzzy topic model gives more natural results than the traditional LDA output.

* Asian Journal of Information and Communications, Vol. 12, No. 1, pp. 125-140
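
The abstract describes soft-clustering transformer document embeddings into overlapping topics. Below is a minimal sketch of that idea, using sentence-transformers for the embeddings and scikit-fuzzy's c-means for the soft memberships; the model name, number of topics, and fuzzifier are illustrative assumptions, not values from the paper.

```python
# Sketch: fuzzy topic modeling = transformer document embeddings + fuzzy c-means.
# Assumes `pip install sentence-transformers scikit-fuzzy numpy`.
import numpy as np
import skfuzzy as fuzz
from sentence_transformers import SentenceTransformer

docs = [
    "Central bank raises interest rates to curb inflation.",
    "New smartphone launch boosts chipmaker shares.",
    "Retail sales slump as consumer confidence falls.",
]

# 1) Embed each press release with a pretrained transformer encoder.
model = SentenceTransformer("all-MiniLM-L6-v2")      # illustrative model choice
emb = model.encode(docs, normalize_embeddings=True)  # shape: (n_docs, dim)

# 2) Fuzzy c-means: skfuzzy expects data shaped (features, samples).
n_topics, fuzzifier = 2, 1.5                         # illustrative hyperparameters
centers, u, *_ = fuzz.cluster.cmeans(
    emb.T, c=n_topics, m=fuzzifier, error=1e-4, maxiter=300, seed=0
)

# 3) u[k, i] is the soft membership of document i in topic k,
#    so one document can partially belong to several topics.
for i, doc in enumerate(docs):
    print(doc[:40], "->", np.round(u[:, i], 2))
```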

Real-time Automatic M-mode Echocardiography Measurement with Panel Attention from Local-to-Global Pixels

Aug 15, 2023
Ching-Hsun Tseng, Shao-Ju Chien, Po-Shen Wang, Shin-Jye Lee, Wei-Huan Hu, Bin Pu, Xiao-jun Zeng

Motion mode (M-mode) recording is an essential part of echocardiography for measuring cardiac dimensions and function. However, current diagnosis cannot be automated because of three fundamental obstacles. First, no open dataset exists for building such automation, ensuring consistent results, and bridging M-mode echocardiography with real-time instance segmentation (RIS). Second, the examination involves time-consuming manual labelling of M-mode echocardiograms. Third, because objects in echocardiograms occupy a large portion of the pixels, the limited receptive field of existing backbones (e.g., ResNet) built from stacked convolution layers is insufficient to cover the period of a valve movement, while existing non-local attention (NL) either sacrifices real-time operation through high computation overhead or loses information in simplified versions of the non-local block. We therefore propose RAMEM, a real-time automatic M-mode echocardiography measurement scheme, with three contributions: 1) MEIS, a dataset of M-mode echocardiograms for instance segmentation, to enable consistent results and support the development of an automatic scheme; 2) panel attention, an efficient local-to-global attention based on pixel-unshuffling, embedded with updated UPANets V2 in a RIS scheme to detect large objects with a global receptive field; 3) AMEM, an efficient algorithm for automatic M-mode echocardiography measurement that enables fast and accurate automatic labelling during diagnosis. Experimental results show that RAMEM surpasses existing RIS backbones (with non-local attention) on PASCAL 2012 SBD and exceeds human performance on MEIS in real time. The code and the MEIS dataset are available at https://github.com/hanktseng131415go/RAME.
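
The pixel-unshuffling idea behind panel attention can be sketched as follows: fold each local r x r patch into the channel dimension so that attention over the reduced spatial grid stays cheap while still spanning the whole image. This is an illustrative PyTorch sketch of that mechanism, not the paper's exact module; the head count and downscale factor are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PanelAttentionSketch(nn.Module):
    def __init__(self, channels: int, downscale: int = 4):
        super().__init__()
        self.r = downscale
        folded = channels * downscale * downscale        # channels after unshuffle
        self.attn = nn.MultiheadAttention(folded, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(folded)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        r = self.r
        p = F.pixel_unshuffle(x, r)                      # (B, C*r*r, H/r, W/r)
        b, c, h, w = p.shape
        tokens = p.flatten(2).transpose(1, 2)            # (B, H/r * W/r, C*r*r)
        out, _ = self.attn(tokens, tokens, tokens)       # global attention on panels
        tokens = self.norm(tokens + out)                 # residual + norm
        p = tokens.transpose(1, 2).reshape(b, c, h, w)
        return F.pixel_shuffle(p, r)                     # back to (B, C, H, W)

# Usage: spatial dimensions must be divisible by the downscale factor.
y = PanelAttentionSketch(channels=16, downscale=4)(torch.randn(1, 16, 64, 64))
print(y.shape)  # torch.Size([1, 16, 64, 64])
```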


EDU-level Extractive Summarization with Varying Summary Lengths

Oct 08, 2022
Yuping Wu, Ching-Hsun Tseng, Jiayu Shang, Shengzhong Mao, Goran Nenadic, Xiao-Jun Zeng

Extractive models usually formulate text summarization as selecting the top-k most important sentences from a document as the summary. Little work has explored extracting the finer-grained Elementary Discourse Unit (EDU), and there is scant analysis or justification for the choice of extractive unit. To fill this gap, this paper first conducts an oracle analysis comparing the upper-bound performance of EDU-based and sentence-based models. The analysis provides evidence, from both theoretical and experimental perspectives, that EDUs yield more concise and precise summaries than sentences without losing salient information. Building on this merit of EDUs, the paper then proposes an EDU-level extractive model with Varying summary Lengths (EDU-VL) and develops the corresponding learning algorithm. EDU-VL learns to encode the EDUs in a document and predict their probabilities, encodes EDU-level candidate summaries of different lengths based on various $k$ values, and selects the best candidate summary, all trained end-to-end. Finally, the proposed approach is evaluated on single- and multi-document benchmark datasets and shows improved performance compared with state-of-the-art models.
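
The varying-length selection step described above can be sketched as: score the EDUs, build a top-k candidate summary for several values of k, then keep the highest-scoring candidate. The sketch below illustrates only that control flow; the scoring functions are stand-ins, not the paper's trained model.

```python
from typing import Callable, List, Tuple

def select_summary(
    edus: List[str],
    edu_scorer: Callable[[List[str]], List[float]],    # per-EDU salience scores
    candidate_scorer: Callable[[List[str]], float],    # whole-candidate score
    k_values: Tuple[int, ...] = (2, 3, 4, 5),
) -> List[str]:
    scores = edu_scorer(edus)
    ranked = sorted(range(len(edus)), key=lambda i: scores[i], reverse=True)
    best, best_score = [], float("-inf")
    for k in k_values:
        picked = sorted(ranked[:k])                    # keep document order
        candidate = [edus[i] for i in picked]
        s = candidate_scorer(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

# Toy usage with trivial length-based stand-in scorers, for illustration only.
edus = ["The company reported record profit,", "beating analyst forecasts,",
        "while revenue grew 12%.", "Shares rose after the announcement."]
summary = select_summary(
    edus,
    edu_scorer=lambda xs: [len(x) for x in xs],
    candidate_scorer=lambda xs: -abs(sum(len(x) for x in xs) - 80),
)
print(summary)
```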


UPANets: Learning from the Universal Pixel Attention Networks

Mar 22, 2021
Ching-Hsun Tseng, Shin-Jye Lee, Jia-Nan Feng, Shengzhong Mao, Yu-Ping Wu, Jia-Yu Shang, Mou-Chung Tseng, Xiao-Jun Zeng

In image classification, networks built on skip and dense connections have dominated most leaderboards. Following the success of multi-head attention in natural language processing, the field has largely shifted toward either Transformer-like models or hybrid CNNs with attention. The former, however, require tremendous resources to train, while the latter strike a better balance in this direction. In this work, to let CNNs handle both global and local information, we propose UPANets, which combines channel-wise attention with a hybrid skip-dense connection structure. This extreme-connection structure also makes UPANets robust by producing a smoother loss landscape. In experiments, UPANets surpassed most well-known and widely used state-of-the-art models, reaching 96.47% accuracy on CIFAR-10, 80.29% on CIFAR-100, and 67.67% on Tiny ImageNet. Most importantly, these results were achieved with high parameter efficiency and training on a single consumer-grade GPU. The implementation of UPANets is available at https://github.com/hanktseng131415go/UPANets.
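
One way to picture the combination of channel-wise attention and skip-style fusion mentioned above is to mix information across channels at every pixel with a learned linear layer and add it back to the convolutional path. The PyTorch sketch below shows that general idea only; it is not the exact block from the UPANets repository, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttentionSketch(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Channel-wise attention: a linear map over the channel dimension,
        # shared across all spatial positions.
        self.channel_mix = nn.Linear(in_ch, out_ch, bias=False)
        self.norm = nn.BatchNorm2d(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C_in, H, W)
        conv_out = self.conv(x)
        mixed = self.channel_mix(x.permute(0, 2, 3, 1))    # (B, H, W, C_out)
        mixed = self.norm(mixed.permute(0, 3, 1, 2))       # back to (B, C_out, H, W)
        return conv_out + mixed                            # skip-style fusion

y = ChannelAttentionSketch(16, 32)(torch.randn(2, 16, 32, 32))
print(y.shape)  # torch.Size([2, 32, 32, 32])
```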
