Daeyoung Kim

Instance Segmentation under Occlusions via Location-aware Copy-Paste Data Augmentation

Oct 27, 2023
Son Nguyen, Mikel Lainsa, Hung Dao, Daeyoung Kim, Giang Nguyen

Occlusion is a long-standing problem in computer vision, particularly in instance segmentation. The ACM MMSports 2023 DeepSportRadar challenge introduced a dataset focused on segmenting human subjects in a basketball context, together with a specialized evaluation metric for occlusion scenarios. Given the modest size of the dataset and the highly deformable nature of the objects to be segmented, the challenge demands robust data augmentation techniques and carefully chosen deep learning architectures. Our work (ranked 1st in the competition) first proposes a novel data augmentation technique capable of generating more training samples with a wider distribution. We then adopt a new architecture: the Hybrid Task Cascade (HTC) framework with a CBNetV2 backbone and a MaskIoU head to improve segmentation performance. Furthermore, we employ a Stochastic Weight Averaging (SWA) training strategy to improve the model's generalization. As a result, we achieve a remarkable occlusion score (OM) of 0.533 on the challenge dataset, securing the top-1 position on the leaderboard. Source code is available at https://github.com/nguyendinhson-kaist/MMSports23-Seg-AutoID.
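
The augmentation itself is straightforward to sketch. Below is a minimal, hypothetical illustration of the location-aware copy-paste idea (not the authors' implementation): an instance is cut out of a source image with its binary mask and pasted into a target image at a position sampled near an existing instance; all function and variable names here are assumptions.

```python
import numpy as np

def copy_paste(src_img, src_mask, dst_img, dst_masks, rng=np.random):
    """Paste one instance from src_img into dst_img at a location near an
    existing instance -- a crude stand-in for 'location awareness'."""
    ys, xs = np.nonzero(src_mask)
    top, left = ys.min(), xs.min()
    patch = src_img[top:ys.max() + 1, left:xs.max() + 1]
    patch_mask = src_mask[top:ys.max() + 1, left:xs.max() + 1]
    h, w = patch_mask.shape

    # Sample a paste location near a randomly chosen existing instance,
    # so the new object lands where players plausibly appear on the court.
    anchor = dst_masks[rng.randint(len(dst_masks))]
    ay, ax = np.argwhere(anchor).mean(axis=0).astype(int)
    y0 = np.clip(ay - h // 2 + rng.randint(-20, 21), 0, dst_img.shape[0] - h)
    x0 = np.clip(ax - w // 2 + rng.randint(-20, 21), 0, dst_img.shape[1] - w)

    out = dst_img.copy()
    region = out[y0:y0 + h, x0:x0 + w]
    region[patch_mask > 0] = patch[patch_mask > 0]

    new_mask = np.zeros(dst_img.shape[:2], dtype=np.uint8)
    new_mask[y0:y0 + h, x0:x0 + w] = patch_mask
    return out, new_mask
```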

TeSS: Zero-Shot Classification via Textual Similarity Comparison with Prompting using Sentence Encoder

Dec 20, 2022
Jimin Hong, Jungsoo Park, Daeyoung Kim, Seongjae Choi, Bokyung Son, Jaewook Kang

We introduce TeSS (Text Similarity Comparison using Sentence Encoder), a framework for zero-shot classification in which the assigned label is determined by the embedding similarity between the input text and each candidate label prompt. We leverage representations from sentence encoders optimized during pre-training to place semantically similar samples closer together in embedding space. The label prompt embeddings serve as prototypes of their corresponding class clusters. Furthermore, to compensate for labels that are poorly descriptive in their original format, we retrieve semantically similar sentences from external corpora and use them alongside the original label prompt (TeSS-R). TeSS outperforms strong baselines on various closed-set and open-set classification datasets under the zero-shot setting, with further gains when combined with label prompt diversification through retrieval. These results are robust to verbalizer variations, an ancillary benefit of using a bi-encoder. Altogether, our method serves as a reliable baseline for zero-shot classification and a simple interface for assessing the quality of sentence encoders.
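
At inference time the core of TeSS reduces to nearest-prototype classification in embedding space. A minimal sketch, with the sentence-transformers library standing in for the paper's encoder (the model name and prompts are illustrative assumptions):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder

label_prompts = ["This text is about sports.",
                 "This text is about politics.",
                 "This text is about technology."]
label_emb = encoder.encode(label_prompts, normalize_embeddings=True)

def classify(text):
    # Assign the label whose prompt embedding is closest (cosine) to the input.
    x = encoder.encode([text], normalize_embeddings=True)[0]
    return int(np.argmax(label_emb @ x))

print(classify("The team clinched the championship in overtime."))  # -> 0
```

Because both sides of the comparison are plain sentence embeddings, changing the verbalizer only changes the prompt strings, which is one intuition for the robustness to verbalizer variations.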

* 9 pages, 3 figures 

D-TensoRF: Tensorial Radiance Fields for Dynamic Scenes

Dec 06, 2022
Hankyu Jang, Daeyoung Kim

Neural radiance fields (NeRF) have attracted attention as a promising approach to reconstructing 3D scenes. Following NeRF, subsequent studies have modeled dynamic scenes, which include motions or topological changes. However, most of them rely on an additional deformation network, which slows down training and rendering. The tensorial radiance field (TensoRF) recently demonstrated fast, high-quality reconstruction of static scenes with a compact model size. In this paper, we present D-TensoRF, a tensorial radiance field for dynamic scenes that enables novel view synthesis at a specific time. We consider the radiance field of a dynamic scene as a 5D tensor: a 4D grid whose axes correspond to X, Y, Z, and time, with 1D multi-channel features per element. Similar to TensoRF, we decompose the grid either into rank-one vector components (CP decomposition) or into low-rank matrix components (our newly proposed MM decomposition). We also use smoothing regularization to reflect the relationship between features at different times (temporal dependency). We conduct extensive evaluations to analyze our models, showing that D-TensoRF with either CP or MM decomposition trains quickly and has a significantly low memory footprint, with quantitatively and qualitatively competitive rendering results compared to state-of-the-art methods in 3D dynamic scene modeling.
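
The CP variant can be illustrated compactly: each of the X, Y, Z, and time axes gets a set of rank-one factor vectors, and the feature at a grid point is a sum over ranks of the per-axis factors. A rough numpy sketch (rank, resolution, and channel counts are made up; the MM decomposition is paper-specific and not shown):

```python
import numpy as np

R, C = 16, 8                      # rank and feature channels (illustrative)
NX, NY, NZ, NT = 64, 64, 64, 30   # grid resolution over X, Y, Z, time

rng = np.random.default_rng(0)
vx = rng.standard_normal((R, NX))
vy = rng.standard_normal((R, NY))
vz = rng.standard_normal((R, NZ))
vt = rng.standard_normal((R, NT))
vc = rng.standard_normal((R, C))   # per-rank multi-channel features

def feature(x, y, z, t):
    """CP-decomposed lookup: multiply the per-axis factors and sum over
    ranks, yielding a C-channel feature for grid point (x, y, z, t)."""
    w = vx[:, x] * vy[:, y] * vz[:, z] * vt[:, t]   # (R,)
    return w @ vc                                    # (C,)

print(feature(3, 10, 7, 5).shape)  # (C,)
```

Storage is R·(NX+NY+NZ+NT+C) numbers instead of a dense NX·NY·NZ·NT·C grid, which is where the small memory footprint comes from.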

* 21 pages, 11 figures 

RedPen: Region- and Reason-Annotated Dataset of Unnatural Speech

Oct 26, 2022
Kyumin Park, Keon Lee, Daeyoung Kim, Dongyeop Kang

Even with recent advances in speech synthesis models, the evaluation of such models is based purely on human judgment as a single naturalness score, such as the Mean Opinion Score (MOS). A score-based metric gives no further information about which parts of the speech are unnatural or why human judges believe they are unnatural. We present a novel speech dataset, RedPen, with human annotations of unnatural speech regions and their corresponding reasons. RedPen consists of 180 synthesized speech samples whose unnatural regions were annotated by crowd workers; these regions are then explained and categorized by error type, such as voice trembling and background noise. We find that our dataset explains unnatural speech regions better than model-driven unnaturalness prediction. Our analysis also shows that each model exhibits different error types. In summary, our dataset demonstrates that various error regions and types lie beneath the single naturalness score. We believe that our dataset will shed light on the evaluation and development of more interpretable speech models in the future. Our dataset will be publicly available upon acceptance.
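
As a rough picture of what one such annotation might look like, here is a hypothetical record structure; the field names are assumptions for illustration, not RedPen's actual schema:

```python
from dataclasses import dataclass

@dataclass
class UnnaturalRegion:
    """One annotated span of unnatural speech (hypothetical schema)."""
    clip_id: str       # which synthesized utterance
    start_sec: float   # region start within the clip
    end_sec: float     # region end within the clip
    error_type: str    # e.g. "voice trembling", "background noise"
    reason: str        # annotator's free-text explanation

example = UnnaturalRegion("clip_042", 1.3, 2.1,
                          "voice trembling",
                          "pitch wobbles audibly on the stressed word")
```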

* Submitted to ICASSP 2023 

Towards the Practical Utility of Federated Learning in the Medical Domain

Jul 14, 2022
Seongjun Yang, Hyeonji Hwang, Daeyoung Kim, Radhika Dua, Jong-Yeup Kim, Eunho Yang, Edward Choi

Federated learning (FL) is an active area of research. One of the most suitable areas for adopting FL is the medical domain, where patient privacy must be respected. Previous research, however, does not fully consider who will most likely use FL in the medical domain. It is not the hospitals that are eager to adopt FL, but service providers such as IT companies that want to develop machine learning models with real patient records. Moreover, service providers would prefer to maximize model performance at the lowest possible cost. In this work, we propose empirical benchmarks of FL methods considering both performance and monetary cost on three real-world datasets: electronic health records, skin cancer images, and electrocardiograms. We also propose Federated learning with Proximal regularization eXcept local Normalization (FedPxN), a simple combination of FedProx and FedBN that outperforms all other FL algorithms while consuming only slightly more power than the most power-efficient method.
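
The combination is mechanical enough to sketch: during each client's local update, a FedProx-style proximal pull toward the global weights is applied to every parameter except those of normalization layers, which (as in FedBN) stay local and are never averaged on the server. A hedged PyTorch sketch under those assumptions; mu, the learning rate, and the name-matching rule are illustrative:

```python
import torch

def is_norm_param(name):
    # Crude rule for spotting normalization-layer parameters by name.
    return "bn" in name or "norm" in name

def local_update(model, global_state, loader, loss_fn, mu=0.01, lr=0.01):
    """One client's local training pass. global_state is a dict of
    detached tensors holding the server's current weights."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        # Proximal term on everything *except* normalization parameters,
        # which FedBN keeps purely local.
        for name, p in model.named_parameters():
            if not is_norm_param(name):
                loss = loss + (mu / 2) * (p - global_state[name]).pow(2).sum()
        loss.backward()
        opt.step()
```

On the server side, only the non-normalization parameters would be averaged across clients.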

DailyTalk: Spoken Dialogue Dataset for Conversational Text-to-Speech

Jul 03, 2022
Keon Lee, Kyumin Park, Daeyoung Kim

The majority of current TTS datasets, which are collections of individual utterances, contain few conversational aspects in terms of both style and metadata. In this paper, we introduce DailyTalk, a high-quality conversational speech dataset designed for text-to-speech. We sampled, modified, and recorded 2,541 dialogues from the open-domain dialogue dataset DailyDialog, each long enough to represent the context of a dialogue. During data construction, we preserved the attribute distributions originally annotated in DailyDialog so that DailyTalk supports diverse dialogue. On top of our dataset, we extend prior work as our baseline, in which a non-autoregressive TTS model is conditioned on historical information in a dialogue. We gather metadata so that a TTS model can learn historical dialogue information, the key to generating context-aware speech. Our baseline experiments show that DailyTalk can be used to train neural text-to-speech models and that our baseline can represent contextual information. The DailyTalk dataset and baseline code are freely available for academic use under the CC-BY-SA 4.0 license.
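
One generic way to picture the history conditioning (a sketch of the general pattern, not the paper's exact architecture; module names and dimensions are assumptions): encode each past turn into a vector, summarize the history, and inject the summary into the TTS encoder output before decoding.

```python
import torch
import torch.nn as nn

class HistoryConditioner(nn.Module):
    """Generic sketch: pool embeddings of past dialogue turns and add
    the summary to the TTS encoder output (dimensions illustrative)."""
    def __init__(self, d_model=256):
        super().__init__()
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, encoder_out, history_emb):
        # encoder_out: (B, T_phoneme, d); history_emb: (B, n_turns, d)
        _, h = self.gru(history_emb)           # summarize the history
        ctx = self.proj(h[-1]).unsqueeze(1)    # (B, 1, d)
        return encoder_out + ctx               # broadcast over phonemes
```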

* 10 pages, 3 figures, 5 tables. Submitted to NeurIPS 2022 Datasets and Benchmarks 

BiasEnsemble: Revisiting the Importance of Amplifying Bias for Debiasing

May 29, 2022
Jungsoo Lee, Jeonghoon Park, Daeyoung Kim, Juyoung Lee, Edward Choi, Jaegul Choo

In image classification, "debiasing" aims to train a classifier to be less susceptible to dataset bias, the strong correlation between peripheral attributes of data samples and a target class. For example, even if the frog class in the dataset mainly consists of frog images with a swamp background (i.e., bias-aligned samples), a debiased classifier should be able to correctly classify a frog at a beach (i.e., a bias-conflicting sample). Recent debiasing approaches commonly use two components: a biased model $f_B$ and a debiased model $f_D$. $f_B$ is trained to focus on bias-aligned samples, while $f_D$ is trained mainly on bias-conflicting samples by concentrating on samples that $f_B$ fails to learn, making $f_D$ less susceptible to the dataset bias. While state-of-the-art debiasing techniques have aimed to better train $f_D$, we focus on training $f_B$, a component overlooked until now. Our empirical analysis reveals that removing bias-conflicting samples from the training set of $f_B$ is important for improving the debiasing performance of $f_D$, because bias-conflicting samples act as noise when amplifying the bias of $f_B$. To this end, we propose BiasEnsemble, a novel biased-sample selection method that removes bias-conflicting samples by leveraging additional biased models to construct a bias-amplified dataset for training $f_B$. Our simple yet effective approach can be applied directly to existing reweighting-based debiasing approaches, yielding consistent performance boosts and achieving state-of-the-art performance on both synthetic and real-world datasets.
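
The selection rule is simple to state in toy form: keep a sample for the bias-amplified set only if the auxiliary biased models all handle it easily (a bias-aligned signature), and drop samples that any of them struggles with as likely bias-conflicting. A sketch under that assumption (the paper's exact agreement criterion may differ), given per-model correctness already recorded:

```python
import numpy as np

def bias_amplified_indices(correct):
    """correct: (n_models, n_samples) boolean array, True where a biased
    model classified the sample correctly early in training. Keep only
    samples that *all* biased models get right (likely bias-aligned)."""
    return np.flatnonzero(correct.all(axis=0))

correct = np.array([[1, 1, 0, 1],
                    [1, 1, 1, 0],
                    [1, 0, 1, 1]], dtype=bool)
print(bias_amplified_indices(correct))  # [0] -> only sample 0 is kept
```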

Uncertainty-Aware Text-to-Program for Question Answering on Structured Electronic Health Records

Mar 14, 2022
Daeyoung Kim, Seongsu Bae, Seungho Kim, Edward Choi

Question answering on electronic health records (EHR-QA) has a significant impact on the healthcare domain and is being actively studied. Previous research on structured EHR-QA focuses on converting natural language queries into query languages such as SQL or SPARQL (NLQ2Query), so the problem scope is limited to the data types pre-defined by the specific query language. To expand the EHR-QA task beyond this limitation, toward handling multi-modal medical data and solving complex inference in the future, a more primitive and systematic language is needed. In this paper, we design a program-based model (NLQ2Program) for EHR-QA as a first step in that direction. We tackle MIMICSPARQL*, a graph-based EHR-QA dataset, via a program-based approach in a semi-supervised manner to overcome the absence of gold programs. Without gold programs, our proposed model shows performance comparable to the previous state-of-the-art model, an NLQ2Query model (0.9% gain). In addition, toward a reliable EHR-QA model, we apply an uncertainty decomposition method to measure the ambiguity of the input question. We empirically confirm that data uncertainty is the most indicative of ambiguity in the input question.
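
The decomposition can be illustrated with the standard entropy-based split (whether the paper uses exactly this form is an assumption on my part): draw several stochastic predictions, take the entropy of the mean prediction as total uncertainty, the mean per-draw entropy as data (aleatoric) uncertainty, and their gap as model (epistemic) uncertainty.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=axis)

def decompose(probs):
    """probs: (n_draws, n_classes) softmax outputs from stochastic passes
    (e.g. MC dropout). Returns (total, data, model) uncertainty."""
    total = entropy(probs.mean(axis=0))     # entropy of the mean prediction
    data = entropy(probs, axis=-1).mean()   # mean entropy of each draw
    return total, data, total - data        # model = total - data

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.7, 0.2],
                  [0.2, 0.1, 0.7]])
print(decompose(probs))  # high model uncertainty: the draws disagree
```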

* Accepted into CHIL 2022 

Question Answering for Complex Electronic Health Records Database using Unified Encoder-Decoder Architecture

Nov 14, 2021
Seongsu Bae, Daeyoung Kim, Jiho Kim, Edward Choi

An intelligent machine that can answer human questions based on electronic health records (EHR-QA) has great practical value, such as supporting clinical decisions, managing hospital administration, and powering medical chatbots. Previous table-based QA studies, which focus on translating natural language questions into table queries (NLQ2SQL), however, struggle with the unique nature of EHR data: complex and specialized medical terminology increases decoding difficulty. In this paper, we design UniQA, a unified encoder-decoder architecture for EHR-QA in which natural language questions are converted to queries such as SQL or SPARQL. We also propose input masking (IM), a simple and effective method for coping with complex medical terms and various typos while better learning SQL/SPARQL syntax. Combining the unified architecture with this effective auxiliary training objective, UniQA demonstrates a significant performance improvement over the previous state-of-the-art model on MIMICSQL* (14.2% gain), the most complex NLQ2SQL dataset in the EHR domain, and on its typo-ridden versions (approximately 28.8% gain). In addition, we confirm consistent results on the graph-based EHR-QA dataset, MIMICSPARQL*.
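
Input masking itself is easy to sketch: randomly replace a fraction of input tokens with a mask symbol during training, so the model learns to produce the correct query even when surface forms (rare medical terms, typos) are unreliable. A minimal illustration; the mask rate and mask token are assumptions, not the paper's settings:

```python
import random

def input_mask(tokens, mask_token="<mask>", p=0.15, rng=random):
    """Randomly hide a fraction of input tokens during training so the
    model cannot over-rely on exact (possibly typo-ridden) surface forms."""
    return [mask_token if rng.random() < p else tok for tok in tokens]

q = "what is the average age of patients with hypertension".split()
print(input_mask(q, rng=random.Random(0)))
```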

* Proc. of Machine Learning for Health (ML4H) 2021 (Oral Spotlight) 