
Jinwook Seo


Computational Approaches for App-to-App Retrieval and Design Consistency Check

Sep 19, 2023
Seokhyeon Park, Wonjae Kim, Young-Ho Kim, Jinwook Seo

Extracting semantic representations from mobile user interfaces (UIs) and using these representations in designers' decision-making processes has shown potential as an effective computational design support tool. Current approaches rely on machine learning models trained on small-sized mobile UI datasets to extract semantic vectors and use screenshot-to-screenshot comparison to retrieve similar-looking UIs given query screenshots. However, the usability of these methods is limited: they are often not open-sourced, require practitioners to follow complex training pipelines, and cannot perform screenshot set-to-set (i.e., app-to-app) retrieval. To this end, we (1) employ visual models trained on large web-scale image collections and test whether they can extract a UI representation in a zero-shot way and outperform existing specialized models, and (2) use mathematically founded methods to enable app-to-app retrieval and design consistency analysis. Our experiments show that our methods not only improve upon previous retrieval models but also enable multiple new applications.
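The abstract does not spell out the set-to-set comparison; as a minimal illustrative stand-in (not the paper's actual method), app-to-app similarity could be scored by matching each screenshot embedding to its best counterpart in the other app, Chamfer-style. The function name and toy embeddings below are hypothetical:

```python
import numpy as np

def chamfer_similarity(A, B):
    """Symmetric Chamfer-style similarity between two screenshot
    embedding sets (one row per screenshot). Illustrative only."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)   # L2-normalize rows
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    sims = A @ B.T                                     # pairwise cosine sims
    # match each screenshot to its best counterpart, in both directions
    return 0.5 * (sims.max(axis=1).mean() + sims.max(axis=0).mean())

# toy "apps": 3 screenshots each, 4-dim embeddings
rng = np.random.default_rng(0)
app_a = rng.normal(size=(3, 4))
app_b = app_a + 0.01 * rng.normal(size=(3, 4))  # near-duplicate of app_a
app_c = rng.normal(size=(3, 4))                 # unrelated app

sim_near = chamfer_similarity(app_a, app_b)
sim_far = chamfer_similarity(app_a, app_c)
```

Ranking candidate apps by such a score against a query app would give a simple app-to-app retrieval baseline; the paper's "mathematically founded" comparison may differ.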

* AI & HCI Workshop at the ICML 2023 

CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering

Aug 11, 2023
Hyeon Jeon, Ghulam Jilani Quadri, Hyunwook Lee, Paul Rosen, Danielle Albers Szafir, Jinwook Seo

Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.
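The learned separability regressor is not reproduced here; the sketch below swaps in a crude gap-to-spread heuristic (`pairwise_separability` is a hypothetical stand-in) purely to show how pairwise scores aggregate into a single ambiguity value:

```python
import numpy as np

def pairwise_separability(pts_a, pts_b):
    """Stand-in for CLAMS's trained regression module: returns a score
    in [0, 1], higher meaning the two clusters look more separated.
    Heuristic here: centroid gap relative to per-dimension spreads."""
    gap = np.linalg.norm(pts_a.mean(axis=0) - pts_b.mean(axis=0))
    spread = pts_a.std(axis=0).mean() + pts_b.std(axis=0).mean() + 1e-9
    return float(np.tanh(gap / spread))

def cluster_ambiguity(clusters):
    """Aggregate all pairwise separabilities; low average separability
    means the scatterplot's clustering is perceptually ambiguous."""
    seps = [pairwise_separability(clusters[i], clusters[j])
            for i in range(len(clusters))
            for j in range(i + 1, len(clusters))]
    return 1.0 - float(np.mean(seps))

rng = np.random.default_rng(1)
tight = [rng.normal(loc=c, scale=0.2, size=(30, 2))
         for c in ([0, 0], [5, 5], [0, 5])]    # well separated
loose = [rng.normal(loc=c, scale=1.5, size=(30, 2))
         for c in ([0, 0], [1, 1], [0, 1])]    # heavily overlapping
```

Well-separated clusters should yield low ambiguity and overlapping ones high ambiguity; CLAMS replaces the heuristic with a regressor fit to human separability judgments.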

* IEEE Transactions on Visualization and Computer Graphics (TVCG) (Proc. IEEE VIS 2023); equally contributed by Hyeon Jeon and Ghulam Jilani Quadri 

ZADU: A Python Library for Evaluating the Reliability of Dimensionality Reduction Embeddings

Aug 11, 2023
Hyeon Jeon, Aeri Cho, Jinhwa Jang, Soohyun Lee, Jake Hyun, Hyung-Kwon Ko, Jaemin Jo, Jinwook Seo

Dimensionality reduction (DR) techniques inherently distort the original structure of input high-dimensional data, producing imperfect low-dimensional embeddings. Diverse distortion measures have thus been proposed to evaluate the reliability of DR embeddings. However, implementing and executing distortion measures in practice has so far been time-consuming and tedious. To address this issue, we present ZADU, a Python library that provides distortion measures. ZADU is not only easy to install and execute but also enables comprehensive evaluation of DR embeddings through three key features. First, the library covers a wide range of distortion measures. Second, it automatically optimizes the execution of distortion measures, substantially reducing the running time required to execute multiple measures. Last, the library reports how individual points contribute to the overall distortion, facilitating detailed analysis of DR embeddings. By simulating a real-world scenario of optimizing DR embeddings, we verify that our optimization scheme substantially reduces the time required to execute distortion measures. Finally, as an application of ZADU, we present another library called ZADUVis that allows users to easily create distortion visualizations that depict the extent to which each region of an embedding suffers from distortions.
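ZADU's real API is not shown in this summary; the numpy-only sketch below illustrates the third feature with normalized stress as a stand-in measure, returning both the overall distortion and each point's share of it:

```python
import numpy as np

def pointwise_stress(high, low):
    """Normalized metric stress between high-dimensional data and its
    embedding, plus each point's share of the total distortion."""
    def pdist(x):
        diff = x[:, None, :] - x[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))
    dh, dl = pdist(high), pdist(low)
    err = (dh - dl) ** 2
    per_point = err.sum(axis=1)                   # row-wise contributions
    total = per_point.sum() / (dh ** 2).sum()     # normalized stress
    return total, per_point / (per_point.sum() + 1e-12)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 10))
stress_id, _ = pointwise_stress(X, X.copy())         # identity: no distortion
stress_cut, contrib = pointwise_stress(X, X[:, :2])  # drop 8 dims: distortion
```

The contribution vector is the kind of per-point signal a ZADUVis-style view would map onto the embedding to show which regions suffer most.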

* 2023 IEEE Visualization and Visual Analytics (IEEE VIS 2023) Short paper 

Classes are not Clusters: Improving Label-based Evaluation of Dimensionality Reduction

Aug 11, 2023
Hyeon Jeon, Yun-Hsin Kuo, Michaël Aupetit, Kwan-Liu Ma, Jinwook Seo

A common way to evaluate the reliability of dimensionality reduction (DR) embeddings is to quantify how well labeled classes form compact, mutually separated clusters in the embeddings. This approach is based on the assumption that the classes stay as clear clusters in the original high-dimensional space. However, in reality, this assumption can be violated; a single class can be fragmented into multiple separated clusters, and multiple classes can be merged into a single cluster. We thus cannot always assure the credibility of the evaluation using class labels. In this paper, we introduce two novel quality measures -- Label-Trustworthiness and Label-Continuity (Label-T&C) -- advancing the process of DR evaluation based on class labels. Instead of assuming that classes are well-clustered in the original space, Label-T&C work by (1) estimating the extent to which classes form clusters in the original and embedded spaces and (2) evaluating the difference between the two. A quantitative evaluation showed that Label-T&C outperform widely used DR evaluation measures (e.g., Trustworthiness and Continuity, Kullback-Leibler divergence) in terms of the accuracy in assessing how well DR embeddings preserve the cluster structure, and are also scalable. Moreover, we present case studies demonstrating that Label-T&C can be successfully used for revealing the intrinsic characteristics of DR techniques and their hyperparameters.
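As an illustrative sketch only: the paper's cluster-ness estimation is more principled, but the two-step idea — score how clustered each class is in both spaces, then compare — can be mimicked with a simple within-to-between distance ratio (`class_compactness` and `label_tc_gap` are made-up names):

```python
import numpy as np

def class_compactness(X, labels):
    """Toy clusteredness score per class: mean within-class distance
    over mean distance to all other points (lower = tighter class)."""
    scores = {}
    for c in np.unique(labels):
        inside, outside = X[labels == c], X[labels != c]
        within = np.mean([np.linalg.norm(p - q) for p in inside for q in inside])
        between = np.mean([np.linalg.norm(p - q) for p in inside for q in outside])
        scores[c] = within / between
    return scores

def label_tc_gap(X_high, X_low, labels):
    """Compare per-class clusteredness before vs. after DR; small gaps
    mean the embedding preserved whatever structure the classes had."""
    hi = class_compactness(X_high, labels)
    lo = class_compactness(X_low, labels)
    return {c: abs(hi[c] - lo[c]) for c in hi}

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5))
y = np.array([0] * 10 + [1] * 10)
gaps = label_tc_gap(X, X, y)   # identity "embedding": zero gap everywhere
```

Note the key difference from conventional label-based measures: a class that is not clustered in the original space is not penalized, as long as the embedding reflects that same structure.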

* IEEE Transactions on Visualization and Computer Graphics (TVCG) (Proc. IEEE VIS 2023) 

Sanity Check for External Clustering Validation Benchmarks using Internal Validation Measures

Sep 20, 2022
Hyeon Jeon, Michael Aupetit, DongHwa Shin, Aeri Cho, Seokhyeon Park, Jinwook Seo

We address the lack of reliability in benchmarking clustering techniques based on labeled datasets. A standard scheme in external clustering validation is to use class labels as ground truth clusters, based on the assumption that each class forms a single, clearly separated cluster. However, this cluster-label matching (CLM) assumption often breaks, and the absence of a sanity check on the CLM of benchmark datasets casts doubt on the validity of external validations. Still, evaluating the degree of CLM is challenging. For example, internal clustering validation measures can quantify CLM within a single dataset to compare its different clusterings, but they are not designed to compare clusterings of different datasets. In this work, we propose a principled way to generate between-dataset internal measures that enable the comparison of CLM across datasets. We first determine four axioms for between-dataset internal measures, complementing Ackerman and Ben-David's within-dataset axioms. We then propose processes to generalize internal measures to fulfill these new axioms, and use them to extend the widely used Calinski-Harabasz index for between-dataset CLM evaluation. Through quantitative experiments, we (1) verify the validity and necessity of the generalization processes and (2) show that the proposed between-dataset Calinski-Harabasz index accurately evaluates CLM across datasets. Finally, we demonstrate the importance of evaluating the CLM of benchmark datasets before conducting external validation.
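For concreteness, the classic within-dataset Calinski-Harabasz index that the paper generalizes looks like this (numpy sketch; the between-dataset extension itself is not reproduced here):

```python
import numpy as np

def calinski_harabasz(X, labels):
    """Classic Calinski-Harabasz index: between-cluster dispersion over
    within-cluster dispersion, each scaled by its degrees of freedom."""
    classes = np.unique(labels)
    n, k = len(X), len(classes)
    overall = X.mean(axis=0)
    between = within = 0.0
    for c in classes:
        pts = X[labels == c]
        centroid = pts.mean(axis=0)
        between += len(pts) * np.sum((centroid - overall) ** 2)
        within += np.sum((pts - centroid) ** 2)
    return (between / (k - 1)) / (within / (n - k))

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=[0, 0], scale=0.5, size=(25, 2)),
               rng.normal(loc=[8, 8], scale=0.5, size=(25, 2))])
good = np.array([0] * 25 + [1] * 25)   # labels match the two blobs (good CLM)
bad = rng.permutation(good)            # labels ignore the blobs (broken CLM)
```

A high score under the true labels and a low score under shuffled ones is exactly the kind of CLM signal the proposed sanity check looks for, with the paper's extension making such scores comparable across datasets.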

* Datasets available at https://github.com/hj-n/labeled-datasets 

Uniform Manifold Approximation with Two-phase Optimization

May 01, 2022
Hyeon Jeon, Hyung-Kwon Ko, Soohyun Lee, Jaemin Jo, Jinwook Seo

We introduce Uniform Manifold Approximation with Two-phase Optimization (UMATO), a dimensionality reduction (DR) technique that improves UMAP to capture the global structure of high-dimensional data more accurately. In UMATO, optimization is divided into two phases so that the resulting embeddings can depict the global structure reliably while preserving the local structure with sufficient accuracy. In the first phase, hub points are identified and projected to construct a skeletal layout for the global structure. In the second phase, the remaining points are added to the embedding, preserving the regional characteristics of local areas. Through quantitative experiments, we found that UMATO (1) outperformed widely used DR techniques in preserving the global structure while (2) producing competitive accuracy in representing the local structure. We also verified that UMATO is more robust across diverse initialization methods, numbers of epochs, and subsampling techniques.
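A toy sketch of the two-phase idea only — not UMATO's actual optimization: phase 1 embeds hub points via a caller-supplied routine (a naive coordinate truncation stands in for it here), and phase 2 places each remaining point at the distance-weighted average of the hubs' embedded positions:

```python
import numpy as np

def two_phase_embed(X, hub_idx, embed_hubs):
    """Phase 1: embed only the hubs to fix a global skeleton.
    Phase 2: place every other point inside that skeleton, weighted
    by its proximity to each hub in the original space."""
    hubs = X[hub_idx]
    skeleton = embed_hubs(hubs)                 # phase 1: global layout
    Y = np.zeros((len(X), skeleton.shape[1]))
    Y[hub_idx] = skeleton
    hub_set = set(int(i) for i in hub_idx)
    for i in range(len(X)):                     # phase 2: local fill-in
        if i in hub_set:
            continue
        d = np.linalg.norm(hubs - X[i], axis=1)
        w = 1.0 / (d + 1e-9)
        Y[i] = (w[:, None] * skeleton).sum(axis=0) / w.sum()
    return Y

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 6))
hub_idx = np.arange(0, 40, 5)                        # every 5th point as a hub
Y = two_phase_embed(X, hub_idx, lambda h: h[:, :2])  # truncation stand-in
```

Because the hubs are laid out first and never moved in phase 2, the global skeleton cannot be distorted by the later local placement, which is the intuition behind splitting the optimization.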

* Under review. Hyeon Jeon and Hyung-Kwon Ko equally contributed to this work 

Exploring the Effects of AI-assisted Emotional Support Processes in Online Mental Health Community

Feb 21, 2022
Donghoon Shin, Subeen Park, Esther Hehsun Kim, Soomin Kim, Jinwook Seo, Hwajung Hong

Social support in online mental health communities (OMHCs) is an effective and accessible way of managing mental wellbeing. In this process, sharing emotional support is considered crucial to thriving social support in OMHCs, yet it is often difficult for both seekers and providers. To support empathetic interactions, we design an AI-infused workflow that allows users to write emotionally supportive messages in response to other users' posts, based on the seeker's emotion and contextual keywords elicited from their writing. Through a preliminary user study (N = 10), we found that the system helped seekers clarify their emotions and describe their situations concretely while writing a post. Providers could also learn how to react empathetically to the post. Based on these results, we suggest design implications for our proposed system.

* Accepted at CHI 2022 (Late-Breaking Work) 