Inho Kang

A Versatile Framework for Evaluating Ranked Lists in terms of Group Fairness and Relevance

Apr 01, 2022
Tetsuya Sakai, Jin Young Kim, Inho Kang


What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers

Sep 10, 2021
Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, Woomyoung Park, Nako Sung


Self-supervised pre-training and contrastive representation learning for multiple-choice video QA

Sep 17, 2020
Seonhoon Kim, Seohyeong Jeong, Eunbyul Kim, Inho Kang, Nojun Kwak


Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information

Nov 02, 2018
Seonhoon Kim, Inho Kang, Nojun Kwak
