I-Chun Chern

FELM: Benchmarking Factuality Evaluation of Large Language Models
Oct 01, 2023
Shiqi Chen, Yiran Zhao, Jinghan Zhang, I-Chun Chern, Siyang Gao, Pengfei Liu, Junxian He

FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
Jul 26, 2023
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu

Improving Factuality of Abstractive Summarization via Contrastive Reward Learning
Jul 10, 2023
I-Chun Chern, Zhiruo Wang, Sanjan Das, Bhavuk Sharma, Pengfei Liu, Graham Neubig

Audio-Visual Speech Enhancement and Separation by Leveraging Multi-Modal Self-Supervised Embeddings
Oct 31, 2022
I-Chun Chern, Kuo-Hsuan Hung, Yi-Ting Chen, Tassadaq Hussain, Mandar Gogate, Amir Hussain, Yu Tsao, Jen-Cheng Hou
