"Text": models, code, and papers

Eye-SpatialNet: Spatial Information Extraction from Ophthalmology Notes
May 19, 2023
Surabhi Datta, Tasneem Kaochar, Hio Cheng Lam, Nelly Nwosu, Luca Giancardo, Alice Z. Chuang, Robert M. Feldman, Kirk Roberts

Phonetic and Prosody-aware Self-supervised Learning Approach for Non-native Fluency Scoring
May 19, 2023
Kaiqi Fu, Shaojun Gao, Shuju Shi, Xiaohai Tian, Wei Li, Zejun Ma

TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks
May 19, 2023
Shubhra Kanti Karmaker Santu, Dongji Feng

Target-Aware Spatio-Temporal Reasoning via Answering Questions in Dynamics Audio-Visual Scenarios
May 21, 2023
Yuanyuan Jiang, Jianqin Yin

PINA: Leveraging Side Information in eXtreme Multi-label Classification via Predicted Instance Neighborhood Aggregation
May 21, 2023
Eli Chien, Jiong Zhang, Cho-Jui Hsieh, Jyun-Yu Jiang, Wei-Cheng Chang, Olgica Milenkovic, Hsiang-Fu Yu

GMD: Controllable Human Motion Synthesis via Guided Diffusion Models
May 21, 2023
Korrawe Karunratanakul, Konpat Preechakul, Supasorn Suwajanakorn, Siyu Tang

LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
Apr 28, 2023
Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, Yu Qiao

Self Information Update for Large Language Models through Mitigating Exposure Bias
May 29, 2023
Pengfei Yu, Heng Ji

GripRank: Bridging the Gap between Retrieval and Generation via the Generative Knowledge Improved Passage Ranking
May 29, 2023
Jiaqi Bai, Hongcheng Guo, Jiaheng Liu, Jian Yang, Xinnian Liang, Zhao Yan, Zhoujun Li

Exploiting Explainability to Design Adversarial Attacks and Evaluate Attack Resilience in Hate-Speech Detection Models
May 29, 2023
Pranath Reddy Kumbam, Sohaib Uddin Syed, Prashanth Thamminedi, Suhas Harish, Ian Perera, Bonnie J. Dorr
