Jing Ma

Cracking the Code of Juxtaposition: Can AI Models Understand the Humorous Contradictions

May 29, 2024

Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom

May 06, 2024

CofiPara: A Coarse-to-fine Paradigm for Multimodal Sarcasm Target Identification with Large Multimodal Models

May 01, 2024

MMCode: Evaluating Multi-Modal Code Large Language Models with Visually Rich Programming Problems

Apr 15, 2024

AI WALKUP: A Computer-Vision Approach to Quantifying MDS-UPDRS in Parkinson's Disease

Apr 02, 2024

Towards Explainable Harmful Meme Detection through Multimodal Debate between Large Language Models

Jan 24, 2024

GOAT-Bench: Safety Insights to Large Multimodal Models through Meme-Based Social Abuse

Jan 07, 2024

Beneath the Surface: Unveiling Harmful Memes with Multimodal Reasoning Distilled from Large Language Models

Dec 09, 2023

WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom

Oct 25, 2023

Fair Few-shot Learning with Auxiliary Sets

Aug 28, 2023