James Cheng

MESH -- Understanding Videos Like Human: Measuring Hallucinations in Large Video Models
Sep 10, 2025

Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models
May 22, 2025

On the Thinking-Language Modeling Gap in Large Language Models
May 19, 2025

DivIL: Unveiling and Addressing Over-Invariance for Out-of-Distribution Generalization
Feb 18, 2025

Oversmoothing as Loss of Sign: Towards Structural Balance in Graph Neural Networks
Feb 17, 2025

BrainOOD: Out-of-distribution Generalizable Brain Network Analysis
Feb 02, 2025

HIGHT: Hierarchical Graph Tokenization for Graph-Language Alignment
Jun 20, 2024

How Interpretable Are Interpretable Graph Neural Networks?
Jun 12, 2024

Discovery of the Hidden World with Large Language Models
Feb 06, 2024

Enhancing Neural Subset Selection: Integrating Background Information into Set Representations
Feb 05, 2024