
Jingqi Tong

OMIBench: Benchmarking Olympiad-Level Multi-Image Reasoning in Large Vision-Language Models

Apr 22, 2026

AI Can Learn Scientific Taste

Mar 15, 2026

SciAgentGym: Benchmarking Multi-Step Scientific Tool-use in LLM Agents

Feb 13, 2026

MOVA: Towards Scalable and Synchronized Video-Audio Generation

Feb 09, 2026

OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment

Jan 04, 2026

Thinking with Video: Video Generation as a Promising Multimodal Reasoning Paradigm

Nov 06, 2025

LLMEval-3: A Large-Scale Longitudinal Study on Robust and Fair Evaluation of Large Language Models

Aug 07, 2025

LLMEval-Med: A Real-world Clinical Benchmark for Medical LLMs with Physician Validation

Jun 04, 2025

Code2Logic: Game-Code-Driven Data Synthesis for Enhancing VLMs General Reasoning

May 20, 2025

Predicting Large Language Model Capabilities on Closed-Book QA Tasks Using Only Information Available Prior to Training

Feb 06, 2025