
Zhiqian Li

LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training

Oct 16, 2025

MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI

Apr 24, 2024

OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models

Aug 25, 2023