Wei Chen

Soochow University

Seed-ASR: Understanding Diverse Speech and Contexts with LLM-based Speech Recognition

Jul 05, 2024

CURLS: Causal Rule Learning for Subgroups with Significant Treatment Effect

Jul 01, 2024

Octo-planner: On-device Language Model for Planner-Action Agents

Jun 26, 2024

Evaluating Implicit Bias in Large Language Models by Attacking From a Psychometric Perspective

Jun 20, 2024

Technique Report of CVPR 2024 PBDL Challenges

Jun 15, 2024

CoMM: A Coherent Interleaved Image-Text Dataset for Multimodal Understanding and Generation

Jun 15, 2024

Common and Rare Fundus Diseases Identification Using Vision-Language Foundation Model with Knowledge of Over 400 Diseases

Jun 13, 2024

MobileAgentBench: An Efficient and User-Friendly Benchmark for Mobile LLM Agents

Jun 12, 2024

CLDTA: Contrastive Learning based on Diagonal Transformer Autoencoder for Cross-Dataset EEG Emotion Recognition

Jun 12, 2024

VulDetectBench: Evaluating the Deep Capability of Vulnerability Detection with Large Language Models

Jun 11, 2024