Dacheng Tao

JD Explore Academy, JD.com, China

CFinBench: A Comprehensive Chinese Financial Benchmark for Large Language Models

Jul 02, 2024

Learning System Dynamics without Forgetting

Jun 30, 2024

GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing

Jun 30, 2024

Iterative Data Augmentation with Large Language Models for Aspect-based Sentiment Analysis

Jun 29, 2024

Diffusion Model-Based Video Editing: A Survey

Jun 26, 2024

PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions

Jun 20, 2024

A Survey of Multimodal-Guided Image Editing with Text-to-Image Diffusion Models

Jun 20, 2024

HyperSIGMA: Hyperspectral Intelligence Comprehension Foundation Model

Jun 17, 2024

Aligning Large Language Models from Self-Reference AI Feedback with one General Principle

Jun 17, 2024

Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks

Jun 10, 2024