Dacheng Tao and Other Contributors

Revisiting Catastrophic Forgetting in Large Language Model Tuning

Jun 07, 2024

Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt

Jun 06, 2024

LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions

Jun 04, 2024

A Comprehensive Survey on Underwater Image Enhancement Based on Deep Learning

May 30, 2024

HarmoDT: Harmony Multi-Task Decision Transformer for Offline Reinforcement Learning

May 28, 2024

Q-value Regularized Transformer for Offline Reinforcement Learning

May 27, 2024

Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models

May 26, 2024

Learning Multi-Agent Communication from Graph Modeling Perspective

May 14, 2024

Separable Power of Classical and Quantum Learning Protocols Through the Lens of No-Free-Lunch Theorem

May 12, 2024

LLM-QBench: A Benchmark Towards the Best Practice for Post-training Quantization of Large Language Models

May 09, 2024