Sheng Wang

FM-TS: Flow Matching for Time Series Generation

Nov 12, 2024

Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration

Oct 22, 2024

ProReason: Multi-Modal Proactive Reasoning with Decoupled Eyesight and Wisdom

Oct 18, 2024

MlingConf: A Comprehensive Study of Multilingual Confidence Estimation on Large Language Models

Oct 16, 2024

MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models

Oct 16, 2024

Unleashing the Power of LLMs as Multi-Modal Encoders for Text and Graph-Structured Data

Oct 15, 2024

QSpec: Speculative Decoding with Complementary Quantization Schemes

Oct 15, 2024

MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards

Oct 01, 2024

How Far Can Cantonese NLP Go? Benchmarking Cantonese Capabilities of Large Language Models

Aug 29, 2024

OCTCube: A 3D foundation model for optical coherence tomography that improves cross-dataset, cross-disease, cross-device and cross-modality analysis

Aug 20, 2024