Wentao Ma

FlowBench: Revisiting and Benchmarking Workflow-Guided Planning for LLM-based Agents

Jun 21, 2024

PTA: Enhancing Multimodal Sentiment Analysis through Pipelined Prediction and Translation-based Alignment

May 23, 2024

Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models

Sep 22, 2023

UniPCM: Universal Pre-trained Conversation Model with Task-aware Automatic Prompt

Sep 20, 2023

SpokenWOZ: A Large-Scale Speech-Text Benchmark for Spoken Task-Oriented Dialogue in Multiple Domains

May 22, 2023

Speech-Text Dialog Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment

May 19, 2023

Gate Recurrent Unit Network based on Hilbert-Schmidt Independence Criterion for State-of-Health Estimation

Mar 16, 2023

Bilingual Alignment Pre-training for Zero-shot Cross-lingual Transfer

Jun 03, 2021

CharBERT: Character-aware Pre-trained Language Model

Nov 03, 2020

Benchmarking Robustness of Machine Reading Comprehension Models

Apr 29, 2020