Simiao Zuo

Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning

Jan 25, 2024

SMURF-THP: Score Matching-based UnceRtainty quantiFication for Transformer Hawkes Process

Oct 25, 2023

Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing

Oct 20, 2023

Robust Multi-Agent Reinforcement Learning via Adversarial Regularization: Theoretical Foundation and Stable Algorithms

Oct 16, 2023

DeepTagger: Knowledge Enhanced Named Entity Recognition for Web-Based Ads Queries

Jun 30, 2023

Machine Learning Force Fields with Data Cost Aware Training

Jun 05, 2023

Efficient Long Sequence Modeling via State Space Augmented Transformer

Dec 15, 2022

Less is More: Task-aware Layer-wise Distillation for Language Model Compression

Oct 05, 2022

Context-Aware Query Rewriting for Improving Users' Search Experience on E-commerce Websites

Sep 24, 2022

DiP-GNN: Discriminative Pre-Training of Graph Neural Networks

Sep 15, 2022