Weiqin Wang

SemPA: Improving Sentence Embeddings of Large Language Models through Semantic Preference Alignment

Jan 08, 2026

Beyond Majority Voting: Towards Fine-grained and More Reliable Reward Signal for Test-Time Reinforcement Learning

Dec 18, 2025

Ranked Voting based Self-Consistency of Large Language Models

May 16, 2025

SkiM: Skipping Memory LSTM for Low-Latency Real-Time Continuous Speech Separation

Feb 10, 2022

Practical Benefits of Feature Feedback Under Distribution Shift

Oct 14, 2021