Zhaofeng Wu

Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment

Apr 18, 2024

A Taxonomy of Ambiguity Types for NLP

Mar 21, 2024

Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment

Feb 29, 2024

Universal Deoxidation of Semiconductor Substrates Assisted by Machine-Learning and Real-Time-Feedback-Control

Dec 04, 2023

Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks

Aug 01, 2023

Machine-Learning-Assisted and Real-Time-Feedback-Controlled Growth of InAs/GaAs Quantum Dots

Jul 07, 2023

We're Afraid Language Models Aren't Modeling Ambiguity

Apr 27, 2023

Continued Pretraining for Better Zero- and Few-Shot Promptability

Oct 19, 2022

Modeling Context With Linear Attention for Scalable Document-Level Translation

Oct 16, 2022

Transparency Helps Reveal When Language Models Learn Meaning

Oct 14, 2022