Felix Yu

Large Language Models are Interpretable Learners

Jun 25, 2024

Efficient Document Ranking with Learnable Late Interactions

Jun 25, 2024

Metric-aware LLM inference

Mar 07, 2024

ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent

Dec 15, 2023

SpecTr: Fast Speculative Decoding via Optimal Transport

Oct 23, 2023

Large Language Models with Controllable Working Memory

Nov 09, 2022

Preserving In-Context Learning ability in Large Language Model Fine-tuning

Nov 01, 2022

Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers

Oct 12, 2022

FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning

Jul 20, 2022

Correlated quantization for distributed mean estimation and optimization

Mar 09, 2022