Rui Yan

Unlocking Decoding-time Controllability: Gradient-Free Multi-Objective Alignment with Contrastive Prompts

Aug 09, 2024

Towards Effective and Efficient Continual Pre-training of Large Language Models

Jul 26, 2024

Graph-Structured Speculative Decoding

Jul 23, 2024

Exploiting Pre-trained Models for Drug Target Affinity Prediction with Nearest Neighbors

Jul 21, 2024

Mixture-of-Modules: Reinventing Transformers as Dynamic Assemblies of Modules

Jul 09, 2024

Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents

Jul 01, 2024

YuLan: An Open-source Large Language Model

Jun 28, 2024

Mixture of In-Context Experts Enhance LLMs' Long Context Awareness

Jun 28, 2024

From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis

Jun 28, 2024

3D-MolT5: Towards Unified 3D Molecule-Text Modeling with 3D Molecular Tokenization

Jun 09, 2024