
Zonglin Li

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent

Dec 15, 2023

ResMem: Learn what you can and memorize the rest

Feb 03, 2023

Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers

Oct 12, 2022

Decoupled Context Processing for Context Augmented Language Modeling

Oct 11, 2022