Silvio Savarese

Asynchronous Tool Usage for Real-Time Agents
Oct 28, 2024

PRACT: Optimizing Principled Reasoning and Acting of LLM Agent
Oct 24, 2024

xGen-MM-Vid (BLIP-3-Video): You Only Need 32 Tokens to Represent a Video Even in VLMs
Oct 21, 2024

GIFT-Eval: A Benchmark For General Time Series Forecasting Model Evaluation
Oct 14, 2024

Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts
Oct 14, 2024

SFR-RAG: Towards Contextually Faithful LLMs
Sep 16, 2024

xLAM: A Family of Large Action Models to Empower AI Agent Systems
Sep 05, 2024

xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations
Aug 22, 2024

xGen-MM (BLIP-3): A Family of Open Large Multimodal Models
Aug 16, 2024

Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents
Aug 13, 2024