Thierry Tambe

Vision-Language Alignment from Compressed Image Representations using 2D Gaussian Splatting
Sep 26, 2025

Token Sequence Compression for Efficient Multimodal Computing
Apr 24, 2025

BlockDialect: Block-wise Fine-grained Mixed Format for Energy-Efficient LLM Inference
Jan 03, 2025

VaPr: Variable-Precision Tensors to Accelerate Robot Motion Planning
Oct 11, 2023

CAMEL: Co-Designing AI Models and Embedded DRAMs for Efficient On-Device Learning
May 04, 2023

AutoSoC: Automating Algorithm-SOC Co-design for Aerial Robots
Sep 13, 2021

Quantifying and Maximizing the Benefits of Back-End Noise Adaption on Attention-Based Speech Recognition Models
May 03, 2021

EdgeBERT: Optimizing On-Chip Inference for Multi-Task NLP
Dec 01, 2020

AdaptivFloat: A Floating-point based Data Type for Resilient Deep Learning Inference
Oct 15, 2019