
Thierry Tambe

SemanticDialect: Semantic-Aware Mixed-Format Quantization for Video Diffusion Transformers

Mar 03, 2026

LLM-FSM: Scaling Large Language Models for Finite-State Reasoning in RTL Code Generation

Feb 03, 2026

P3-LLM: An Integrated NPU-PIM Accelerator for LLM Inference Using Hybrid Numerical Formats

Nov 16, 2025

Vision-Language Alignment from Compressed Image Representations using 2D Gaussian Splatting

Sep 26, 2025

Token Sequence Compression for Efficient Multimodal Computing

Apr 24, 2025

BlockDialect: Block-wise Fine-grained Mixed Format for Energy-Efficient LLM Inference

Jan 03, 2025

VaPr: Variable-Precision Tensors to Accelerate Robot Motion Planning

Oct 11, 2023

CAMEL: Co-Designing AI Models and Embedded DRAMs for Efficient On-Device Learning

May 04, 2023

AutoSoC: Automating Algorithm-SOC Co-design for Aerial Robots

Sep 13, 2021

Quantifying and Maximizing the Benefits of Back-End Noise Adaption on Attention-Based Speech Recognition Models

May 03, 2021