Jamie Shotton

WayveScenes101: A Dataset and Benchmark for Novel View Synthesis in Autonomous Driving
Jul 11, 2024

CarLLaVA: Vision language models for camera-only closed-loop driving
Jun 14, 2024

LangProp: A code optimization framework using Language Models applied to driving
Jan 18, 2024

LingoQA: Video Question Answering for Autonomous Driving
Dec 21, 2023

Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving
Oct 13, 2023

GAIA-1: A Generative World Model for Autonomous Driving
Sep 29, 2023

Linking vision and motion for self-supervised object-centric perception
Jul 14, 2023

Model-Based Imitation Learning for Urban Driving
Oct 14, 2022

Fake It Till You Make It: Face analysis in the wild using synthetic data alone
Oct 05, 2021

FastNeRF: High-Fidelity Neural Rendering at 200FPS
Apr 15, 2021