Angjoo Kanazawa

Decoupling Human and Camera Motion from Videos in the Wild

Mar 20, 2023
Vickie Ye, Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa


LERF: Language Embedded Radiance Fields

Mar 16, 2023
Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik


Nerfstudio: A Modular Framework for Neural Radiance Field Development

Feb 08, 2023
Matthew Tancik, Ethan Weber, Evonne Ng, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa


K-Planes: Explicit Radiance Fields in Space, Time, and Appearance

Jan 24, 2023
Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht, Angjoo Kanazawa


Monocular Dynamic View Synthesis: A Reality Check

Oct 24, 2022
Hang Gao, Ruilong Li, Shubham Tulsiani, Bryan Russell, Angjoo Kanazawa


NerfAcc: A General NeRF Acceleration Toolbox

Oct 10, 2022
Ruilong Li, Matthew Tancik, Angjoo Kanazawa


Studying Bias in GANs through the Lens of Race

Sep 15, 2022
Vongani H. Maluleke, Neerja Thakkar, Tim Brooks, Ethan Weber, Trevor Darrell, Alexei A. Efros, Angjoo Kanazawa, Devin Guillory


The One Where They Reconstructed 3D Humans and Environments in TV Shows

Jul 28, 2022
Georgios Pavlakos, Ethan Weber, Matthew Tancik, Angjoo Kanazawa
