Cihang Xie

University of California, Santa Cruz

An Inverse Scaling Law for CLIP Training

May 11, 2023
Xianhang Li, Zeyu Wang, Cihang Xie


Diffusion Models as Masked Autoencoders

Apr 06, 2023
Chen Wei, Karttikeya Mangalam, Po-Yao Huang, Yanghao Li, Haoqi Fan, Hu Xu, Huiyu Wang, Cihang Xie, Alan Yuille, Christoph Feichtenhofer


On the Adversarial Robustness of Camera-based 3D Object Detection

Jan 25, 2023
Shaoyuan Xie, Zichao Li, Zeyu Wang, Cihang Xie


Benchmarking Robustness in Neural Radiance Fields

Jan 10, 2023
Chen Wang, Angtian Wang, Junbo Li, Alan Yuille, Cihang Xie


Unleashing the Power of Visual Prompting At the Pixel Level

Dec 20, 2022
Junyang Wu, Xianhang Li, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie


Navigation as the Attacker Wishes? Towards Building Byzantine-Robust Embodied Agents under Federated Learning

Dec 02, 2022
Yunchao Zhang, Zonglin Di, Kaiwen Zhou, Cihang Xie, Xin Eric Wang


Localization vs. Semantics: How Can Language Benefit Visual Representation Learning?

Dec 01, 2022
Zhuowan Li, Cihang Xie, Benjamin Van Durme, Alan Yuille


SMAUG: Sparse Masked Autoencoder for Efficient Video-Language Pre-training

Nov 30, 2022
Yuanze Lin, Chen Wei, Huiyu Wang, Alan Yuille, Cihang Xie


Finding Differences Between Transformers and ConvNets Using Counterfactual Simulation Testing

Nov 29, 2022
Nataniel Ruiz, Sarah Adel Bargal, Cihang Xie, Kate Saenko, Stan Sclaroff
