Kui Jia

On the Robustness of Open-World Test-Time Training: Self-Training with Dynamic Prototype Expansion

Aug 19, 2023
Yushu Li, Xun Xu, Yongyi Su, Kui Jia

VI-Net: Boosting Category-level 6D Object Pose Estimation via Learning Decoupled Rotations on the Spherical Representations

Aug 19, 2023
Jiehong Lin, Zewei Wei, Yabin Zhang, Kui Jia

ShuffleMix: Improving Representations via Channel-Wise Shuffle of Interpolated Hidden States

May 30, 2023
Kangjun Liu, Ke Chen, Lihua Guo, Yaowei Wang, Kui Jia

Improving Deep Representation Learning via Auxiliary Learnable Target Coding

May 30, 2023
Kangjun Liu, Ke Chen, Yaowei Wang, Kui Jia

Universal Domain Adaptation from Foundation Models

May 18, 2023
Bin Deng, Kui Jia

Manifold-Aware Self-Training for Unsupervised Domain Adaptation on Regressing 6D Object Pose

May 18, 2023
Yichen Zhang, Jiehong Lin, Ke Chen, Zelin Xu, Yaowei Wang, Kui Jia

STFAR: Improving Object Detection Robustness at Test-Time by Self-Training with Feature Alignment Regularization

Mar 31, 2023
Yijin Chen, Xun Xu, Yongyi Su, Kui Jia

Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation

Mar 28, 2023
Rui Chen, Yongwei Chen, Ningxin Jiao, Kui Jia

A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation

Mar 23, 2023
Hui Tang, Kui Jia