"Image": models, code, and papers

DETR with Additional Global Aggregation for Cross-domain Weakly Supervised Object Detection

Apr 14, 2023
Zongheng Tang, Yifan Sun, Si Liu, Yi Yang

BiTrackGAN: Cascaded CycleGANs to Constraint Face Aging

Apr 22, 2023
Tsung-Han Kuo, Zhenge Jia, Tei-Wei Kuo, Jingtong Hu

PTQD: Accurate Post-Training Quantization for Diffusion Models

May 18, 2023
Yefei He, Luping Liu, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang

Counterfactuals for Design: A Model-Agnostic Method For Design Recommendations

May 18, 2023
Lyle Regenwetter, Yazan Abu Obaideh, Faez Ahmed

A Comprehensive Survey on Segment Anything Model for Vision and Beyond

May 19, 2023
Chunhui Zhang, Li Liu, Yawen Cui, Guanjie Huang, Weilin Lin, Yiqian Yang, Yuehong Hu

Surgical-VQLA: Transformer with Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery

May 19, 2023
Long Bai, Mobarakol Islam, Lalithkumar Seenivasan, Hongliang Ren

AutoCoreset: An Automatic Practical Coreset Construction Framework

May 19, 2023
Alaa Maalouf, Murad Tukan, Vladimir Braverman, Daniela Rus

Transformer-based model for monocular visual odometry: a video understanding approach

May 10, 2023
André O. Françani, Marcos R. O. A. Maximo

The Image of the Process Interpretation of Regular Expressions is Not Closed under Bisimulation Collapse

Mar 15, 2023
Clemens Grabmayer

Focus on the Challenges: Analysis of a User-friendly Data Search Approach with CLIP in the Automotive Domain

Apr 21, 2023
Philipp Rigoll, Patrick Petersen, Hanno Stage, Lennart Ries, Eric Sax
