Ngan Le

FREDOM: Fairness Domain Adaptation Approach to Semantic Scene Understanding

Apr 04, 2023
Thanh-Dat Truong, Ngan Le, Bhiksha Raj, Jackson Cothren, Khoa Luu

Open-Vocabulary Affordance Detection in 3D Point Clouds

Mar 04, 2023
Toan Nguyen, Minh Nhat Vu, An Vuong, Dzung Nguyen, Thieu Vo, Ngan Le, Anh Nguyen

DRG-Net: Interactive Joint Learning of Multi-lesion Segmentation and Classification for Diabetic Retinopathy Grading

Dec 30, 2022
Hasan Md Tusfiqur, Duy M. H. Nguyen, Mai T. N. Truong, Triet A. Nguyen, Binh T. Nguyen, Michael Barz, Hans-Juergen Profitlich, Ngoc T. T. Than, Ngan Le, Pengtao Xie, Daniel Sonntag

Contextual Explainable Video Representation: Human Perception-based Understanding

Dec 17, 2022
Khoa Vo, Kashu Yamazaki, Phong X. Nguyen, Phat Nguyen, Khoa Luu, Ngan Le

CLIP-TSA: CLIP-Assisted Temporal Self-Attention for Weakly-Supervised Video Anomaly Detection

Dec 09, 2022
Hyekang Kevin Joo, Khoa Vo, Kashu Yamazaki, Ngan Le

VLTinT: Visual-Linguistic Transformer-in-Transformer for Coherent Video Paragraph Captioning

Nov 28, 2022
Kashu Yamazaki, Khoa Vo, Sang Truong, Bhiksha Raj, Ngan Le

Multi-Camera Multi-Object Tracking on the Move via Single-Stage Global Association Approach

Nov 17, 2022
Pha Nguyen, Kha Gia Quach, Chi Nhan Duong, Son Lam Phung, Ngan Le, Khoa Luu

AISFormer: Amodal Instance Segmentation with Transformer

Oct 13, 2022
Minh Tran, Khoa Vo, Kashu Yamazaki, Arthur Fernandes, Michael Kidd, Ngan Le
