
Pengfei Xu

ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation

Sep 07, 2020

Emotion-Based End-to-End Matching Between Image and Music in Valence-Arousal Space

Aug 22, 2020

Rethinking Distributional Matching Based Domain Adaptation

Jul 03, 2020

TIMELY: Pushing Data Movements and Interfaces in PIM Accelerators Towards Local and in Time Domain

May 03, 2020

Multi-source Domain Adaptation in the Deep Learning Era: A Systematic Survey

Feb 26, 2020

DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures

Feb 26, 2020

MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation

Feb 19, 2020

An End-to-End Visual-Audio Attention Network for Emotion Recognition in User-Generated Videos

Feb 12, 2020

AutoDNNchip: An Automated DNN Chip Predictor and Builder for Both FPGAs and ASICs

Jan 06, 2020

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference

Jan 03, 2020