Yu Yan

Harmonic and Interharmonic Detection in Power Systems Based on Fractal-Optimized Variational Mode Decomposition

May 16, 2024
Pei Yuhang, Yu Min, Yu Yan

Scene Summarization: Clustering Scene Videos into Spatially Diverse Frames

Nov 28, 2023
Chao Chen, Mingzhi Zhu, Ankush Pratap Singh, Yu Yan, Felix Juefei Xu, Chen Feng

Enhancing Mobile Face Anti-Spoofing: A Robust Framework for Diverse Attack Types under Screen Flash

Aug 29, 2023
Weihua Liu, Chaochao Lin, Yu Yan

Duet: efficient and scalable hybriD neUral rElation undersTanding

Jul 28, 2023
Kaixin Zhang, Hongzhi Wang, Yabin Lu, Ziqi Li, Chang Shu, Yu Yan, Donghua Yang

A Self-Paced Mixed Distillation Method for Non-Autoregressive Generation

May 23, 2022
Weizhen Qi, Yeyun Gong, Yelong Shen, Jian Jiao, Yu Yan, Houqiang Li, Ruofei Zhang, Weizhu Chen, Nan Duan

Factorisation-based Image Labelling

Nov 19, 2021
Yu Yan, Yael Balbastre, Mikael Brudfors, John Ashburner

FastSeq: Make Sequence Generation Faster

Jun 08, 2021
Yu Yan, Fei Hu, Jiusheng Chen, Nikhil Bhendawade, Ting Ye, Yeyun Gong, Nan Duan, Desheng Cui, Bingyu Chi, Ruifei Zhang

EL-Attention: Memory Efficient Lossless Attention for Generation

May 11, 2021
Yu Yan, Jiusheng Chen, Weizhen Qi, Nikhil Bhendawade, Yeyun Gong, Nan Duan, Ruofei Zhang

ProphetNet-X: Large-Scale Pre-training Models for English, Chinese, Multi-lingual, Dialog, and Code Generation

Apr 16, 2021
Weizhen Qi, Yeyun Gong, Yu Yan, Can Xu, Bolun Yao, Bartuer Zhou, Biao Cheng, Daxin Jiang, Jiusheng Chen, Ruofei Zhang, Houqiang Li, Nan Duan

BANG: Bridging Autoregressive and Non-autoregressive Generation with Large Scale Pretraining

Dec 31, 2020
Weizhen Qi, Yeyun Gong, Jian Jiao, Yu Yan, Dayiheng Liu, Weizhu Chen, Kewen Tang, Houqiang Li, Jiusheng Chen, Ruofei Zhang, Ming Zhou, Nan Duan
