Xiaokang Yang

A K-variate Time Series Is Worth K Words: Evolution of the Vanilla Transformer Architecture for Long-term Multivariate Time Series Forecasting

Dec 06, 2022
Zanwei Zhou, Ruizhe Zhong, Chen Yang, Yan Wang, Xiaokang Yang, Wei Shen

Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition

Oct 13, 2022
Shuai Jia, Bangjie Yin, Taiping Yao, Shouhong Ding, Chunhua Shen, Xiaokang Yang, Chao Ma

Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop

Oct 03, 2022
Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma

Perceptual Quality Assessment of Omnidirectional Images

Jul 06, 2022
Huiyu Duan, Guangtao Zhai, Xiongkuo Min, Yucheng Zhu, Yi Fang, Xiaokang Yang

A Survey on Label-efficient Deep Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction

Jul 04, 2022
Wei Shen, Zelin Peng, Xuehui Wang, Huayu Wang, Jiazhong Cen, Dongsheng Jiang, Lingxi Xie, Xiaokang Yang, Qi Tian

Isolating and Leveraging Controllable and Noncontrollable Visual Dynamics in World Models

May 27, 2022
Minting Pan, Xiangming Zhu, Yunbo Wang, Xiaokang Yang

DOTIN: Dropping Task-Irrelevant Nodes for GNNs

Apr 28, 2022
Shaofeng Zhang, Feng Zhu, Junchi Yan, Rui Zhao, Xiaokang Yang

Continual Predictive Learning from Videos

Apr 12, 2022
Geng Chen, Wendong Zhang, Han Lu, Siyu Gao, Yunbo Wang, Mingsheng Long, Xiaokang Yang

Confusing Image Quality Assessment: Towards Better Augmented Reality Experience

Apr 11, 2022
Huiyu Duan, Xiongkuo Min, Yucheng Zhu, Guangtao Zhai, Xiaokang Yang, Patrick Le Callet

Modeling Dynamic User Preference via Dictionary Learning for Sequential Recommendation

Apr 02, 2022
Chao Chen, Dongsheng Li, Junchi Yan, Xiaokang Yang
