Yu Qiao

MANTIS at TSAR-2022 Shared Task: Improved Unsupervised Lexical Simplification with Pretrained Encoders

Dec 19, 2022
Xiaofei Li, Daniel Wiechmann, Yu Qiao, Elma Kerz

(Psycho-)Linguistic Features Meet Transformer Models for Improved Explainable and Controllable Text Simplification

Dec 19, 2022
Yu Qiao, Xiaofei Li, Daniel Wiechmann, Elma Kerz

Exploring Hybrid and Ensemble Models for Multiclass Prediction of Mental Health Status on Social Media

Dec 19, 2022
Sourabh Zanwar, Daniel Wiechmann, Yu Qiao, Elma Kerz

Improving the Generalizability of Text-Based Emotion Detection by Leveraging Transformers with Psycholinguistic Features

Dec 19, 2022
Sourabh Zanwar, Daniel Wiechmann, Yu Qiao, Elma Kerz

Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders

Dec 13, 2022
Renrui Zhang, Liuhui Wang, Yu Qiao, Peng Gao, Hongsheng Li

Diff-Font: Diffusion Model for Robust One-Shot Font Generation

Dec 12, 2022
Haibin He, Xinyuan Chen, Chaoyue Wang, Juhua Liu, Bo Du, Dacheng Tao, Yu Qiao

InternVideo: General Video Foundation Models via Generative and Discriminative Learning

Dec 07, 2022
Yi Wang, Kunchang Li, Yizhuo Li, Yinan He, Bingkun Huang, Zhiyu Zhao, Hongjie Zhang, Jilan Xu, Yi Liu, Zun Wang, Sen Xing, Guo Chen, Junting Pan, Jiashuo Yu, Yali Wang, Limin Wang, Yu Qiao

Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE

Dec 04, 2022
Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu, Bo Du, Yixin Chen, Xinbo Gao, Chunyan Miao, Xiaoou Tang, Dacheng Tao

Improving Training and Inference of Face Recognition Models via Random Temperature Scaling

Dec 02, 2022
Lei Shang, Mouxiao Huang, Wu Shi, Yuchen Liu, Yang Liu, Fei Wang, Baigui Sun, Xuansong Xie, Yu Qiao

ResFormer: Scaling ViTs with Multi-Resolution Training

Dec 01, 2022
Rui Tian, Zuxuan Wu, Qi Dai, Han Hu, Yu Qiao, Yu-Gang Jiang
