Yibing Zhan

Chasing Consistency in Text-to-3D Generation from a Single Image
Sep 07, 2023
Yichen Ouyang, Wenhao Chai, Jiayi Ye, Dapeng Tao, Yibing Zhan, Gaoang Wang

Free-Form Composition Networks for Egocentric Action Recognition
Jul 13, 2023
Haoran Wang, Qinghua Cheng, Baosheng Yu, Yibing Zhan, Dapeng Tao, Liang Ding, Haibin Ling

On Exploring Node-feature and Graph-structure Diversities for Node Drop Graph Pooling
Jun 22, 2023
Chuang Liu, Yibing Zhan, Baosheng Yu, Liu Liu, Bo Du, Wenbin Hu, Tongliang Liu

Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking
Jun 01, 2023
Qingyue Wang, Liang Ding, Yanan Cao, Yibing Zhan, Zheng Lin, Shi Wang, Dacheng Tao, Li Guo

Noise-Resistant Multimodal Transformer for Emotion Recognition
May 04, 2023
Yuanyuan Liu, Haoyu Zhang, Yibing Zhan, Zijing Chen, Guanghao Yin, Lin Wei, Zhe Chen

Token Contrast for Weakly-Supervised Semantic Segmentation
Mar 02, 2023
Lixiang Ru, Heliang Zheng, Yibing Zhan, Bo Du

OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System
Mar 01, 2023
Chao Xue, Wei Liu, Shuai Xie, Zhenfang Wang, Jiaxing Li, Xuyang Peng, Liang Ding, Shanshan Zhao, Qiong Cao, Yibo Yang, Fengxiang He, Bohua Cai, Rongcheng Bian, Yiyan Zhao, Heliang Zheng, Xiangyang Liu, Dongkai Liu, Daqing Liu, Li Shen, Chang Li, Shijin Zhang, Yukang Zhang, Guanpu Chen, Shixiang Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, Dacheng Tao

Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE
Feb 18, 2023
Qihuang Zhong, Liang Ding, Keqin Peng, Juhua Liu, Bo Du, Li Shen, Yibing Zhan, Dacheng Tao

DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes
Feb 15, 2023
Shenghao Hao, Peiyuan Liu, Yibing Zhan, Kaixun Jin, Zuozhu Liu, Mingli Song, Jenq-Neng Hwang, Gaoang Wang

Original or Translated? On the Use of Parallel Data for Translation Quality Estimation
Dec 20, 2022
Baopu Qiu, Liang Ding, Di Wu, Lin Shang, Yibing Zhan, Dacheng Tao
