Zhengyu Chen

Learning to Reweight for Graph Neural Network

Dec 19, 2023
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, Fei Wu

Simple and Asymmetric Graph Contrastive Learning without Augmentations

Oct 29, 2023
Teng Xiao, Huaisheng Zhu, Zhengyu Chen, Suhang Wang

SwG-former: Sliding-window Graph Convolutional Network Integrated with Conformer for Sound Event Localization and Detection

Oct 21, 2023
Weiming Huang, Qinghua Huang, Liyan Ma, Zhengyu Chen, Chuan Wang

Let Models Speak Ciphers: Multiagent Debate through Embeddings

Oct 10, 2023
Chau Pham, Boyi Liu, Yingxiang Yang, Zhengyu Chen, Tianyi Liu, Jianbo Yuan, Bryan A. Plummer, Zhaoran Wang, Hongxia Yang

Learning How to Propagate Messages in Graph Neural Networks

Oct 01, 2023
Teng Xiao, Zhengyu Chen, Donglin Wang, Suhang Wang

PIE: Simulating Disease Progression via Progressive Image Editing

Sep 21, 2023
Kaizhao Liang, Xu Cao, Kuei-Da Liao, Tianren Gao, Zhengyu Chen, Tejas Nama

On the Tool Manipulation Capability of Open-source Large Language Models

May 25, 2023
Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, Jian Zhang

IDEAL: Toward High-efficiency Device-Cloud Collaborative and Dynamic Recommendation System

Feb 14, 2023
Zheqi Lv, Zhengyu Chen, Shengyu Zhang, Kun Kuang, Wenqiao Zhang, Mengze Li, Beng Chin Ooi, Fei Wu

MetaNetwork: A Task-agnostic Network Parameters Generation Framework for Improving Device Model Generalization

Sep 12, 2022
Zheqi Lv, Feng Wang, Kun Kuang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Fei Wu

Knowledge Distillation of Transformer-based Language Models Revisited

Jun 30, 2022
Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang
