Lei Cheng

To Fold or Not to Fold: Graph Regularized Tensor Train for Visual Data Completion

Jun 19, 2023
Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu


Can the Inference Logic of Large Language Models be Disentangled into Symbolic Concepts?

Apr 03, 2023
Wen Shen, Lei Cheng, Yuxiao Yang, Mingjie Li, Quanshi Zhang


GDOD: Effective Gradient Descent using Orthogonal Decomposition for Multi-Task Learning

Jan 31, 2023
Xin Dong, Ruize Wu, Chao Xiong, Hai Li, Lei Cheng, Yong He, Shiyou Qian, Jian Cao, Linjian Mo


Output-Dependent Gaussian Process State-Space Model

Dec 15, 2022
Zhidi Lin, Lei Cheng, Feng Yin, Lexi Xu, Shuguang Cui


ChordMixer: A Scalable Neural Attention Model for Sequences with Different Lengths

Jun 12, 2022
Ruslan Khalitov, Tong Yu, Lei Cheng, Zhirong Yang


Reconfigurable Intelligent Surface-Aided 6G Massive Access: Coupled Tensor Modeling and Sparse Bayesian Learning

Jun 11, 2022
Xiaodan Shao, Lei Cheng, Xiaoming Chen, Chongwen Huang, Derrick Wing Kwan Ng


Rethinking Bayesian Learning for Data Analysis: The Art of Prior and Inference in Sparsity-Aware Modeling

May 28, 2022
Lei Cheng, Feng Yin, Sergios Theodoridis, Sotirios Chatzis, Tsung-Hui Chang


Paramixer: Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention

Apr 22, 2022
Tong Yu, Ruslan Khalitov, Lei Cheng, Zhirong Yang


Downlink Channel Covariance Matrix Reconstruction for FDD Massive MIMO Systems with Limited Feedback

Apr 02, 2022
Kai Li, Ying Li, Lei Cheng, Qingjiang Shi, Zhi-Quan Luo


Bayesian Low-rank Matrix Completion with Dual-graph Embedding: Prior Analysis and Tuning-free Inference

Mar 18, 2022
Yangge Chen, Lei Cheng, Yik-Chung Wu
