Shanshan Zhong

Mirror Gradient: Towards Robust Multimodal Recommender Systems via Exploring Flat Local Minima

Feb 17, 2024
Shanshan Zhong, Zhongzhan Huang, Daifeng Li, Wushao Wen, Jinghui Qin, Liang Lin

Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation

Dec 06, 2023
Shanshan Zhong, Zhongzhan Huang, Shanghua Gao, Wushao Wen, Liang Lin, Marinka Zitnik, Pan Zhou

Understanding Self-attention Mechanism via Dynamical System Perspective

Aug 19, 2023
Zhongzhan Huang, Mingfu Liang, Jinghui Qin, Shanshan Zhong, Liang Lin

SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models

May 12, 2023
Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin

LSAS: Lightweight Sub-attention Strategy for Alleviating Attention Bias Problem

May 09, 2023
Shanshan Zhong, Wushao Wen, Jinghui Qin, Qiangpu Chen, Zhongzhan Huang

ASR: Attention-alike Structural Re-parameterization

Apr 13, 2023
Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin

Deepening Neural Networks Implicitly and Locally via Recurrent Attention Strategy

Oct 27, 2022
Shanshan Zhong, Wushao Wen, Jinghui Qin, Zhongzhan Huang

Causal Inference for Chatting Handoff

Oct 06, 2022
Shanshan Zhong, Jinghui Qin, Zhongzhan Huang, Daifeng Li

Switchable Self-attention Module

Sep 13, 2022
Shanshan Zhong, Wushao Wen, Jinghui Qin

Mix-Pooling Strategy for Attention Mechanism

Aug 22, 2022
Shanshan Zhong, Wushao Wen, Jinghui Qin
