Zhiying Jiang

A Theory of Human-Like Few-Shot Learning

Jan 03, 2023
Zhiying Jiang, Rui Wang, Dongbo Bu, Ming Li

Figures 1–4 for A Theory of Human-Like Few-Shot Learning

Less is More: Parameter-Free Text Classification with Gzip

Dec 19, 2022
Zhiying Jiang, Matthew Y. R. Yang, Mikhail Tsirlin, Raphael Tang, Jimmy Lin

Figures 1–4 for Less is More: Parameter-Free Text Classification with Gzip

What the DAAM: Interpreting Stable Diffusion Using Cross Attention

Oct 11, 2022
Raphael Tang, Akshat Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Jimmy Lin, Ferhan Ture

Figures 1–4 for What the DAAM: Interpreting Stable Diffusion Using Cross Attention

Building an Efficiency Pipeline: Commutativity and Cumulativeness of Efficiency Operators for Transformers

Jul 31, 2022
Ji Xin, Raphael Tang, Zhiying Jiang, Yaoliang Yu, Jimmy Lin

Figures 1–4 for Building an Efficiency Pipeline: Commutativity and Cumulativeness of Efficiency Operators for Transformers

Few-Shot Non-Parametric Learning with Deep Latent Variable Model

Jun 23, 2022
Zhiying Jiang, Yiqin Dai, Ji Xin, Ming Li, Jimmy Lin

Figures 1–4 for Few-Shot Non-Parametric Learning with Deep Latent Variable Model

Investigating the Limitations of Transformers with Simple Arithmetic Tasks

Mar 02, 2021
Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin

Figures 1–4 for Investigating the Limitations of Transformers with Simple Arithmetic Tasks

Investigating the Limitations of the Transformers with Simple Arithmetic Tasks

Feb 25, 2021
Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin

Figures 1–4 for Investigating the Limitations of the Transformers with Simple Arithmetic Tasks

Inserting Information Bottlenecks for Attribution in Transformers

Dec 27, 2020
Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy Lin

Figures 1–4 for Inserting Information Bottlenecks for Attribution in Transformers

Document Ranking with a Pretrained Sequence-to-Sequence Model

Mar 14, 2020
Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin

Figures 1–4 for Document Ranking with a Pretrained Sequence-to-Sequence Model

PaperRobot: Incremental Draft Generation of Scientific Ideas

May 31, 2019
Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, Yi Luan

Figure 1 for PaperRobot: Incremental Draft Generation of Scientific Ideas