
Ting-Han Fan

Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation

Nov 15, 2023
Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky

Advancing Regular Language Reasoning in Linear Recurrent Neural Networks

Sep 14, 2023
Ting-Han Fan, Ta-Chung Chi, Alexander I. Rudnicky

Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings

May 23, 2023
Ta-Chung Chi, Ting-Han Fan, Li-Wei Chen, Alexander I. Rudnicky, Peter J. Ramadge

Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation

May 05, 2023
Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky, Peter J. Ramadge

Receptive Field Alignment Enables Transformer Length Extrapolation

Dec 20, 2022
Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky

Training Discrete Deep Generative Models via Gapped Straight-Through Estimator

Jun 15, 2022
Ting-Han Fan, Ta-Chung Chi, Alexander I. Rudnicky, Peter J. Ramadge

KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation

May 20, 2022
Ta-Chung Chi, Ting-Han Fan, Peter J. Ramadge, Alexander I. Rudnicky

Explaining Off-Policy Actor-Critic From A Bias-Variance Perspective

Oct 06, 2021
Ting-Han Fan, Peter J. Ramadge

PowerGym: A Reinforcement Learning Environment for Volt-Var Control in Power Distribution Systems

Sep 20, 2021
Ting-Han Fan, Xian Yeow Lee, Yubo Wang
