Alexander G. Huth

The University of Texas at Austin

Humans and language models diverge when predicting repeating text

Oct 23, 2023
Aditya R. Vaidya, Javier Turek, Alexander G. Huth

Scaling laws for language encoding models in fMRI

May 22, 2023
Richard Antonello, Aditya Vaidya, Alexander G. Huth

Brain encoding models based on multimodal transformers can transfer across language and vision

May 20, 2023
Jerry Tang, Meng Du, Vy A. Vo, Vasudev Lal, Alexander G. Huth

Explaining black box text modules in natural language with language models

May 17, 2023
Chandan Singh, Aliyah R. Hsu, Richard Antonello, Shailee Jain, Alexander G. Huth, Bin Yu, Jianfeng Gao

Self-supervised models of audio effectively explain human cortical responses to speech

May 27, 2022
Aditya R. Vaidya, Shailee Jain, Alexander G. Huth

Physically Plausible Pose Refinement using Fully Differentiable Forces

May 17, 2021
Akarsh Kumar, Aditya R. Vaidya, Alexander G. Huth

Multi-timescale representation learning in LSTM Language Models

Sep 27, 2020
Shivangi Mahto, Vy A. Vo, Javier S. Turek, Alexander G. Huth

A single-layer RNN can approximate stacked and bidirectional RNNs, and topologies in between

Aug 30, 2019
Javier S. Turek, Shailee Jain, Mihai Capota, Alexander G. Huth, Theodore L. Willke