
Peter Bell

Enhancing Human Pose Estimation in Ancient Vase Paintings via Perceptually-grounded Style Transfer Learning

Dec 10, 2020

On the Usefulness of Self-Attention for Automatic Speech Recognition with Transformers

Nov 08, 2020

Stochastic Attention Head Removal: A Simple and Effective Method for Improving Automatic Speech Recognition with Transformers

Nov 08, 2020

Leveraging speaker attribute information using multi task learning for speaker verification and diarization

Oct 27, 2020

Subtitles to Segmentation: Improving Low-Resource Speech-to-Text Translation Pipelines

Oct 19, 2020

Understanding Compositional Structures in Art Historical Images using Pose and Gaze Priors

Sep 08, 2020

Adaptation Algorithms for Speech Recognition: An Overview

Aug 14, 2020

When Can Self-Attention Be Replaced by Feed Forward Layers?

May 28, 2020

Recognizing Characters in Art History Using Deep Learning

Apr 01, 2020

DropClass and DropAdapt: Dropping classes for deep speaker representation learning

Feb 02, 2020