James Glass

Contrastive Language Adaptation for Cross-Lingual Stance Detection

Oct 04, 2019
Mitra Mohtarami, James Glass, Preslav Nakov

DARTS: Dialectal Arabic Transcription System

Sep 26, 2019
Sameer Khurana, Ahmed Ali, James Glass

Automatic Fact-Checking Using Context and Discourse Information

Aug 04, 2019
Pepa Atanasova, Preslav Nakov, Lluís Màrquez, Alberto Barrón-Cedeño, Georgi Karadzhov, Tsvetomila Mihaylova, Mitra Mohtarami, James Glass

Transfer Learning from Audio-Visual Grounding to Speech Recognition

Jul 09, 2019
Wei-Ning Hsu, David Harwath, James Glass

Analyzing Phonetic and Graphemic Representations in End-to-End Automatic Speech Recognition

Jul 09, 2019
Yonatan Belinkov, Ahmed Ali, James Glass

Towards Transfer Learning for End-to-End Speech Synthesis from Deep Pre-Trained Language Models

Jun 17, 2019
Wei Fang, Yu-An Chung, James Glass

FAKTA: An Automatic End-to-End Fact Checking System

Jun 07, 2019
Moin Nadeem, Wei Fang, Brian Xu, Mitra Mohtarami, James Glass

Improving Neural Language Models by Segmenting, Attending, and Predicting the Future

Jun 04, 2019
Hongyin Luo, Lan Jiang, Yonatan Belinkov, James Glass

Quantifying Exposure Bias for Neural Language Generation

May 25, 2019
Tianxing He, Jingzhao Zhang, Zhiming Zhou, James Glass

Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification

May 11, 2019
Achintya Kr. Sarkar, Zheng-Hua Tan, Hao Tang, Suwon Shon, James Glass
