Ciprian Chelba

Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection

Aug 31, 2018
Wei Wang, Taro Watanabe, Macduff Hughes, Tetsuji Nakagawa, Ciprian Chelba

GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking

Jun 18, 2018
Patrick H. Chen, Si Si, Yang Li, Ciprian Chelba, Cho-jui Hsieh

N-gram Language Modeling using Recurrent Neural Network Estimation

Jun 20, 2017
Ciprian Chelba, Mohammad Norouzi, Samy Bengio

Multinomial Loss on Held-out Data for the Sparse Non-negative Matrix Language Model

Feb 22, 2016
Ciprian Chelba, Fernando Pereira

Skip-gram Language Modeling Using Sparse Non-negative Matrix Probability Estimation

Jun 26, 2015
Noam Shazeer, Joris Pelemans, Ciprian Chelba

One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling

Mar 04, 2014
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, Tony Robinson

Large Scale Distributed Acoustic Modeling With Back-off N-grams

Feb 05, 2013
Ciprian Chelba, Peng Xu, Fernando Pereira, Thomas Richardson

Large Scale Language Modeling in Automatic Speech Recognition

Oct 31, 2012
Ciprian Chelba, Dan Bikel, Maria Shugrina, Patrick Nguyen, Shankar Kumar

Optimal size, freshness and time-frame for voice search vocabulary

Oct 31, 2012
Maryam Kamvar, Ciprian Chelba

Richer Syntactic Dependencies for Structured Language Modeling

Oct 03, 2001
Ciprian Chelba, Peng Xu