Hideki Nakayama

Real-time Neural-based Input Method

Oct 19, 2018
Jiali Yao, Raphael Shu, Xinjian Li, Katsutoshi Ohtsuki, Hideki Nakayama

Semantic Aware Attention Based Deep Object Co-segmentation

Oct 16, 2018
Hong Chen, Yifei Huang, Hideki Nakayama

Discrete Structural Planning for Neural Machine Translation

Aug 14, 2018
Raphael Shu, Hideki Nakayama

Deep Learning for Forecasting Stock Returns in the Cross-Section

Jun 13, 2018
Masaya Abe, Hideki Nakayama

Parameter Reference Loss for Unsupervised Domain Adaptation

Dec 05, 2017
Jiren Jin, Richard G. Calland, Takeru Miyato, Brian K. Vogel, Hideki Nakayama

Compressing Word Embeddings via Deep Compositional Code Learning

Nov 17, 2017
Raphael Shu, Hideki Nakayama

Zero-resource Machine Translation by Multimodal Encoder-decoder Network with Multimedia Pivot

Jul 23, 2017
Hideki Nakayama, Noriki Nishida

Single-Queue Decoding for Neural Machine Translation

Jul 08, 2017
Raphael Shu, Hideki Nakayama

Later-stage Minimum Bayes-Risk Decoding for Neural Machine Translation

Jun 08, 2017
Raphael Shu, Hideki Nakayama

An Empirical Study of Adequate Vision Span for Attention-Based Neural Machine Translation

Jun 08, 2017
Raphael Shu, Hideki Nakayama