Alejandro Betancourt

A framework to compare music generative models using automatic evaluation metrics extended to rhythm

Jan 19, 2021

Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music

Dec 02, 2020

Egoshots, an ego-vision life-logging dataset and semantic fidelity metric to evaluate diversity in image captioning models

Mar 27, 2020

Static force field representation of environments based on agents' nonlinear motions

Sep 09, 2019

Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

Mar 27, 2017

Left/Right Hand Segmentation in Egocentric Videos

Jul 21, 2016

The Evolution of First Person Vision Methods: A Survey

Apr 03, 2015