Kat Agres

Yong Siew Toh Conservatory of Music, National University of Singapore

Predicting emotion from music videos: exploring the relative contribution of visual and auditory information to affective responses

Feb 19, 2022
Phoebe Chua, Dimos Makris, Dorien Herremans, Gemma Roig, Kat Agres

A dataset and classification model for Malay, Hindi, Tamil and Chinese music

Sep 15, 2020
Fajilatun Nahar, Kat Agres, Balamurali BT, Dorien Herremans

Singing Voice Conversion with Disentangled Representations of Singer and Vocal Technique Using Variational Autoencoders

Jan 28, 2020
Yin-Jyun Luo, Chin-Chen Hsu, Kat Agres, Dorien Herremans

nnAudio: An on-the-fly GPU Audio to Spectrogram Conversion Toolbox Using 1D Convolution Neural Networks

Dec 31, 2019
Kin Wai Cheuk, Hans Anderson, Kat Agres, Dorien Herremans

Learning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders

Jun 29, 2019
Yin-Jyun Luo, Kat Agres, Dorien Herremans

From Context to Concept: Exploring Semantic Relationships in Music with Word2Vec

Nov 29, 2018
Ching-Hua Chuan, Kat Agres, Dorien Herremans

From Bach to the Beatles: The simulation of human tonal expectation using ecologically-trained predictive models

Jul 19, 2017
Carlos Cancino-Chacón, Maarten Grachten, Kat Agres
