Danushka Bollegala
Unmasking the Mask -- Evaluating Social Biases in Masked Language Models

Apr 15, 2021
Masahiro Kaneko, Danushka Bollegala

I Wish I Would Have Loved This One, But I Didn't -- A Multilingual Dataset for Counterfactual Detection in Product Reviews

Apr 14, 2021
James O'Neill, Polina Rozenshtein, Ryuichi Kiryo, Motoko Kubota, Danushka Bollegala

Semantically-Conditioned Negative Samples for Efficient Contrastive Learning

Feb 12, 2021
James O'Neill, Danushka Bollegala

RelWalk A Latent Variable Model Approach to Knowledge Graph Embedding

Jan 25, 2021
Danushka Bollegala, Huda Hakami, Yuichi Yoshida, Ken-ichi Kawarabayashi

Dictionary-based Debiasing of Pre-trained Word Embeddings

Jan 23, 2021
Masahiro Kaneko, Danushka Bollegala

Debiasing Pre-trained Contextualised Embeddings

Jan 23, 2021
Masahiro Kaneko, Danushka Bollegala

$k$-Neighbor Based Curriculum Sampling for Sequence Prediction

Jan 22, 2021
James O'Neill, Danushka Bollegala

Autoencoding Improves Pre-trained Word Embeddings

Oct 27, 2020
Masahiro Kaneko, Danushka Bollegala

Spatio-temporal Attention Model for Tactile Texture Recognition

Aug 10, 2020
Guanqun Cao, Yi Zhou, Danushka Bollegala, Shan Luo

Do not let the history haunt you -- Mitigating Compounding Errors in Conversational Question Answering

May 12, 2020
Angrosh Mandya, James O'Neill, Danushka Bollegala, Frans Coenen
