
"Topic": models, code, and papers

Action Languages Based Actual Causality in Ethical Decision Making Contexts

May 05, 2022
Camilo Sarmiento, Gauvain Bourgne, Daniele Cavalli, Katsumi Inoue, Jean-Gabriel Ganascia

Moral responsibility is closely intertwined with causality, even if it cannot be reduced to it. Moreover, rationally understanding the evolution of the physical world is inherently linked to the idea of causality. It follows that decision-making applications based on automated planning, especially those that integrate references to ethical norms, inevitably have to deal with causality. Despite these considerations, much of the work in computational ethics relegates causality to the background, if it does not ignore it completely. This paper's contribution is twofold. The first is to connect two research topics, automated planning and causality, by proposing a definition of actual causation suitable for action languages. This definition is a formalisation of Wright's NESS test of causation. The second is to connect computational ethics and causality by showing the importance of causality in the simulation of ethical reasoning, and by enabling the domain to handle situations that were previously out of reach, thanks to the proposed definition of actual causation.
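
The NESS test mentioned in the abstract can be illustrated outside any action language: an event is an actual cause of an effect if it is a Necessary Element of some Sufficient Set of conditions that actually occurred. The following sketch is a hypothetical, set-based rendering (not the paper's action-language formalisation), where `sufficient_for_effect` is an assumed predicate supplied by the caller:

```python
from itertools import combinations

def is_ness_cause(candidate, actual_conditions, sufficient_for_effect):
    """Wright's NESS test: `candidate` is an actual cause of the effect
    if it is a Necessary Element of some Sufficient Set of conditions
    that actually occurred."""
    others = [c for c in actual_conditions if c != candidate]
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            s = set(subset) | {candidate}
            # Sufficient with the candidate, insufficient without it.
            if sufficient_for_effect(s) and not sufficient_for_effect(s - {candidate}):
                return True
    return False

# Toy overdetermination case: two fires, either of which alone burns the house.
sufficient = lambda s: "fire1" in s or "fire2" in s
print(is_ness_cause("fire1", {"fire1", "fire2"}, sufficient))  # True
```

Note how this handles overdetermination: a simple but-for test would deny that `fire1` is a cause (removing it still leaves `fire2` sufficient), whereas NESS counts it because `{fire1}` alone is a sufficient set in which `fire1` is necessary.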

* 19 pages, 5 figures 

  Access Paper or Ask Questions

Leveraging Unlabeled Data for Sketch-based Understanding

Apr 26, 2022
Javier Morales, Nils Murrugarra-Llerena, Jose M. Saavedra

Sketch-based understanding is a critical component of human cognitive learning and a primitive means of communication between humans. The topic has recently attracted the interest of the computer vision community, as sketching is a powerful tool for expressing static objects and dynamic scenes. Unfortunately, despite its broad application domains, current sketch-based models rely strongly on labels for supervised training, ignoring knowledge from unlabeled data and thus limiting their generalization and applicability. We therefore present a study on the use of unlabeled data to improve a sketch-based model. To this end, we evaluate variations of the VAE and semi-supervised VAE, and present an extension of BYOL to deal with sketches. Our results show the superiority of sketch-BYOL, which outperforms other self-supervised approaches, increasing retrieval performance for both known and unknown categories. Furthermore, we show how other tasks can benefit from our proposal.
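
For orientation, BYOL (which the authors extend to sketches) trains an online network to predict a target network's projection of another augmented view, minimising a normalised regression loss. A minimal NumPy sketch of that loss, not the paper's sketch-BYOL implementation, is:

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    """BYOL regression loss: squared error between L2-normalised online
    predictions and target projections, i.e. 2 - 2*cosine_similarity."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    return float(np.mean(np.sum((p - z) ** 2, axis=1)))
```

The loss is 0 when the two representations point in the same direction and 4 when they are opposite; in full BYOL it is symmetrised over the two augmented views and gradients flow only through the online branch.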

* SketchDL at CVPR 2022 

  Access Paper or Ask Questions

Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition

Mar 09, 2022
Yi Chang, Sofiane Laridi, Zhao Ren, Gregory Palmer, Björn W. Schuller, Marco Fisichella

Due to advances in machine learning and speech processing, speech emotion recognition has been a popular research topic in recent years. However, speech data cannot be protected when it is uploaded and processed on servers in internet-of-things applications of speech emotion recognition. Furthermore, deep neural networks have proven vulnerable to human-indistinguishable adversarial perturbations, and attacks generated from such perturbations may cause the networks to wrongly predict emotional states. We propose a novel federated adversarial learning framework that protects both the data and the deep neural networks. The framework consists of (i) federated learning for data privacy, and (ii) adversarial training at the training stage and randomisation at the testing stage for model robustness. Experiments show that the proposed framework effectively protects speech data locally and improves model robustness against a series of adversarial attacks.
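
The two robustness ingredients named in the abstract, adversarial training and test-time randomisation, can be sketched generically. The FGSM-style perturbation and noise-averaging below are standard illustrations under assumed shapes, not the paper's specific framework:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.01):
    """Fast Gradient Sign Method: move the input in the direction that
    increases the loss, bounded in L-infinity norm by epsilon. During
    adversarial training, the model is also fit on these perturbed inputs."""
    return x + epsilon * np.sign(grad)

def randomised_predict(model, x, sigma=0.05, n=8, rng=None):
    """Test-time randomisation: average the model's outputs over several
    Gaussian-noised copies of the input to blunt small adversarial shifts."""
    if rng is None:
        rng = np.random.default_rng(0)
    preds = [model(x + rng.normal(0.0, sigma, size=x.shape)) for _ in range(n)]
    return np.mean(preds, axis=0)
```

Here `model` is any callable mapping a feature vector to scores; in the federated setting, the adversarial training step would run locally on each client before model updates are aggregated.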

* 11 pages, 6 figures, 3 tables 

  Access Paper or Ask Questions

Towards Rich, Portable, and Large-Scale Pedestrian Data Collection

Mar 03, 2022
Allan Wang, Abhijat Biswas, Henny Admoni, Aaron Steinfeld

Recently, pedestrian behavior research has shifted towards machine learning based methods and converged on the topic of modeling pedestrian interactions. This requires a large-scale dataset containing rich information. We propose a portable data collection system that facilitates accessible large-scale data collection in diverse environments. We couple the system with a semi-autonomous labeling pipeline for fast production of trajectory labels. We demonstrate the effectiveness of our system by introducing a dataset we have collected, the TBD pedestrian dataset. Compared with existing pedestrian datasets, ours contains three distinguishing components: human-verified labels grounded in the metric space, a combination of top-down and perspective views, and naturalistic human behavior in the presence of a socially appropriate "robot". In addition, the TBD pedestrian dataset is larger than similar existing datasets and contains unique pedestrian behavior.

* This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible 

  Access Paper or Ask Questions

Fighting Money Laundering with Statistics and Machine Learning: An Introduction and Review

Jan 13, 2022
Rasmus Jensen, Alexandros Iosifidis

Money laundering is a profound global problem. Nonetheless, there is little statistical and machine learning research on the topic. In this paper, we focus on anti-money laundering in banks. To help organize existing research in the field, we propose a unifying terminology and provide a review of the literature, structured around two central tasks: (i) client risk profiling and (ii) suspicious behavior flagging. We find that client risk profiling is characterized by diagnostics, i.e., efforts to find and explain risk factors. Suspicious behavior flagging, by contrast, is characterized by non-disclosed features and hand-crafted risk indices. Finally, we discuss directions for future research. One major challenge is the lack of public datasets; this may potentially be addressed by synthetic data generation. Other possible research directions include semi-supervised and deep learning, and the interpretability and fairness of results.


  Access Paper or Ask Questions

Zero-Shot and Few-Shot Classification of Biomedical Articles in Context of the COVID-19 Pandemic

Jan 11, 2022
Simon Lupart, Benoit Favre, Vassilina Nikoulina, Salah Ait-Mokhtar

MeSH (Medical Subject Headings) is a large thesaurus created by the National Library of Medicine and used for fine-grained indexing of publications in the biomedical domain. In the context of the COVID-19 pandemic, MeSH descriptors have emerged in relation to articles published on the corresponding topic. Zero-shot classification is an adequate response for timely labeling of the stream of papers with MeSH categories. In this work, we hypothesise that the rich semantic information available in MeSH has the potential to improve BioBERT representations and make them more suitable for zero-shot and few-shot tasks. We frame the problem as determining whether MeSH term definitions, concatenated with paper abstracts, are valid instances or not, and leverage multi-task learning to induce the MeSH hierarchy into the representations via a seq2seq task. Results establish a baseline on the MedLine and LitCovid datasets, and probing shows that the resulting representations convey the hierarchical relations present in MeSH.
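
The pair-classification framing can be made concrete with a small sketch. The `[SEP]` concatenation and label convention below are assumptions for illustration (the paper's exact input format is not given here):

```python
def build_pairs(abstract, attached_terms, candidate_defs):
    """Frame zero-shot MeSH tagging as binary pair classification: each
    (definition + abstract) concatenation is a valid instance (label 1)
    if the descriptor is attached to the paper, and invalid (0) otherwise."""
    pairs = []
    for term, definition in candidate_defs.items():
        text = f"{definition} [SEP] {abstract}"
        label = 1 if term in attached_terms else 0
        pairs.append((text, label))
    return pairs

pairs = build_pairs(
    "We analyse respiratory outcomes in hospitalised patients.",
    attached_terms={"COVID-19"},
    candidate_defs={"COVID-19": "A viral respiratory disease.",
                    "Neoplasms": "New abnormal growth of tissue."},
)
```

Because the classifier only sees text, any descriptor with a definition can be scored at inference time, including ones never seen during training, which is what makes the setup zero-shot.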

* to be published at the AAAI-22 Workshop on Scientific Document Understanding 

  Access Paper or Ask Questions

ContraQA: Question Answering under Contradicting Contexts

Nov 04, 2021
Liangming Pan, Wenhu Chen, Min-Yen Kan, William Yang Wang

With the rise of false, inaccurate, and misleading information in propaganda, news, and social media, real-world Question Answering (QA) systems face the challenge of synthesizing and reasoning over contradicting information to derive correct answers. This urgency gives rise to the need to make QA systems robust to misinformation, a previously unexplored topic. We study the risk misinformation poses to QA models by investigating their behavior under contradicting contexts that mix real and fake information. We create the first large-scale dataset for this problem, ContraQA, which contains over 10K human-written and model-generated contradicting pairs of contexts. Experiments show that QA models are vulnerable to contradicting contexts brought on by misinformation. To defend against such threats, we build a misinformation-aware QA system as a countermeasure that jointly integrates question answering and misinformation detection.

* Technical report 

  Access Paper or Ask Questions

LaughNet: synthesizing laughter utterances from waveform silhouettes and a single laughter example

Oct 11, 2021
Hieu-Thi Luong, Junichi Yamagishi

Emotional and controllable speech synthesis is a topic that has received much attention. However, most studies have focused on improving expressiveness and controllability in the context of linguistic content, even though natural verbal human communication is inseparable from spontaneous non-speech expressions such as laughter, crying, or grunting. We propose LaughNet, a model for synthesizing laughter using waveform silhouettes as inputs. The motivation is not simply to synthesize new laughter utterances but to test a novel synthesis-control paradigm that uses an abstract representation of the waveform. We conducted basic listening tests, and the results show that LaughNet can synthesize laughter utterances of moderate quality while retaining the characteristics of the training example. More importantly, the generated waveforms have shapes similar to the input silhouettes. In future work, we will test the same method on other types of nonverbal human expressions and integrate it into more elaborate synthesis systems.
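
A waveform silhouette is an abstract shape of the signal rather than the signal itself. One plausible reading (the abstract does not specify the extraction procedure, so the frame size and max-pooling below are assumptions) is a coarse amplitude envelope:

```python
import numpy as np

def waveform_silhouette(waveform, hop=256):
    """Reduce a waveform to a coarse amplitude silhouette: the per-frame
    maximum of |x|, discarding fine spectral detail while keeping the
    overall temporal shape the model is conditioned on."""
    n_frames = len(waveform) // hop
    frames = np.abs(waveform[: n_frames * hop]).reshape(n_frames, hop)
    return frames.max(axis=1)
```

Conditioning synthesis on such an envelope is what lets the generated laughter match the input silhouette's shape without copying the training waveform sample-by-sample.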

* Submitted to ICASSP 2022 

  Access Paper or Ask Questions

CAPE: Context-Aware Private Embeddings for Private Language Learning

Aug 27, 2021
Richard Plant, Dimitra Gkatzia, Valerio Giuffrida

Deep learning-based language models have achieved state-of-the-art results in a number of applications, including sentiment analysis, topic labelling, and intent classification. Obtaining text representations or embeddings from these models risks encoding personally identifiable information learned from language and context cues, which may threaten reputation or privacy. To ameliorate these issues, we propose Context-Aware Private Embeddings (CAPE), a novel approach that preserves privacy during the training of embeddings. To maintain the privacy of text representations, CAPE applies calibrated noise through differential privacy, preserving the encoded semantic links while obscuring sensitive information. In addition, CAPE employs an adversarial training regime that obscures identified private variables. Experimental results demonstrate that the proposed approach reduces private information leakage better than either intervention alone.
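
"Calibrated noise through differential privacy" typically means bounding a vector's sensitivity and adding noise scaled to that bound. The generic sketch below (clipping plus Laplace noise) illustrates the idea; CAPE's actual clipping and noise mechanism may differ:

```python
import numpy as np

def privatise_embedding(embedding, epsilon=1.0, sensitivity=1.0, rng=None):
    """Add calibrated Laplace noise to an embedding vector. Clipping the
    L1 norm to `sensitivity` makes the stated bound hold, and noise of
    scale sensitivity/epsilon then gives epsilon-differential privacy."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Clip so the L1 sensitivity bound actually holds for this vector.
    norm = np.sum(np.abs(embedding))
    if norm > sensitivity:
        embedding = embedding * (sensitivity / norm)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                        size=embedding.shape)
    return embedding + noise
```

Smaller `epsilon` means stronger privacy but noisier embeddings; the adversarial training component described in the abstract is a separate, complementary intervention.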

* Accepted into EMNLP21 main conference 

  Access Paper or Ask Questions
