Markus Schedl

Domain Information Control at Inference Time for Acoustic Scene Classification

Jun 13, 2023
Shahed Masoudian, Khaled Koutini, Markus Schedl, Gerhard Widmer, Navid Rekabsaz

Domain shift is a well-known challenge in machine learning, as it can cause significant degradation of model performance. In the acoustic scene classification (ASC) task, domain shift is mainly caused by differences among recording devices. Several studies have already targeted domain generalization to improve the performance of ASC models on unseen domains, such as new devices. Recently, the Controllable Gate Adapter (ConGater) was proposed in natural language processing to address the problem of biased training data; its main advantage is that it enables continuous and selective debiasing of a trained model at inference time. In this work, we adapt ConGater to the audio spectrogram transformer for the ASC task. We show that ConGater can selectively adapt the learned representations to be invariant to domain shifts caused by different recording devices. Our analysis shows that ConGater can progressively remove device information from the learned representations and improve model generalization, especially under domain shift conditions (e.g., unseen devices). We further show that this information removal extends to both the device and the location domain. Finally, we demonstrate ConGater's ability to enhance performance on specific devices without further training.
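
To make the inference-time control concrete, below is a minimal PyTorch sketch of a controllable gate. The class name ConGaterLayer, the bottleneck size, the sigmoid gating, and the linear interpolation controlled by omega are illustrative assumptions, not the paper's exact architecture: at omega = 0 the layer is the identity (original model), and increasing omega gradually applies the learned gate.

```python
import torch
import torch.nn as nn

class ConGaterLayer(nn.Module):
    """Illustrative controllable gate: omega = 0 leaves the hidden
    representation untouched, omega = 1 applies the fully learned
    attribute-removal gate."""

    def __init__(self, hidden_dim: int, bottleneck: int = 64):
        super().__init__()
        self.gate_net = nn.Sequential(
            nn.Linear(hidden_dim, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, hidden_dim),
        )

    def forward(self, h: torch.Tensor, omega: float) -> torch.Tensor:
        gate = torch.sigmoid(self.gate_net(h))          # values in (0, 1)
        effective_gate = (1.0 - omega) + omega * gate   # blend with identity
        return h * effective_gate

layer = ConGaterLayer(hidden_dim=768)
h = torch.randn(4, 768)                 # e.g. transformer hidden states
for omega in (0.0, 0.5, 1.0):           # original -> partially -> fully gated
    out = layer(h, omega)               # no retraining between settings
```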


A Study on Accuracy, Miscalibration, and Popularity Bias in Recommendations

Mar 01, 2023
Dominik Kowald, Gregor Mayr, Markus Schedl, Elisabeth Lex

Recent research has suggested different metrics to measure the inconsistency of recommendation performance, including the accuracy difference between user groups, miscalibration, and popularity lift. However, a study that relates miscalibration and popularity lift to recommendation accuracy across different user groups is still missing. Additionally, it is unclear whether particular genres contribute to the emergence of inconsistent recommendation performance across user groups. In this paper, we analyze these three aspects for five well-known recommendation algorithms and for user groups that differ in their preference for popular content. Additionally, we study how different genres affect the inconsistency of recommendation performance, and how this aligns with the popularity of those genres. Using data from Last.fm, MovieLens, and MyAnimeList, we present two key findings. First, users with little interest in popular content receive the worst recommendation accuracy, and this is aligned with miscalibration and popularity lift. Second, our experiments show that particular genres contribute to a different extent to the inconsistency of recommendation performance, especially in terms of miscalibration in the case of the MyAnimeList dataset.

* Accepted at BIAS@ECIR WS 2023 
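
To illustrate the two inconsistency metrics discussed above, the following sketch computes popularity lift as the relative change in mean item popularity between a user's profile and their recommendations, and miscalibration as the KL divergence between the corresponding genre distributions. The function names and exact formulations are assumptions for illustration; the paper defines the metrics precisely.

```python
import numpy as np

def popularity_lift(profile_pop, rec_pop):
    """Relative change in mean item popularity from the user profile to
    the recommendation list; values > 0 indicate recommendations that
    are more mainstream than the user's taste."""
    gap = np.mean(rec_pop) - np.mean(profile_pop)
    return gap / np.mean(profile_pop)

def miscalibration(profile_genres, rec_genres, eps=1e-10):
    """KL divergence between the genre distribution of the profile and
    that of the recommendations; 0 means perfectly calibrated."""
    p = np.asarray(profile_genres, dtype=float) + eps
    q = np.asarray(rec_genres, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy user: a long-tail listener who receives mainstream recommendations.
print(popularity_lift([0.10, 0.20, 0.15], [0.40, 0.50, 0.45]))  # 2.0
print(miscalibration([0.7, 0.2, 0.1], [0.3, 0.4, 0.3]))         # > 0
```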

Parameter-efficient Modularised Bias Mitigation via AdapterFusion

Feb 13, 2023
Deepak Kumar, Oleg Lesota, George Zerveas, Daniel Cohen, Carsten Eickhoff, Markus Schedl, Navid Rekabsaz

Large pre-trained language models contain societal biases and carry these biases over to downstream tasks. Current in-processing bias mitigation approaches (like adversarial training) impose debiasing by updating a model's parameters, effectively transferring the model to a new, irreversibly debiased state. In this work, we propose a novel approach that develops stand-alone debiasing functionalities separate from the model, which can be integrated into the model on demand while keeping the core model untouched. Drawing on the concept of AdapterFusion in multi-task learning, we introduce DAM (Debiasing with Adapter Modules), an approach that first encapsulates arbitrary bias mitigation functionalities in separate adapters and then adds them to the model on demand to deliver fairness qualities. We conduct a large set of experiments on three classification tasks with gender, race, and age as protected attributes. Our results show that DAM improves or maintains the effectiveness of bias mitigation, avoids catastrophic forgetting in a multi-attribute scenario, and maintains task performance on par, while granting parameter efficiency and easy switching between the original and debiased models.

* Accepted at EACL 2023 
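
A minimal sketch of the on-demand composition idea, assuming standard bottleneck adapters and using simple averaging as a stand-in for the attention-based AdapterFusion composition; the class, attribute, and parameter names are illustrative, not DAM's actual implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity,
    up-project, plus a residual connection."""

    def __init__(self, dim: int, bottleneck: int = 48):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

class DebiasOnDemand(nn.Module):
    """One debiasing adapter per protected attribute; the frozen core
    model is left untouched, and adapters are applied only when their
    attribute should be debiased."""

    def __init__(self, dim: int, attributes):
        super().__init__()
        self.adapters = nn.ModuleDict(
            {a: BottleneckAdapter(dim) for a in attributes})

    def forward(self, h, active):
        if not active:                 # original, non-debiased behavior
            return h
        outs = torch.stack([self.adapters[a](h) for a in active])
        return outs.mean(dim=0)        # averaging stand-in for AdapterFusion

dam = DebiasOnDemand(dim=768, attributes=["gender", "race", "age"])
h = torch.randn(4, 768)                       # frozen-model hidden states
debiased = dam(h, active=["gender", "age"])   # switch attributes on demand
original = dam(h, active=[])                  # instantly back to the original
```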

ReuseKNN: Neighborhood Reuse for Privacy-Aware Recommendations

Jun 23, 2022
Peter Müllner, Markus Schedl, Elisabeth Lex, Dominik Kowald

User-based KNN recommender systems (UserKNN) utilize the rating data of a target user's k nearest neighbors in the recommendation process. This, however, increases the privacy risk of the neighbors, since their rating data might be exposed to other users or malicious parties. To reduce this risk, existing work applies differential privacy by adding randomness to the neighbors' ratings, which reduces the accuracy of UserKNN. In this work, we introduce ReuseKNN, a novel privacy-aware recommender system. The main idea is to identify small but highly reusable neighborhoods, so that (i) only a minimal set of users requires protection with differential privacy, and (ii) most users do not need to be protected at all, since they are only rarely exploited as neighbors. In our experiments on five diverse datasets, we make two key observations: First, ReuseKNN requires significantly smaller neighborhoods, and thus fewer neighbors need to be protected with differential privacy compared to traditional UserKNN. Second, despite the small neighborhoods, ReuseKNN outperforms UserKNN and a fully differentially private approach in terms of accuracy. Overall, ReuseKNN's recommendation process leads to significantly less privacy risk for users than in the case of UserKNN.

* 27 pages, 8 figures, 7 tables, under review 
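
The neighborhood-reuse idea can be sketched as follows, under simplifying assumptions: plain dictionaries hold similarities, item ratings, and per-user reuse counters, and Laplace noise is added only once a neighbor's exposure exceeds a threshold. The function name and the exposure_cap and dp_scale parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def reuse_knn_score(target, item, sims, ratings, reuse_count,
                    k=10, exposure_cap=50, dp_scale=1.0, rng=None):
    """Predict target's rating for item, preferring already-reused
    neighbors; only over-exposed neighbors get differentially private
    (Laplace-noised) ratings."""
    rng = rng or np.random.default_rng()
    candidates = [u for u in ratings[item] if u != target]
    # Already-reused neighbors first, then by similarity to the target.
    candidates.sort(key=lambda u: (reuse_count[u] > 0, sims[target][u]),
                    reverse=True)
    num = den = 0.0
    for u in candidates[:k]:
        r = ratings[item][u]
        if reuse_count[u] >= exposure_cap:   # heavily exposed: protect with DP
            r += rng.laplace(scale=dp_scale)
        reuse_count[u] += 1                  # track this neighbor's exposure
        num += sims[target][u] * r
        den += abs(sims[target][u])
    return num / den if den else 0.0

sims = {"t": {"a": 0.9, "b": 0.7, "c": 0.4}}
ratings = {"song": {"a": 4.0, "b": 5.0, "c": 3.0}}
reuse = {"a": 0, "b": 60, "c": 0}            # "b" is already over-exposed
print(reuse_knn_score("t", "song", sims, ratings, reuse, k=2))
```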

Unlearning Protected User Attributes in Recommendations with Adversarial Training

Jun 09, 2022
Christian Ganhör, David Penz, Navid Rekabsaz, Oleg Lesota, Markus Schedl

Collaborative filtering algorithms capture underlying consumption patterns, including those specific to particular demographics or to protected information of users, e.g., gender, race, and location. These encoded biases can influence the decisions of a recommender system (RS) towards further separation of the content provided to various demographic subgroups, and they raise privacy concerns regarding the disclosure of users' protected attributes. In this work, we investigate the possibility and challenges of removing specific protected information of users from the learned interaction representations of an RS algorithm while maintaining its effectiveness. Specifically, we incorporate adversarial training into the state-of-the-art MultVAE architecture, resulting in a novel model, the Adversarial Variational Auto-Encoder with Multinomial Likelihood (Adv-MultVAE), which aims to remove the implicit information of protected attributes while preserving recommendation performance. We conduct experiments on the MovieLens-1M and LFM-2b-DemoBias datasets and evaluate the effectiveness of the bias mitigation method based on the inability of external attackers to reveal users' gender information from the model. Compared with the baseline MultVAE, the results show that Adv-MultVAE, with marginal deterioration in performance (w.r.t. NDCG and recall), largely mitigates inherent biases in the model on both datasets.

* Accepted at SIGIR 2022 
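
The adversarial component can be sketched with a gradient reversal layer attached to the VAE latent, a standard construction for adversarial attribute removal. The head architecture and the weighting factor lam are illustrative; the full Adv-MultVAE objective additionally includes the MultVAE reconstruction and KL terms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed, scaled gradient in the
    backward pass, so the encoder is pushed to *remove* whatever the
    adversary manages to predict."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversaryHead(nn.Module):
    """Classifier that tries to predict a protected attribute (e.g.
    gender) from the VAE latent z."""

    def __init__(self, latent_dim: int, n_classes: int = 2, lam: float = 1.0):
        super().__init__()
        self.lam = lam
        self.clf = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, z):
        return self.clf(GradReverse.apply(z, self.lam))

# One illustrative step: the full loss would add the MultVAE ELBO
# (multinomial log-likelihood + KL term) to this adversarial term.
head = AdversaryHead(latent_dim=200)
z = torch.randn(8, 200, requires_grad=True)    # stand-in for encoder output
gender = torch.randint(0, 2, (8,))             # protected attribute labels
adv_loss = F.cross_entropy(head(z), gender)
adv_loss.backward()                            # gradient w.r.t. z is reversed
```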

Do Perceived Gender Biases in Retrieval Results Affect Relevance Judgements?

Mar 03, 2022
Klara Krieg, Emilia Parada-Cabaleiro, Markus Schedl, Navid Rekabsaz

This work investigates the effect of gender-stereotypical biases in the content of retrieved results on the relevance judgements of users/annotators. In particular, since relevance in information retrieval (IR) is a multi-dimensional concept, we study whether the value and quality of the retrieved documents for certain bias-sensitive queries are judged differently when the content of the documents represents different genders. To this end, we conduct a set of experiments in which the genders of the participants are known, as well as experiments in which the participants' genders are not specified. The experiments comprise retrieval tasks, in which participants give rated relevance judgements for different compilations of search queries and search result documents. The shown documents contain different gender indications and are either relevant or non-relevant to the query. The results show differences in the average judged relevance scores among documents with various gender contents. Our work initiates further research on the connection between users' perception of gender stereotypes and their relevance judgements, and on the effects on IR systems, and it aims to raise awareness of possible biases in this domain.

* Accepted at workshop on Algorithmic Bias in Search and Recommendation at ECIR 2022 

Explainability in Music Recommender Systems

Jan 25, 2022
Darius Afchar, Alessandro B. Melchiorre, Markus Schedl, Romain Hennequin, Elena V. Epure, Manuel Moussallam

The most common way to listen to recorded music nowadays is via streaming platforms, which provide access to tens of millions of tracks. To assist users in effectively browsing these large catalogs, the integration of Music Recommender Systems (MRSs) has become essential. Current real-world MRSs are often quite complex and optimized for recommendation accuracy. They combine several building blocks based on collaborative filtering and content-based recommendation. This complexity can hinder the ability to explain recommendations to end users, which is particularly important for recommendations perceived as unexpected or inappropriate. While pure recommendation performance often correlates with user satisfaction, explainability has a positive impact on other factors such as trust and forgiveness, which are ultimately essential for maintaining user loyalty. In this article, we discuss how explainability can be addressed in the context of MRSs. We provide perspectives on how explainability could improve music recommendation algorithms and enhance user experience. First, we review common dimensions and goals of recommender explainability and of eXplainable Artificial Intelligence (XAI) in general, and we elaborate on the extent to which these apply -- or need to be adapted -- to the specific characteristics of music consumption and recommendation. Then, we show how explainability components can be integrated within an MRS and in what form explanations can be provided. Since the evaluation of explanation quality is decoupled from pure accuracy-based evaluation criteria, we also discuss requirements and strategies for evaluating explanations of music recommendations. Finally, we describe the current challenges for introducing explainability within a large-scale industrial music recommender system and provide research perspectives.

* To appear in AI Magazine, Special Topic on Recommender Systems 2022 

Grep-BiasIR: A Dataset for Investigating Gender Representation-Bias in Information Retrieval Results

Jan 19, 2022
Klara Krieg, Emilia Parada-Cabaleiro, Gertraud Medicus, Oleg Lesota, Markus Schedl, Navid Rekabsaz

The results of information retrieval (IR) systems on specific queries can reflect existing societal biases and stereotypes, which are further propagated and strengthened through users' interactions with these systems. We introduce Grep-BiasIR, a novel, thoroughly audited dataset that aims to facilitate the study of gender bias in the retrieved results of IR systems. The Grep-BiasIR dataset offers 105 bias-sensitive, neutral search queries, where each query is accompanied by a set of relevant and non-relevant documents with contents indicating various genders. The dataset is available at https://github.com/KlaraKrieg/GrepBiasIR.


Analyzing Item Popularity Bias of Music Recommender Systems: Are Different Genders Equally Affected?

Aug 16, 2021
Oleg Lesota, Alessandro B. Melchiorre, Navid Rekabsaz, Stefan Brandl, Dominik Kowald, Elisabeth Lex, Markus Schedl

Several studies have identified discrepancies between the popularity of items in user profiles and in the corresponding recommendation lists. Such behavior, which concerns a variety of recommendation algorithms, is referred to as popularity bias. Existing work predominantly adopts simple statistical measures, such as the difference of mean or median popularity, to quantify popularity bias. Moreover, it does so irrespective of user characteristics other than the inclination to popular content. In contrast, we propose to investigate popularity differences (between the user profile and the recommendation list) in terms of the median, a variety of statistical moments, and similarity measures that consider the entire popularity distributions (Kullback-Leibler divergence and Kendall's tau rank-order correlation). This results in a more detailed picture of the characteristics of popularity bias. Furthermore, we investigate whether such algorithmic popularity bias affects users of different genders in the same way. We focus on music recommendation and conduct experiments on the recently released standardized LFM-2b dataset, containing listening profiles of Last.fm users. We investigate the algorithmic popularity bias of seven common recommendation algorithms (five collaborative filtering approaches and two baselines). Our experiments show that (1) the studied metrics provide novel insights into popularity bias compared with using only average differences, (2) algorithms less inclined towards popularity bias amplification do not necessarily perform worse in terms of utility (NDCG), and (3) the majority of the investigated recommenders intensify the popularity bias for female users.

* RecSys 2021 - LBR 
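
A sketch of the distribution-level comparison, assuming popularity is a per-item scalar (e.g., relative play count): both samples are binned on shared edges, KL divergence compares the binned distributions, and Kendall's tau measures their rank-order agreement across bins. The binning and pairing here are illustrative choices; the paper specifies its own procedure.

```python
import numpy as np
from scipy import stats

def popularity_bias_metrics(profile_pop, rec_pop, bins=10):
    """Compare the popularity *distributions* of a user's profile and
    recommendation list, rather than only their means or medians."""
    profile_pop = np.asarray(profile_pop, dtype=float)
    rec_pop = np.asarray(rec_pop, dtype=float)
    edges = np.histogram_bin_edges(
        np.concatenate([profile_pop, rec_pop]), bins=bins)
    p, _ = np.histogram(profile_pop, bins=edges)
    q, _ = np.histogram(rec_pop, bins=edges)
    p = (p + 1e-10) / (p + 1e-10).sum()   # smooth to avoid log(0)
    q = (q + 1e-10) / (q + 1e-10).sum()
    kl = float(np.sum(p * np.log(p / q)))   # KL(profile || recommendations)
    tau, _ = stats.kendalltau(p, q)         # rank-order agreement of the bins
    return {"kl_divergence": kl, "kendall_tau": tau}

rng = np.random.default_rng(0)
profile = rng.beta(2, 5, size=500)     # user listens mostly to the long tail
recs = rng.beta(5, 2, size=100)        # recommender pushes popular items
print(popularity_bias_metrics(profile, recs))
```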

Predicting Music Relistening Behavior Using the ACT-R Framework

Aug 05, 2021
Markus Reiter-Haas, Emilia Parada-Cabaleiro, Markus Schedl, Elham Motamedi, Marko Tkalcic, Elisabeth Lex

Providing suitable recommendations is of vital importance for improving user satisfaction with music recommender systems. In this domain, users often listen to the same track repeatedly and appreciate recommendations of the same song multiple times. Thus, accounting for users' relistening behavior is critical for music recommender systems. In this paper, we describe a psychology-informed approach to model and predict music relistening behavior that is inspired by studies in music psychology, which relate music preferences to human memory. We adopt a well-established psychological theory of human cognition that models the operations of human memory, i.e., Adaptive Control of Thought-Rational (ACT-R). In contrast to prior work, which uses only the base-level component of ACT-R, we utilize five components of ACT-R, i.e., base-level, spreading, partial matching, valuation, and noise, to investigate the effect of five factors on music relistening behavior: (i) recency and frequency of prior exposure to tracks, (ii) co-occurrence of tracks, (iii) similarity between tracks, (iv) familiarity with tracks, and (v) randomness in behavior. On a dataset of 1.7 million listening events from Last.fm, we evaluate the performance of our approach by sequentially predicting the next track(s) in user sessions. We find that the recency and frequency of prior exposure to tracks are effective predictors of relistening behavior. In addition, considering the co-occurrence of tracks and familiarity with tracks further improves performance in terms of R-precision. We hope that our work inspires future research on the merits of considering cognitive aspects of memory retrieval to model and predict complex user behavior.

* Accepted for publication in RecSys'21 late-breaking results 
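
For concreteness, the base-level component mentioned above has the standard ACT-R closed form B = ln(sum_j t_j^(-d)), where t_j is the time elapsed since the j-th listening event and d is a decay parameter (conventionally 0.5). A minimal sketch follows; the paper's full model combines this with the spreading, partial matching, valuation, and noise components.

```python
import math

def base_level_activation(exposure_times, now, decay=0.5):
    """ACT-R base-level activation: B = ln(sum_j (now - t_j)^(-d)).
    Recent and frequent listening events yield higher activation and
    thus a higher predicted chance of relistening."""
    return math.log(sum((now - t) ** -decay for t in exposure_times))

# A track heard often and recently outranks one heard once, long ago.
print(base_level_activation([1.0, 50.0, 90.0], now=100.0))  # approx. -0.58
print(base_level_activation([5.0], now=100.0))              # approx. -2.28
```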