John Shawe-Taylor

Social AI and the Challenges of the Human-AI Ecosystem

Jun 23, 2023
Dino Pedreschi, Luca Pappalardo, Ricardo Baeza-Yates, Albert-Laszlo Barabasi, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, Janos Kertesz, Alistair Knott, Yannis Ioannidis, Paul Lukowicz, Andrea Passarella, Alex Sandy Pentland, John Shawe-Taylor, Alessandro Vespignani

The rise of large-scale socio-technical systems in which humans interact with artificial intelligence (AI) systems (including assistants and recommenders, in short AIs) multiplies the opportunity for the emergence of collective phenomena and tipping points, with unexpected, possibly unintended, consequences. For example, navigation systems' suggestions may create chaos if too many drivers are directed onto the same route, and personalised recommendations on social media may amplify polarisation, filter bubbles, and radicalisation. On the other hand, we may learn how to foster the "wisdom of crowds" and collective action effects to face social and environmental challenges. In order to understand the impact of AI on socio-technical systems and design next-generation AIs that team with humans to help overcome societal problems rather than exacerbate them, we propose to build the foundations of Social AI at the intersection of Complex Systems, Network Science and AI. In this perspective paper, we discuss the main open questions in Social AI, outlining possible technical and scientific challenges and suggesting research avenues.

Exploration via Epistemic Value Estimation

Mar 07, 2023
Simon Schmitt, John Shawe-Taylor, Hado van Hasselt

How to efficiently explore in reinforcement learning is an open problem. Many exploration algorithms employ the epistemic uncertainty of their own value predictions -- for instance to compute an exploration bonus or upper confidence bound. Unfortunately the required uncertainty is difficult to estimate in general with function approximation. We propose epistemic value estimation (EVE): a recipe that is compatible with sequential decision making and with neural network function approximators. It equips agents with a tractable posterior over all their parameters from which epistemic value uncertainty can be computed efficiently. We use the recipe to derive an epistemic Q-Learning agent and observe competitive performance on a series of benchmarks. Experiments confirm that the EVE recipe facilitates efficient exploration in hard exploration tasks.
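
For a concrete picture of the general idea, here is a minimal sketch of acting optimistically under a posterior over value-function parameters, using a Bayesian linear Q-function. The class, the linear-Gaussian model and the UCB rule are illustrative assumptions, not the EVE recipe itself, which is specified in the paper.

```python
# Sketch only: a linear Q-function with a Gaussian posterior over its
# weights, used to compute an epistemic uncertainty and act optimistically.
import numpy as np

class EpistemicLinearQ:
    """Linear Q-function Q(s, a) = phi(s, a) . w with a Gaussian posterior over w."""

    def __init__(self, num_features, prior_var=1.0, noise_var=1.0):
        self.A = np.eye(num_features) / prior_var  # posterior precision
        self.b = np.zeros(num_features)
        self.noise_var = noise_var

    def update(self, phi, target):
        # Conjugate Bayesian linear-regression update towards a TD target.
        self.A += np.outer(phi, phi) / self.noise_var
        self.b += phi * target / self.noise_var

    def q_mean_and_std(self, phi):
        cov = np.linalg.inv(self.A)
        mean = phi @ cov @ self.b
        std = np.sqrt(phi @ cov @ phi)  # epistemic uncertainty of Q(s, a)
        return mean, std

def ucb_action(model, features_per_action, beta=1.0):
    # Optimism in the face of (epistemic) uncertainty.
    scores = [m + beta * s for m, s in
              (model.q_mean_and_std(phi) for phi in features_per_action)]
    return int(np.argmax(scores))
```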

Can Population-based Engagement Improve Personalisation? A Novel Dataset and Experiments

Jun 22, 2022
Sahan Bulathwela, Meghana Verma, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor

This work explores how population-based engagement prediction can address the cold-start problem at scale in large collections of learning resources. The paper introduces i) VLE, a novel dataset consisting of content- and video-based features extracted from publicly available scientific video lectures, coupled with implicit and explicit signals related to learner engagement, ii) two standard tasks for predicting and ranking context-agnostic engagement in video lectures, with preliminary baselines, and iii) a set of experiments that validate the usefulness of the proposed dataset. Our experimental results indicate that the newly proposed VLE dataset leads to context-agnostic engagement prediction models that are significantly more performant than those built on previous datasets, mainly owing to the larger number of training examples. The results also evidence the VLE dataset's suitability for building models aimed at Computer Science/Artificial Intelligence education in e-learning/MOOC use cases. Further experiments combining the built model with a personalisation algorithm show promising improvements in addressing the cold-start problem encountered in educational recommenders. To our knowledge, this is the largest and most diverse publicly available dataset for learner engagement prediction tasks. The dataset, helper tools, descriptive statistics and example code snippets are publicly available.

* To be presented at International Conference for Educational Data Mining 2022 
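
As a hypothetical starting point for the context-agnostic prediction task, a baseline pipeline on a VLE-style table might look like the sketch below. The file name, column names and choice of regressor are placeholders, not the dataset's actual schema or the paper's baselines.

```python
# Hypothetical baseline: predict an engagement score from lecture features
# and evaluate the ranking with Spearman correlation.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("vle_lectures.csv")  # placeholder path, not the real file
feature_cols = [c for c in df.columns if c not in ("lecture_id", "engagement")]
X, y = df[feature_cols], df["engagement"]  # placeholder column names

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)

corr, _ = spearmanr(y_te, model.predict(X_te))
print("Spearman rank correlation on held-out lectures:", corr)
```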

TransductGAN: a Transductive Adversarial Model for Novelty Detection

Mar 30, 2022
Najiba Toron, Janaina Mourao-Miranda, John Shawe-Taylor

Novelty detection, a widely studied problem in machine learning, is the problem of detecting a novel class of data that has not been previously observed. A common setting for novelty detection is inductive, whereby only examples of the negative class are available during training. Transductive novelty detection, on the other hand, has only recently witnessed a surge in interest: it not only makes use of the negative class during training but also incorporates the (unlabeled) test set to detect novel examples. Several studies under the transductive umbrella have demonstrated its advantage over the inductive counterpart. Depending on the assumptions about the data, these methods go by different names (e.g. transductive novelty detection, semi-supervised novelty detection, positive-unlabeled learning, out-of-distribution detection). Using generative adversarial networks (GANs), some of these studies have adopted a transductive setup in order to learn how to generate examples of the novel class. In this study, we propose TransductGAN, a transductive generative adversarial network that learns to generate image examples from both the novel and negative classes by using a mixture of two Gaussians in the latent space. It achieves this by combining an adversarial autoencoder with a GAN network. The ability to generate examples of novel data points not only offers a visual representation of novelties, but also overcomes the hurdle faced by many inductive methods of how to tune the model hyperparameters at the decision-rule level. Our model shows superior performance over state-of-the-art inductive and transductive methods. Our study is fully reproducible, with the code available publicly.
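
To illustrate only the latent-prior idea above (a two-Gaussian mixture with one component per class), here is a minimal sketch. The means, dimensions and function names are assumptions for illustration; the adversarial autoencoder and the GAN training loop are not reproduced here.

```python
# Sketch of a two-component Gaussian latent prior: one component intended
# for the negative (known) class, one for the novel class. A generator
# G(z) would map these latents to images.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 64
mu_negative = np.zeros(latent_dim)
mu_novel = np.full(latent_dim, 3.0)  # separated mean for the novel component

def sample_latent(n, novel_fraction=0.5):
    """Draw latents from the two-Gaussian mixture, with component labels."""
    is_novel = rng.random(n) < novel_fraction
    means = np.where(is_novel[:, None], mu_novel, mu_negative)
    z = means + rng.standard_normal((n, latent_dim))
    return z, is_novel

z, is_novel = sample_latent(8)
# Latents drawn from the novel component are trained to produce
# novel-class examples; the others, negative-class examples.
```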

Controlling Confusion via Generalisation Bounds

Feb 11, 2022
Reuben Adams, John Shawe-Taylor, Benjamin Guedj

We establish new generalisation bounds for multiclass classification by abstracting to a more general setting of discretised error types. Extending the PAC-Bayes theory, we are hence able to provide fine-grained bounds on performance for multiclass classification, as well as applications to other learning problems including discretisation of regression losses. Tractable training objectives are derived from the bounds. The bounds are uniform over all weightings of the discretised error types and thus can be used to bound weightings not foreseen at training, including the full confusion matrix in the multiclass classification case.

* 31 pages 
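
For orientation, the classical scalar PAC-Bayes-kl bound (due to Maurer) that results of this kind build on is shown below. The bounds in the paper generalise the two-outcome empirical and true risks to distributions over discretised error types; the exact statements are given in the paper.

```latex
% Classical scalar PAC-Bayes-kl bound, shown only for orientation.
% With probability at least 1 - \delta over an i.i.d. sample S of size m,
% simultaneously for all posteriors Q over hypotheses, with prior P:
\[
\mathrm{kl}\!\left( \hat{R}_S(Q) \,\middle\|\, R(Q) \right)
\;\le\;
\frac{\mathrm{KL}(Q \,\|\, P) + \ln \frac{2\sqrt{m}}{\delta}}{m},
\qquad
\mathrm{kl}(q \,\|\, p) = q \ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}.
\]
```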

Chaining Value Functions for Off-Policy Learning

Feb 02, 2022
Simon Schmitt, John Shawe-Taylor, Hado van Hasselt

To accumulate knowledge and improve its policy of behaviour, a reinforcement learning agent can learn 'off-policy' about policies that differ from the policy used to generate its experience. This is important for learning counterfactuals, or when the experience was generated outside of the agent's control. However, off-policy learning is non-trivial, and standard reinforcement-learning algorithms can be unstable and divergent. In this paper we discuss a novel family of off-policy prediction algorithms which are convergent by construction. The idea is to first learn on-policy about the data-generating behaviour, and then bootstrap an off-policy value estimate on this on-policy estimate, thereby constructing a value estimate that is partially off-policy. This process can be repeated to build a chain of value functions, each time bootstrapping a new estimate on the previous estimate in the chain. Each step in the chain is stable, and hence the complete algorithm is guaranteed to be stable. Under mild conditions the result comes arbitrarily close to the off-policy TD solution as the length of the chain increases; hence it can compute the solution even in cases where off-policy TD diverges. We prove that the proposed scheme is convergent and corresponds to an iterative decomposition of the inverse key matrix. Furthermore, it can be interpreted as estimating a novel objective -- which we call a 'k-step expedition' -- of following the target policy for finitely many steps before continuing indefinitely with the behaviour policy. Empirically, we evaluate the idea on challenging MDPs such as Baird's counterexample and observe favourable results.
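
The chain is easy to picture in the exact tabular case with known dynamics: start from the behaviour policy's value and repeatedly prepend one step of the target policy. The sketch below is this exact-MDP analogue for illustration only; the paper works with learned, function-approximated estimates, and the function and argument names here are assumptions.

```python
# Exact tabular analogue of the chaining idea: values[k] is the value of a
# 'k-step expedition' (k steps of the target policy, then the behaviour
# policy forever). As k grows it approaches the target policy's value.
import numpy as np

def chained_values(P_behaviour, r_behaviour, P_target, r_target, gamma, k):
    n = len(r_behaviour)
    # V_0: on-policy value of the data-generating (behaviour) policy.
    v = np.linalg.solve(np.eye(n) - gamma * P_behaviour, r_behaviour)
    values = [v]
    for _ in range(k):
        # Follow the target policy for one more step, then bootstrap on the
        # previous value in the chain.
        v = r_target + gamma * P_target @ v
        values.append(v)
    return values
```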

Watch Less and Uncover More: Could Navigation Tools Help Users Search and Explore Videos?

Jan 10, 2022
Maria Perez-Ortiz, Sahan Bulathwela, Claire Dormann, Meghana Verma, Stefan Kreitmayer, Richard Noss, John Shawe-Taylor, Yvonne Rogers, Emine Yilmaz

Prior research has shown how 'content preview tools' improve the speed and accuracy of user relevance judgements across different information retrieval tasks. This paper describes a novel user interface tool, the Content Flow Bar, designed to allow users to quickly identify relevant fragments within informational videos and so facilitate browsing, through a cognitively augmented form of navigation. It achieves this by providing semantic "snippets" that enable the user to rapidly scan through video content. The tool provides visually appealing pop-ups that appear in a time-series bar at the bottom of each video, allowing users to see in advance, and at a glance, how topics evolve in the content. We conducted a user study to evaluate how the tool changes users' search experience in video retrieval, as well as how it supports exploration and information seeking. The user questionnaire revealed that participants found the Content Flow Bar helpful and enjoyable for finding relevant information in videos. The interaction logs of the user study, where participants interacted with the tool to complete two informational tasks, show that it holds promise for enhancing the discoverability of content both across and within videos. This potential could pave the way for a new generation of navigation tools in search and information retrieval.

* Published at the ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR'22) 

Semantic TrueLearn: Using Semantic Knowledge Graphs in Recommendation Systems

Dec 08, 2021
Sahan Bulathwela, María Pérez-Ortiz, Emine Yilmaz, John Shawe-Taylor

In informational recommenders, many challenges arise from the need to handle the semantic and hierarchical structure between knowledge areas. This work aims to advance towards building a state-aware educational recommendation system that incorporates semantic relatedness between knowledge topics, propagating latent information across semantically related topics. We introduce a novel learner model that exploits this semantic relatedness between knowledge components in learning resources using the Wikipedia link graph, with the aim of better predicting learner engagement and latent knowledge in a lifelong learning scenario. In this sense, Semantic TrueLearn builds a humanly intuitive knowledge representation while leveraging Bayesian machine learning to improve the prediction of educational engagement. Our experiments with a large dataset demonstrate that this new semantic version of the TrueLearn algorithm achieves statistically significant improvements in predictive performance with a simple extension that adds semantic awareness to the model.

* Presented at the First International Workshop on Joint Use of Probabilistic Graphical Models and Ontology at Conference on Knowledge Graph and Semantic Web 2021 
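
As a toy illustration of propagating information across semantically related topics, consider the sketch below. The topics, relatedness scores and the simple additive update are made-up assumptions, not the Bayesian TrueLearn update described in the paper.

```python
# Toy propagation of a knowledge estimate to semantically related topics.
knowledge = {"machine_learning": 0.7, "statistics": 0.5, "databases": 0.2}

# Semantic relatedness between topic pairs, e.g. derived from the Wikipedia
# link graph (these values are made up for illustration).
relatedness = {
    ("machine_learning", "statistics"): 0.6,
    ("machine_learning", "databases"): 0.1,
}

def propagate(topic, evidence, alpha=0.3):
    """Update a topic from observed engagement evidence and spill a fraction
    of that update over to semantically related topics."""
    delta = alpha * (evidence - knowledge[topic])
    knowledge[topic] += delta
    for (a, b), weight in relatedness.items():
        if topic in (a, b):
            other = b if topic == a else a
            knowledge[other] += weight * delta

propagate("machine_learning", evidence=1.0)  # learner engaged with an ML video
```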

Could AI Democratise Education? Socio-Technical Imaginaries of an EdTech Revolution

Dec 03, 2021
Sahan Bulathwela, María Pérez-Ortiz, Catherine Holloway, John Shawe-Taylor

Artificial Intelligence (AI) in Education has been said to have the potential to build more personalised curricula, to democratise education worldwide and to create a renaissance of new ways of teaching and learning. Millions of students are already starting to benefit from the use of these technologies, but millions more around the world are not. If this trend continues, the first delivery of AI in Education could be greater educational inequality, along with a global misallocation of educational resources motivated by the current technological determinism narrative. In this paper, we focus on speculating and posing questions about the future of AI in Education, with the aim of starting the pressing conversation that would set the right foundations for a new generation of education permeated by technology. The paper starts by synthesising how AI might change how we learn and teach, focusing specifically on the case of personalised learning companions, and then moves on to discuss some socio-technical features that will be crucial for avoiding the perils of these AI systems worldwide (and perhaps for ensuring their success). We also discuss the potential of using AI together with free, participatory and democratic resources, such as Wikipedia, Open Educational Resources and open-source tools, and emphasise the need for collectively designing human-centered, transparent, interactive and collaborative AI-based algorithms that empower stakeholders, give them complete agency, and support new emerging pedagogies. Finally, we ask what it would take for this educational revolution to provide egalitarian and empowering access to education, beyond any political, cultural, language, geographical and learning-ability barriers.

* To be presented at Workshop on Machine Learning for the Developing World (ML4D) at the Conference on Neural Information Processing Systems 2021 