
Nicolas Langer


An Interpretable and Attention-based Method for Gaze Estimation Using Electroencephalography

Aug 09, 2023
Nina Weng, Martyna Plomecka, Manuel Kaufmann, Ard Kastrati, Roger Wattenhofer, Nicolas Langer


Eye movements can reveal valuable insights into various aspects of human mental processes, physical well-being, and actions. Recently, several datasets have been made available that simultaneously record EEG activity and eye movements. This has triggered the development of various methods to predict gaze direction based on brain activity. However, most of these methods lack interpretability, which limits their technology acceptance. In this paper, we leverage a large dataset of simultaneously measured electroencephalography (EEG) and eye tracking to propose an interpretable model for gaze estimation from EEG data. More specifically, we present a novel attention-based deep learning framework for EEG signal analysis, which allows the network to focus on the most relevant information in the signal and discard problematic channels. Additionally, we provide a comprehensive evaluation of the presented framework, demonstrating its superiority over current methods in terms of accuracy and robustness. Finally, we present visualizations that explain the results of the analysis and highlight the potential of attention mechanisms for improving the efficiency and effectiveness of EEG data analysis in a variety of applications.
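A minimal sketch of the general idea, channel-wise attention over EEG for gaze regression, is shown below. It is not the authors' architecture; the layer sizes, the 129-channel/500-sample input shape, and the name ChannelAttentionGaze are illustrative assumptions.

```python
# Sketch: attention over EEG channels for gaze regression (illustrative, not the paper's model).
import torch
import torch.nn as nn

class ChannelAttentionGaze(nn.Module):
    def __init__(self, n_channels=129, n_samples=500, hidden=64):
        super().__init__()
        self.temporal = nn.Conv1d(n_channels, n_channels, kernel_size=15,
                                  padding=7, groups=n_channels)   # per-channel temporal filter
        self.score = nn.Linear(n_samples, 1)                       # one attention score per channel
        self.head = nn.Sequential(nn.Linear(n_channels * n_samples, hidden),
                                  nn.ReLU(),
                                  nn.Linear(hidden, 2))            # (x, y) gaze position

    def forward(self, x):                      # x: (batch, channels, samples)
        feats = self.temporal(x)               # (batch, channels, samples)
        attn = torch.softmax(self.score(feats).squeeze(-1), dim=1)   # (batch, channels)
        weighted = feats * attn.unsqueeze(-1)  # problematic channels receive small weights
        return self.head(weighted.flatten(1)), attn

model = ChannelAttentionGaze()
eeg = torch.randn(8, 129, 500)                 # 8 trials, 129 electrodes, 1 s at 500 Hz
gaze_xy, channel_weights = model(eeg)          # channel_weights can be visualized for interpretability
```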


Electrode Clustering and Bandpass Analysis of EEG Data for Gaze Estimation

Feb 19, 2023
Ard Kastrati, Martyna Beata Plomecka, Joël Küchler, Nicolas Langer, Roger Wattenhofer


In this study, we validate the findings of previously published papers showing the feasibility of Electroencephalography (EEG)-based gaze estimation. Moreover, we extend previous research by demonstrating that, with only a slight drop in model performance, we can significantly reduce the number of electrodes, indicating that a high-density, expensive EEG cap is not necessary for EEG-based eye tracking. Using data-driven approaches, we establish which electrode clusters impact gaze estimation and how different types of EEG preprocessing affect the models' performance. Finally, we also inspect which recorded frequencies are most important for the defined tasks.

* Gaze Meets Machine Learning Workshop (GMML@NeurIPS), New Orleans, Louisiana, USA, December 2022 
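A minimal sketch of the kind of bandpass analysis described above, assuming canonical band edges and a 500 Hz sampling rate rather than the paper's exact settings:

```python
# Sketch: split an EEG recording into canonical frequency bands before modeling.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 500.0                                     # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}

def split_into_bands(eeg, fs=FS):
    """eeg: array of shape (channels, samples); returns dict band -> filtered signal."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, eeg, axis=-1)   # zero-phase filtering
    return out

eeg = np.random.randn(129, 5 * int(FS))        # 5 s of synthetic 129-channel data
bands = split_into_bands(eeg)
print({k: v.shape for k, v in bands.items()})
```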

Detection of ADHD based on Eye Movements during Natural Viewing

Jul 14, 2022
Shuwen Deng, Paul Prasse, David R. Reich, Sabine Dziemian, Maja Stegenwallner-Schütz, Daniel Krakowczyk, Silvia Makowski, Nicolas Langer, Tobias Scheffer, Lena A. Jäger


Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder that is highly prevalent and requires clinical specialists to diagnose. It is known that an individual's viewing behavior, reflected in their eye movements, is directly related to attentional mechanisms and higher-order cognitive processes. We therefore explore whether ADHD can be detected based on recorded eye movements together with information about the video stimulus in a free-viewing task. To this end, we develop an end-to-end deep learning-based sequence model which we pre-train on a related task for which more data are available. We find that the method is in fact able to detect ADHD and outperforms relevant baselines. We investigate the relevance of the input features in an ablation study. Interestingly, we find that the model's performance is closely related to the content of the video, which provides insights for future experimental designs.

* Pre-print for Proceedings of the European Conference on Machine Learning, 2022 
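A minimal sketch of an end-to-end sequence classifier over eye movements, in the spirit of the approach described above. The input features (gaze coordinates and pupil size per time step), layer sizes, and the pre-training setup are illustrative assumptions, not the authors' exact model.

```python
# Sketch: recurrent classifier over an eye-movement sequence (illustrative).
import torch
import torch.nn as nn

class GazeSequenceClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)           # ADHD vs. control logit

    def forward(self, x):                              # x: (batch, time, features)
        _, h = self.encoder(x)                         # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)            # concatenate both directions
        return self.head(h).squeeze(-1)

# Transfer-learning pattern: pre-train the encoder on a related, data-rich task,
# then swap in a fresh head and fine-tune on the smaller ADHD dataset.
model = GazeSequenceClassifier()
logits = model(torch.randn(4, 1200, 3))                # 4 recordings, 1200 time steps each
```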

A Deep Learning Approach for the Segmentation of Electroencephalography Data in Eye Tracking Applications

Jun 17, 2022
Lukas Wolf, Ard Kastrati, Martyna Beata Płomecka, Jie-Ming Li, Dustin Klebe, Alexander Veicht, Roger Wattenhofer, Nicolas Langer


The collection of eye gaze information provides a window into many critical aspects of human cognition, health and behaviour. Additionally, many neuroscientific studies complement the behavioural information gained from eye tracking with the high temporal resolution and neurophysiological markers provided by electroencephalography (EEG). One of the essential eye-tracking software processing steps is the segmentation of the continuous data stream into events relevant to eye-tracking applications, such as saccades, fixations, and blinks. Here, we introduce DETRtime, a novel framework for time-series segmentation that creates ocular event detectors that rely solely on EEG data and do not require an additionally recorded eye-tracking modality. Our end-to-end deep learning-based framework brings recent advances in Computer Vision to the forefront of time-series segmentation of EEG data. DETRtime achieves state-of-the-art performance in ocular event detection across diverse eye-tracking experiment paradigms. In addition, we provide evidence that our model generalizes well to the task of EEG sleep stage segmentation.

* 21 pages, Published at the Proceedings of the 39th International Conference on Machine Learning (ICML) 2022 
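The simplest way to frame the underlying task is per-sample classification: label every EEG time step as fixation, saccade, or blink. The sketch below only illustrates that framing; DETRtime itself uses a more involved DETR-style architecture, and all shapes here are assumptions.

```python
# Sketch: per-time-step ocular event segmentation baseline (illustrative, not DETRtime).
import torch
import torch.nn as nn

EVENTS = ["fixation", "saccade", "blink"]

class PerSampleSegmenter(nn.Module):
    def __init__(self, n_channels=129, hidden=64, n_classes=len(EVENTS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, n_classes, kernel_size=1),
        )

    def forward(self, eeg):                  # eeg: (batch, channels, time)
        return self.net(eeg)                 # (batch, n_classes, time) logits per time step

segmenter = PerSampleSegmenter()
logits = segmenter(torch.randn(2, 129, 500))
labels = logits.argmax(dim=1)                # predicted event class for every sample
```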

Reading Task Classification Using EEG and Eye-Tracking Data

Dec 12, 2021
Nora Hollenstein, Marius Tröndle, Martyna Plomecka, Samuel Kiegeland, Yilmazcan Özyurt, Lena A. Jäger, Nicolas Langer


The Zurich Cognitive Language Processing Corpus (ZuCo) provides eye-tracking and EEG signals from two reading paradigms, normal reading and task-specific reading. We analyze whether machine learning methods are able to classify these two tasks using eye-tracking and EEG features. We implement models with aggregated sentence-level features as well as fine-grained word-level features. We test the models in within-subject and cross-subject evaluation scenarios. All models are tested on the ZuCo 1.0 and ZuCo 2.0 data subsets, which are characterized by differing recording procedures and thus allow for different levels of generalizability. Finally, we provide a series of control experiments to analyze the results in more detail.
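A minimal sketch of the evaluation setup described above, comparing a within-subject split against a cross-subject split for classifying the two reading paradigms. The synthetic features and the logistic-regression classifier are placeholders, not the ZuCo feature set or the paper's models.

```python
# Sketch: within-subject vs. cross-subject evaluation of a reading-task classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))          # 600 sentences x 20 aggregated sentence-level features
y = rng.integers(0, 2, size=600)        # 0 = normal reading, 1 = task-specific reading
subjects = rng.integers(0, 18, size=600)

clf = LogisticRegression(max_iter=1000)

# Within-subject: sentences from the same subject may appear in both train and test folds.
within = cross_val_score(clf, X, y, cv=5).mean()

# Cross-subject: all sentences of a held-out subject go to the test fold.
cross = cross_val_score(clf, X, y, groups=subjects,
                        cv=GroupShuffleSplit(n_splits=5, test_size=0.2, random_state=0)).mean()
print(f"within-subject acc: {within:.2f}, cross-subject acc: {cross:.2f}")
```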


EEGEyeNet: a Simultaneous Electroencephalography and Eye-tracking Dataset and Benchmark for Eye Movement Prediction

Nov 10, 2021
Ard Kastrati, Martyna Beata Płomecka, Damián Pascual, Lukas Wolf, Victor Gillioz, Roger Wattenhofer, Nicolas Langer


We present a new dataset and benchmark with the goal of advancing research at the intersection of brain activity and eye movements. Our dataset, EEGEyeNet, consists of simultaneous Electroencephalography (EEG) and Eye-tracking (ET) recordings from 356 different subjects, collected across three different experimental paradigms. Using this dataset, we also propose a benchmark to evaluate gaze prediction from EEG measurements. The benchmark consists of three tasks with an increasing level of difficulty: left-right, angle-amplitude and absolute position. We run extensive experiments on this benchmark in order to provide solid baselines, both based on classical machine learning models and on large neural networks. We release our complete code and data and provide a simple and easy-to-use interface to evaluate new methods.

* Published at NeurIPS 2021 Datasets and Benchmarks Track 
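A minimal sketch of a classical-ML baseline for the easiest benchmark task, left-right classification. The data here is mocked; the actual benchmark ships its own loaders, and the feature shape and random-forest choice are assumptions.

```python
# Sketch: classical baseline for the left-right task (illustrative, mocked data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 129 * 4))       # per-channel summary features per trial (assumed)
y = rng.integers(0, 2, size=5000)          # 0 = leftward saccade, 1 = rightward saccade

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("left-right accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```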

Decoding EEG Brain Activity for Multi-Modal Natural Language Processing

Feb 17, 2021
Nora Hollenstein, Cedric Renggli, Benjamin Glaus, Maria Barrett, Marius Troendle, Nicolas Langer, Ce Zhang


Until recently, human behavioral data from reading has mainly been of interest to researchers seeking to understand human cognition. However, these human language processing signals can also be beneficial in machine learning-based natural language processing tasks. Using EEG brain activity for this purpose remains largely unexplored. In this paper, we present the first large-scale study to systematically analyze the potential of EEG brain activity data for improving natural language processing tasks, with a special focus on which features of the signal are most beneficial. We present a multi-modal machine learning architecture that learns jointly from textual input as well as from EEG features. We find that filtering the EEG signals into frequency bands is more beneficial than using the broadband signal. Moreover, for a range of word embedding types, EEG data improves binary and ternary sentiment classification and outperforms multiple baselines. For more complex tasks such as relation detection, further research is needed. Finally, EEG data proves particularly promising when limited training data is available.
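A minimal sketch of the multi-modal idea: concatenate each word's embedding with its frequency-band EEG features and classify sentiment from the combined representation. The dimensions, the mean-pooling classifier, and the class count are illustrative assumptions.

```python
# Sketch: joint text + EEG-band features for sentiment classification (illustrative).
import torch
import torch.nn as nn

EMB_DIM, EEG_BANDS = 300, 8            # e.g. 300-d word embeddings plus 8 band-power features per word

class MultiModalSentiment(nn.Module):
    def __init__(self, n_classes=3):   # ternary sentiment
        super().__init__()
        self.proj = nn.Linear(EMB_DIM + EEG_BANDS, 128)
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, word_emb, eeg_feats):        # (batch, words, EMB_DIM), (batch, words, EEG_BANDS)
        x = torch.relu(self.proj(torch.cat([word_emb, eeg_feats], dim=-1)))
        return self.classifier(x.mean(dim=1))      # mean-pool over words, then classify

model = MultiModalSentiment()
logits = model(torch.randn(4, 20, EMB_DIM), torch.randn(4, 20, EEG_BANDS))
```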


ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation

Dec 02, 2019
Nora Hollenstein, Marius Troendle, Ce Zhang, Nicolas Langer


We recorded and preprocessed ZuCo 2.0, a new dataset of simultaneous eye-tracking and electroencephalography during natural reading and during annotation. This corpus contains gaze and brain activity data for 739 sentences: 349 in a normal reading paradigm and 390 in a task-specific paradigm, in which the 18 participants actively search for a semantic relation type in the given sentences as a linguistic annotation task. This new dataset complements ZuCo 1.0 by providing experiments designed to analyze the differences in cognitive processing between natural reading and annotation. The data is freely available at https://osf.io/2urht/.


CogniVal: A Framework for Cognitive Word Embedding Evaluation

Oct 29, 2019
Nora Hollenstein, Antonio de la Torre, Nicolas Langer, Ce Zhang


An interesting way to evaluate word representations is by how well they reflect the semantic representations in the human brain. However, most, if not all, previous works only focus on small datasets and a single modality. In this paper, we present the first multi-modal framework for evaluating English word representations based on cognitive lexical semantics. Six types of word embeddings are evaluated by fitting them to 15 datasets of eye-tracking, EEG and fMRI signals recorded during language processing. To achieve a global score over all evaluation hypotheses, we apply statistical significance testing that accounts for the multiple comparisons problem. The framework is easily extensible and is made available so that other intrinsic and extrinsic evaluation methods can be included. We find strong correlations between the results on different cognitive datasets, across recording modalities, and with performance on extrinsic NLP tasks.

* accepted at CoNLL 2019 
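A minimal sketch of the evaluation recipe: fit word embeddings to a cognitive signal with regression, compare against a shuffled baseline, and correct the resulting p-values for multiple comparisons. The synthetic data, ridge regression, and Bonferroni correction are illustrative assumptions rather than CogniVal's exact procedure.

```python
# Sketch: fit embeddings to cognitive signals and correct for multiple comparisons (illustrative).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from scipy import stats

rng = np.random.default_rng(0)
n_words, emb_dim, n_hypotheses = 1000, 50, 15

p_values = []
for _ in range(n_hypotheses):                       # one test per cognitive dataset
    emb = rng.normal(size=(n_words, emb_dim))       # word embeddings
    signal = emb @ rng.normal(size=emb_dim) + rng.normal(size=n_words)  # cognitive feature
    scores = cross_val_score(Ridge(alpha=1.0), emb, signal, cv=5)
    # Compare against a baseline fit on shuffled targets.
    baseline = cross_val_score(Ridge(alpha=1.0), emb, rng.permutation(signal), cv=5)
    p_values.append(stats.ttest_ind(scores, baseline, alternative="greater").pvalue)

alpha = 0.05 / n_hypotheses                         # Bonferroni-corrected threshold
print("significant datasets:", sum(p < alpha for p in p_values), "of", n_hypotheses)
```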