Abstract: Solar flares are energetic events in the solar atmosphere that are often linked with solar radio bursts (SRBs). SRBs are observed at metric to decametric wavelengths and are classified into five spectral classes (Type I--V) based on their signature in dynamic spectra. The automatic detection and classification of SRBs is a challenge due to their heterogeneous form. Near-real-time detection and classification of SRBs has become a necessity in recent years due to the large data rates generated by advanced radio telescopes such as the LOw Frequency ARray (LOFAR). In this study, we implement congruent deep learning models to automatically detect and classify Type III SRBs. We generated simulated Type III SRBs, comparable to Type IIIs seen in real observations, using a deep learning method known as a Generative Adversarial Network (GAN). These simulated data were combined with observations from LOFAR to produce a training set that was used to train an object detection model known as YOLOv2 (You Only Look Once, version 2). Using this congruent deep learning model system, we can accurately detect Type III SRBs at a mean Average Precision (mAP) value of 77.71%.
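The abstract does not describe the GAN architecture, so as a minimal sketch of the generative component, the following assumes a DCGAN-style generator/discriminator pair in PyTorch producing 64x64 single-channel dynamic-spectrum patches. All layer sizes, the latent dimension, and the training loop are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal DCGAN-style sketch (illustrative only) for synthesising
# dynamic-spectrum patches resembling Type III bursts.
import torch
import torch.nn as nn

LATENT = 100  # assumed latent-vector size

generator = nn.Sequential(            # z (N, LATENT, 1, 1) -> (N, 1, 64, 64)
    nn.ConvTranspose2d(LATENT, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
)

discriminator = nn.Sequential(        # patch -> probability it is real
    nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 8, 1, 0), nn.Sigmoid(), nn.Flatten(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

def train_step(real_batch):
    """One adversarial update on a batch of real (N, 1, 64, 64) patches."""
    n = real_batch.size(0)
    z = torch.randn(n, LATENT, 1, 1)
    fake = generator(z)
    # Discriminator: push real patches toward 1, generated patches toward 0.
    d_loss = bce(discriminator(real_batch), torch.ones(n, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator into predicting 1 for generated patches.
    g_loss = bce(discriminator(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Once trained, sampled generator outputs would be mixed with real LOFAR spectra, annotated with bounding boxes, and fed to the YOLOv2 detector; that labelling step is outside this sketch.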
Abstract: Solar radio bursts (SRBs) are generally observed in dynamic spectra and have five major spectral classes, labelled Type I to Type V depending on their shape and extent in frequency and time. Due to their complex characterisation, a challenge in solar radio physics is the automatic detection and classification of such radio bursts. Classification of SRBs has become fundamental in recent years due to the large data rates generated by advanced radio telescopes such as the LOw-Frequency ARray (LOFAR). Current state-of-the-art algorithms implement the Hough or Radon transform as a means of detecting predefined parametric shapes in images. These algorithms achieve up to 84% accuracy, depending on the type of radio burst being classified. Other techniques rely on Constant False Alarm Rate (CFAR) detection, which is essentially the detection of radio bursts using de-noising and an adaptive threshold in dynamic spectra. CFAR works well for a variety of different types of radio bursts and achieves an accuracy of up to 70%. In this research, we introduce a methodology based on You Only Look Once v2 (YOLOv2) for solar radio burst classification. By using Type III simulation methods, we can train the algorithm to classify real Type III solar radio bursts in real time.
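The abstract only names the CFAR baseline without detail; the following is a rough illustration of that idea, assuming a cell-averaging CFAR along the time axis of a frequency-by-time dynamic spectrum. The window sizes, threshold factor, and median-filter de-noising step are assumptions, not the parameters of the published algorithms.

```python
# Illustrative cell-averaging CFAR-style detector for a dynamic spectrum.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def cfar_detect(spectrum, guard=2, train=16, alpha=3.0):
    """Return a boolean mask of burst candidates in `spectrum` (freq x time)."""
    s = median_filter(spectrum, size=(3, 3))           # de-noise
    # Estimate the local background per cell from training cells along time,
    # excluding a guard region around the cell under test.
    win = 2 * (guard + train) + 1
    total = uniform_filter1d(s, size=win, axis=1) * win
    inner = uniform_filter1d(s, size=2 * guard + 1, axis=1) * (2 * guard + 1)
    background = (total - inner) / (2 * train)
    return s > alpha * background                      # adaptive threshold

# Example: Type III bursts drift rapidly in frequency, so in a LOFAR dynamic
# spectrum they appear as near-vertical broadband stripes in the mask.
mask = cfar_detect(np.random.rand(200, 1000))
```

A Radon- or Hough-transform detector would instead integrate the spectrum along candidate drift lines and threshold the transform peaks; the CFAR sketch is shown here because it maps directly onto array operations.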
Abstract: Eye-based information channels include the pupils, gaze, saccades, fixational movements, and numerous forms of eye opening and closure. Pupil size variation indicates cognitive load and emotion, while a person's gaze direction is said to be congruent with the motivation to approach or avoid stimuli. The eyelids are involved in facial expressions that can encode basic emotions, and eye-based cues can have implications for human annotators of emotions or feelings. Despite these facts, the use of eye-based cues in affective computing remains in its infancy, and this work is intended to begin addressing that gap. Eye-based feature sets, incorporating data from all of the aforementioned information channels and estimable from video, are proposed. The feature sets are refined through continuous arousal and valence learning and prediction experiments on the RECOLA validation set. The eye-based features are then combined with a speech feature set to confirm their usefulness and to assess affect prediction performance against group-of-humans-level performance on the RECOLA test set. The core contribution of this paper, a refined eye-based feature set, is shown to provide benefits for affect prediction. It is hoped that this work stimulates further research into eye-based affective computing.
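The abstract specifies neither the exact feature list nor the learner; as a hedged sketch of the fusion-and-regression setup it describes, the following assumes hypothetical per-frame eye features (pupil size, gaze angles, eyelid opening, blink rate), placeholder acoustic descriptors for the speech set, and linear support-vector regression, evaluated with the concordance correlation coefficient commonly used on RECOLA. All names, shapes, and the synthetic data are illustrative only.

```python
# Illustrative feature-level fusion and continuous-arousal regression sketch.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
n_frames = 1000
eye = rng.normal(size=(n_frames, 5))      # assumed: pupil size, gaze yaw/pitch,
                                          # eyelid opening, blink rate
speech = rng.normal(size=(n_frames, 20))  # placeholder acoustic descriptors
arousal = rng.uniform(-1, 1, n_frames)    # placeholder gold-standard trace

X = np.hstack([eye, speech])              # simple feature-level fusion
model = make_pipeline(StandardScaler(), LinearSVR(C=0.1, max_iter=10000))
model.fit(X[:800], arousal[:800])         # train on the first 800 frames

def ccc(y, p):
    """Concordance correlation coefficient, the usual RECOLA/AVEC metric."""
    ym, pm = y.mean(), p.mean()
    cov = ((y - ym) * (p - pm)).mean()
    return 2 * cov / (y.var() + p.var() + (ym - pm) ** 2)

print(round(ccc(arousal[800:], model.predict(X[800:])), 3))
```

On real RECOLA data the gold-standard arousal and valence traces come from multiple human raters, which is what makes the group-of-humans comparison in the abstract possible.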