Abstract: This work presents a novel course titled The World of AI, designed for first-year undergraduate engineering students with little to no prior exposure to AI. The central problem addressed by this course is that engineering students often lack foundational knowledge of AI and its broader societal implications at the outset of their academic journeys. We believe the way to address this gap is to design and deliver an interdisciplinary course that can (a) be accessed by first-year undergraduate engineering students from any domain, (b) enable them to understand the basic workings of AI systems without requiring mathematics, and (c) help them appreciate AI's far-reaching implications for our lives. The course was divided into three modules co-delivered by faculty from both engineering and the humanities. The planetary module explored AI's dual role as both a catalyst for sustainability and a contributor to environmental challenges. The societal impact module focused on AI biases and concerns around privacy and fairness. Lastly, the workplace module highlighted AI-driven job displacement, emphasizing the importance of adaptation. The novelty of this course lies in its interdisciplinary curriculum design and pedagogical approach, which combines technical instruction with societal discourse. Results revealed that students' comprehension of AI challenges improved across diverse metrics, such as (a) increased awareness of AI's environmental impact and (b) the ability to propose corrective solutions for AI fairness. Furthermore, the results indicated an evolution in students' perception of AI's transformative impact on our lives.
Abstract: In recent years, the use of bio-sensing signals such as the electroencephalogram (EEG) and electrocardiogram (ECG) has garnered interest for applications in affective computing. The parallel rise of deep learning has led to a huge leap in performance on various vision-based research problems such as object detection. Yet, these advances in deep learning have not adequately translated into bio-sensing research. This work applies novel deep-learning-based methods to the bio-sensing and video data of four publicly available multi-modal emotion datasets. For each dataset, we first individually evaluate the emotion-classification performance obtained by each modality. We then evaluate the performance obtained by fusing the features from these modalities. We show that our algorithms outperform the results reported by other studies for emotion/valence/arousal/liking classification on the DEAP and MAHNOB-HCI datasets and establish benchmarks for the newer AMIGOS and DREAMER datasets. We also evaluate the performance of our algorithms by combining the datasets and by using transfer learning to show that the proposed method overcomes the inconsistencies between the datasets. In total, we present a thorough analysis of multi-modal affective data from more than 120 subjects and 2,800 trials. Finally, utilizing a convolution-deconvolution network, we propose a new technique for identifying salient brain regions corresponding to various affective states.
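The multi-modal fusion step mentioned in this abstract can be illustrated with a minimal feature-level fusion sketch. The module names, feature dimensions, encoder depths, and the binary valence target below are hypothetical placeholders chosen for illustration; the abstract does not specify the exact fusion architecture used in the paper.

```python
# Minimal sketch of feature-level fusion for multi-modal emotion classification.
# Dimensions, encoder structure, and the binary valence target are illustrative
# placeholders, not the architecture described in the abstract.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, eeg_dim=128, video_dim=256, hidden_dim=64):
        super().__init__()
        # Per-modality encoders map each feature vector to a shared hidden size.
        self.eeg_encoder = nn.Sequential(nn.Linear(eeg_dim, hidden_dim), nn.ReLU())
        self.video_encoder = nn.Sequential(nn.Linear(video_dim, hidden_dim), nn.ReLU())
        # The concatenated (fused) representation feeds a single logit for binary valence.
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, eeg_feat, video_feat):
        fused = torch.cat([self.eeg_encoder(eeg_feat), self.video_encoder(video_feat)], dim=-1)
        return self.head(fused).squeeze(-1)

# Toy usage with random stand-in features for 32 trials.
model = FusionClassifier()
eeg = torch.randn(32, 128)
video = torch.randn(32, 256)
labels = torch.randint(0, 2, (32,)).float()
loss = nn.BCEWithLogitsLoss()(model(eeg, video), labels)
loss.backward()
```

Concatenating encoded per-modality features before a shared classification head is one common way to realize feature-level fusion; the paper's specific encoders and fusion strategy may differ.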