Abstract: The emergence of generative AI (GAI) has sparked diverse opinions regarding its appropriate use across various domains, including education. This pilot study investigates university students' perceptions of GAI in higher education classrooms, aiming to lay the groundwork for understanding these attitudes. With a participation rate of approximately 4.4%, the study highlights the challenges of engaging students in GAI-related research and underscores the need for larger sample sizes in future studies. By gaining insights into student perspectives, instructors can better prepare to integrate discussions of GAI into their classrooms, fostering informed and critical engagement with this transformative technology.




Abstract: Shape metrics for objects in high dimensions remain sparse. Those that do exist, such as hyper-volume, are limited to better-understood objects such as Platonic solids and $n$-cubes. Further, understanding objects with ill-defined shapes in higher dimensions is ambiguous at best. Past work does not provide a single number to give a qualitative understanding of an object. For example, the eigenvalues from principal component analysis result in $n$ metrics to describe the shape of an object. Therefore, we need a single number which can discriminate objects with different shapes from one another. Previous work has developed shape metrics for specific dimensions, such as two or three. However, there is an opportunity to develop metrics for any desired dimension. To that end, we present two new shape metrics for objects in a given number of dimensions: hyper-Sphericity and hyper-Shape Proportion (SP). We explore the properties of these metrics on a number of different shapes, including $n$-balls. We then connect these metrics to applications of analyzing the shape of multidimensional data, such as the popular Iris dataset.
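
As a worked illustration only (the abstract does not give the definition of hyper-Sphericity), the minimal Python sketch below assumes it generalizes the classical sphericity ratio, i.e. the surface content of an $n$-ball with the same hyper-volume as the object divided by the object's surface content, evaluated here for the unit $n$-cube.

# Illustrative sketch only: the paper's exact hyper-Sphericity definition is not
# stated in the abstract. This assumes it generalizes classical sphericity
# (surface content of an equal-hyper-volume n-ball divided by the object's
# surface content), shown here for the unit n-cube.
from math import gamma, pi

def ball_volume(r, n):
    # Hyper-volume of an n-ball of radius r: pi^(n/2) * r^n / Gamma(n/2 + 1)
    return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

def ball_surface(r, n):
    # Surface content ((n-1)-dimensional measure) of an n-ball: n * V_n(r) / r
    return n * ball_volume(r, n) / r

def cube_sphericity(side, n):
    # Radius of the n-ball with the same hyper-volume as the n-cube
    volume = side ** n
    r = (volume * gamma(n / 2 + 1) / pi ** (n / 2)) ** (1 / n)
    cube_surface = 2 * n * side ** (n - 1)    # 2n faces, each of measure side^(n-1)
    return ball_surface(r, n) / cube_surface  # equals 1 for the n-ball, < 1 otherwise

for n in (2, 3, 5, 10):
    print(n, round(cube_sphericity(1.0, n), 4))

For $n = 3$ this reproduces the familiar sphericity of a cube, roughly 0.806, and the ratio shrinks as the dimension grows, which is the kind of single-number discrimination the abstract argues for.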




Abstract: White Blood Cell (WBC) Leukaemia is detected through image-based classification. Convolutional Neural Networks are used to learn the features needed to classify images of cells as malignant or healthy. However, this type of model requires learning a large number of parameters and is difficult to interpret and explain. Explainable AI (XAI) attempts to alleviate this issue by providing insights into how models make decisions. Therefore, we present an XAI model which uses only 24 explainable and interpretable features and is highly competitive with other approaches, outperforming them by about 4.38\%. Further, our approach provides insight into which variables are the most important for the classification of the cells. This insight provides evidence that when labs treat the WBCs differently, the importance of various metrics changes substantially. Understanding the important features for classification is vital in medical imaging diagnosis and, by extension, in understanding the AI models built in scientific pursuits.
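
The abstract does not list the 24 features or the model used, so the sketch below is a hedged stand-in: placeholder morphological measurements and scikit-learn's RandomForestClassifier illustrate how feature importances can be read off a feature-based, interpretable classifier of the kind described.

# Minimal sketch, not the authors' pipeline: it assumes hand-crafted, interpretable
# per-cell measurements in tabular form. Feature names and data are placeholders,
# and RandomForestClassifier stands in for whichever interpretable model the
# paper actually uses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["area", "perimeter", "circularity", "solidity"]  # hypothetical subset of the 24 features
X = rng.normal(size=(500, len(feature_names)))                    # placeholder measurements
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")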




Abstract: Traditional machine learning (ML) algorithms, such as multiple regression, require human analysts to make decisions on how to treat the data. These decisions can make the model-building process subjective and difficult to replicate for those who did not build the model. Deep learning approaches benefit by allowing the model to learn which features are important once the human analyst builds the architecture. Thus, a method for automating certain human decisions in traditional ML modeling would help to improve reproducibility and remove subjective aspects of the model-building process. To that end, we propose to use shape metrics to describe 2D data to help make analyses more explainable and interpretable. The proposed approach provides a foundation for automating various aspects of model building in an interpretable and explainable fashion. This is particularly important in applications in the medical community, where the `right to explainability' is crucial. We provide various simulated data sets, ranging from probability distributions and functions to model quality-control checks (such as QQ-plots and residual analyses from ordinary least squares), to showcase the breadth of this approach.
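
A minimal sketch, under the assumption that a simple covariance-eigenvalue elongation ratio stands in for the paper's (unstated) 2D shape metrics: it fits ordinary least squares to simulated data, forms the QQ-plot point cloud of the residuals, and summarizes that cloud's shape with a single number that an automated check could threshold.

# Illustrative sketch only: the abstract does not define its 2D shape metrics,
# so a covariance-eigenvalue elongation ratio of the QQ-plot point cloud is
# used as a stand-in. A thin, nearly linear cloud (ratio near 0) is consistent
# with normally distributed residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=200)   # simulated data with Gaussian noise

# Fit ordinary least squares and build the QQ-plot coordinates of its residuals
slope, intercept, *_ = stats.linregress(x, y)
resid = y - (slope * x + intercept)
theoretical, ordered = stats.probplot(resid, dist="norm")[0]

# Shape descriptor: ratio of the smaller to the larger covariance eigenvalue
points = np.column_stack([theoretical, ordered])
eigvals = np.linalg.eigvalsh(np.cov(points, rowvar=False))
print("elongation ratio:", eigvals[0] / eigvals[1])   # near 0 -> nearly linear QQ-plot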