Abstract: Since the public release of Chat Generative Pre-Trained Transformer (ChatGPT), extensive discourse has emerged concerning the potential advantages and challenges of integrating Generative Artificial Intelligence (GenAI) into education. In the realm of information systems, research on technology adoption is crucial for understanding the diverse factors influencing the uptake of specific technologies. Theoretical frameworks, refined and validated over decades, serve as guiding tools to elucidate the individual and organizational dynamics, obstacles, and perceptions surrounding technology adoption. However, while several models have been proposed, they often prioritize the factors that facilitate acceptance over those that impede it, typically focus on the student perspective, and leave a gap in empirical evidence regarding educators' viewpoints. Given the pivotal role educators play in higher education, this study aims to develop a theoretical model to empirically predict the barriers preventing educators from adopting GenAI in their classrooms. Acknowledging the lack of theoretical models tailored to identifying such barriers, our approach is grounded in the Innovation Resistance Theory (IRT) framework and augmented with constructs from the Technology-Organization-Environment (TOE) framework. The model is operationalized as a measurement instrument using a quantitative approach, complemented by a qualitative approach to enrich the analysis and uncover concerns related to GenAI adoption in the higher education domain.
Abstract: Efforts directed towards promoting Open Government Data (OGD) have gained significant traction across various governmental tiers since the mid-2000s. As more datasets are published on OGD portals, finding specific data becomes harder, leading to information overload. Complete and accurate documentation of datasets, including the association of proper tags, is key to improving dataset findability and accessibility. An analysis of the Estonian Open Data Portal revealed that 11% of datasets have no associated tags, while 26% have only one tag assigned to them, underscoring challenges in data findability and accessibility within a portal that the recent Open Data Maturity Report considers a trend-setter. The aim of this study is to propose an automated solution for tagging datasets to improve data findability on OGD portals. This paper presents Tagify, a prototype tagging interface that employs large language models (LLMs) such as GPT-3.5-turbo and GPT-4 to automate dataset tagging, generating tags for datasets in English and Estonian, thereby augmenting metadata preparation by data publishers and improving data findability on OGD portals for data users. The developed solution was evaluated by users, and their feedback was collected to define an agenda for future prototype improvements.
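The abstract does not detail Tagify's implementation; the minimal Python sketch below only illustrates how LLM-based tag generation of this kind could be approached, assuming the OpenAI Python client (v1.x) and an API key in the environment. The suggest_tags helper, its prompt, and the example dataset are hypothetical, not part of the Tagify prototype.

```python
# Minimal sketch (not the authors' implementation): prompting an OpenAI chat model
# to propose tags for a dataset's metadata, in the spirit of the Tagify prototype.
# Assumes the official `openai` Python client (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def suggest_tags(title: str, description: str, language: str = "English", n_tags: int = 5) -> list[str]:
    """Ask the model for a comma-separated list of concise tags describing the dataset."""
    prompt = (
        f"Suggest {n_tags} concise tags in {language} for the following open dataset.\n"
        f"Title: {title}\n"
        f"Description: {description}\n"
        "Return only the tags, separated by commas."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # or "gpt-4", as mentioned in the abstract
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    # Split the model's reply into a clean list of tags.
    return [t.strip() for t in response.choices[0].message.content.split(",") if t.strip()]

# Hypothetical usage: generate Estonian tags for an illustrative dataset record.
print(suggest_tags(
    "Tallinn public transport stops",
    "Locations and schedules of bus and tram stops in Tallinn.",
    language="Estonian",
))
```

In a portal workflow, such generated tags would typically be shown to the data publisher for review before being written into the dataset's metadata, rather than applied automatically.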
Abstract: In the contemporary data-driven landscape, ensuring data quality (DQ) is crucial for deriving actionable insights from vast data repositories. The objective of this study is to explore the potential for automating data quality management within data warehouses, the data repositories commonly used by large organizations. By conducting a systematic review of existing DQ tools available on the market and in the academic literature, the study assesses their capability to automatically detect and enforce data quality rules. The review encompassed 151 tools from various sources, revealing that most current tools focus on data cleansing and fixing in domain-specific databases rather than in data warehouses. Only a limited number of tools, specifically ten, demonstrated the capability to detect DQ rules, let alone implement them in data warehouses. The findings underscore a significant gap in the market and in academic research regarding AI-augmented DQ rule detection in data warehouses. This paper advocates for further development in this area to enhance the efficiency of DQ management processes, reduce human workload, and lower costs. The study highlights the necessity of advanced tools for automated DQ rule detection, paving the way for improved data quality management practices tailored to data warehouse environments. The study can also guide organizations in selecting the data quality tool that best meets their requirements.
Abstract: Estonia's digital government success has received global acclaim, yet its Open Government Data (OGD) initiatives, especially at the local level, encounter persistent challenges. Despite the significant progress of the national OGD initiative in OGD rankings, local governments lag in OGD provision. This study aims to examine the barriers hindering municipalities from openly sharing OGD. Employing a qualitative approach through interviews with Estonian municipalities and drawing on the OGD-adapted Innovation Resistance Theory model, the study sheds light on the barriers impeding OGD sharing. Practical recommendations are proposed to bridge the gap between national policies and local implementation, including enhancing awareness, improving data governance frameworks, and fostering collaboration between local and national authorities. By addressing overlooked weaknesses in the Estonian open data ecosystem and providing actionable recommendations, this research contributes to a more resilient and sustainable open data ecosystem. Additionally, by validating the OGD-adapted Innovation Resistance Theory model and proposing a revised version tailored to local government contexts, the study advances theoretical frameworks for understanding resistance to data sharing. Ultimately, this study serves as a call to action for policymakers and practitioners to prioritize local OGD initiatives.
Abstract: Public data ecosystems (PDEs) represent complex socio-technical systems crucial for optimizing data use within the public sector and beyond. Recognizing their multifaceted nature, previous research proposed a six-generation Evolutionary Model of Public Data Ecosystems (EMPDE). Designed as the result of a systematic literature review spanning three decades, this model, while theoretically robust, necessitates empirical validation to enhance its practical applicability. This study addresses this gap by validating the theoretical model through a real-life examination in five European countries: Latvia, Serbia, the Czech Republic, Spain, and Poland. This empirical validation provides insights into PDE dynamics and variations of implementation across contexts, particularly focusing on the forward-looking sixth generation, named "Intelligent Public Data Generation", which represents a paradigm shift driven by emerging technologies such as cloud computing, Artificial Intelligence, Natural Language Processing tools, Generative AI, and Large Language Models (LLMs) with the potential to contribute to both the automation and augmentation of business processes within these ecosystems. By transcending their traditional status as a mere component and evolving into both an actor and a stakeholder simultaneously, these technologies catalyze innovation and progress, enhancing PDE management strategies to align with societal, regulatory, and technical imperatives in the digital era.