Abstract: Despite their cultural and historical significance, Black digital archives remain structurally underrepresented in AI research and infrastructure. This is especially evident in efforts to digitize historical Black newspapers, where inconsistent typography, visual degradation, and limited annotated layout data hinder accurate transcription, even with systems that claim strong optical character recognition (OCR) performance. In this short paper, we present a layout-aware OCR pipeline tailored to Black newspaper archives and introduce an unsupervised evaluation framework suited to low-resource archival contexts. Our approach integrates synthetic layout generation, model pretraining on augmented data, and a fusion of state-of-the-art You Only Look Once (YOLO) detectors. We use three annotation-free evaluation metrics, the Semantic Coherence Score (SCS), Region Entropy (RE), and the Textual Redundancy Score (TRS), which quantify linguistic fluency, informational diversity, and redundancy across OCR regions. Our evaluation on a 400-page dataset spanning ten Black newspaper titles shows that layout-aware OCR improves structural diversity and reduces redundancy compared to full-page baselines, with modest trade-offs in coherence. These results highlight the importance of respecting cultural layout logic in AI-driven document understanding and lay the foundation for future community-driven, ethically grounded archival AI systems.
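As a rough illustration of what annotation-free, region-level metrics of this kind can look like, the sketch below computes a Shannon-entropy diversity score and a cross-region n-gram redundancy score over OCR'd text regions. The function names, the exact formulas, and the choice of word trigrams are our assumptions for illustration only, not the paper's definitions of RE and TRS.

```python
import math
from collections import Counter

def region_entropy(regions):
    """Shannon entropy (bits) of the token distribution pooled across
    OCR regions. Higher values suggest greater informational diversity.
    Simplified illustration; the paper's definition may differ."""
    tokens = [t.lower() for r in regions for t in r.split()]
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def textual_redundancy(regions, n=3):
    """Fraction of word n-grams that occur in more than one region.
    Lower values indicate less duplicated text across regions."""
    seen = {}  # n-gram -> set of region indices containing it
    for i, r in enumerate(regions):
        words = r.lower().split()
        for j in range(len(words) - n + 1):
            seen.setdefault(tuple(words[j:j + n]), set()).add(i)
    if not seen:
        return 0.0
    repeated = sum(1 for idxs in seen.values() if len(idxs) > 1)
    return repeated / len(seen)
```

Under these simplified definitions, a full-page baseline that re-reads the same column twice would score higher on redundancy than a layout-aware segmentation that isolates each region once.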

Abstract: How might we use cognitive modeling to consider the ways in which antiblackness, and racism more broadly, impact the design and development of AI systems? We provide a discussion and an example towards an answer to this question. We use the ACT-R/Φ cognitive architecture and an existing knowledge graph system, ConceptNet, to consider this question not only from a cognitive and sociocultural perspective, but also from a physiological perspective. In addition to using cognitive modeling as a means to explore how antiblackness may manifest in the design and development of AI systems (particularly from a software engineering perspective), we also introduce connections between antiblackness, the Human, and computational cognitive modeling. We argue that the typical eschewing of sociocultural processes and knowledge structures in cognitive architectures and cognitive modeling implicitly furthers a colorblind approach to cognitive modeling and hides sociocultural context that is always present in human behavior and affects cognitive processes.

Abstract: In this paper, we argue that AI ethics must move beyond the concepts of race-based representation and bias and towards those that probe the deeper relations that impact how AI systems are designed, developed, and deployed. Many recent discussions of ethical considerations of bias in AI systems have centered on racial bias. We contend that antiblackness in AI requires a deeper examination of the ontological space that provides a foundation for the design, development, and deployment of AI systems. We examine what this contention means from the perspective of the sociocultural context in which AI systems are designed, developed, and deployed, focusing on intersections with anti-Black racism (antiblackness). To bring these multiple perspectives together and to show an example of antiblackness persisting in the face of attempts at de-biasing, we discuss results from auditing an existing open-source semantic network (ConceptNet). We use this discussion to further contextualize antiblackness in the design, development, and deployment of AI systems and suggest questions one may ask when attempting to combat antiblackness in AI systems.