Abstract: Conspiracy theories can threaten society by spreading misinformation, deepening polarization, and eroding trust in democratic institutions. Social media often fuels the spread of conspiracies, primarily driven by two key actors: Superspreaders, influential individuals who disseminate conspiracy content at disproportionately high rates, and Bots, automated accounts designed to amplify conspiracies strategically. To counter the spread of conspiracy theories, it is critical both to identify these actors and to better understand their behavior. However, both a systematic analysis of these actors and real-world-applicable identification methods are still lacking. In this study, we leverage over seven million tweets from the COVID-19 pandemic to analyze key differences between Human Superspreaders and Bots across dimensions such as linguistic complexity, toxicity, and hashtag usage. Our analysis reveals distinct communication strategies: Superspreaders tend to use more complex language and substantive content while relying less on structural elements such as hashtags and emojis, likely to enhance credibility and authority. By contrast, Bots favor simpler language and strategic cross-usage of hashtags, likely to increase accessibility, facilitate infiltration into trending discussions, and amplify reach. To counter both Human Superspreaders and Bots, we propose and evaluate 27 novel metrics for quantifying the severity of conspiracy theory spread. Our findings highlight the effectiveness of an adapted H-Index for the computationally feasible identification of Human Superspreaders. By identifying behavioral patterns unique to Human Superspreaders and Bots and by providing suitable identification methods, this study lays a foundation for mitigation strategies, including platform moderation policies, temporary and permanent account suspensions, and public awareness campaigns.
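
To illustrate the kind of metric the abstract refers to, the following is a minimal sketch of an H-Index adapted from bibliometrics to tweet engagement: the largest h such that a user has at least h tweets with at least h engagements each. The function name, the use of retweet counts as the engagement signal, and the example numbers are assumptions for illustration only; the paper's exact adaptation may weight or filter tweets differently.

```python
def adapted_h_index(engagement_counts):
    """Largest h such that the user has at least h tweets with >= h
    engagements (e.g., retweets) each, analogous to the bibliometric
    H-Index over papers and citations. Illustrative sketch only; the
    paper's adapted H-Index for conspiracy spreading may differ."""
    counts = sorted(engagement_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: retweet counts of one user's conspiracy-related tweets.
print(adapted_h_index([120, 45, 9, 7, 5, 3, 1, 0]))  # -> 5
```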


Abstract: We show how to achieve fast autocompletion for SPARQL queries on very large knowledge bases. At any position in the body of a SPARQL query, the autocompletion suggests matching subjects, predicates, or objects. The suggestions are context-sensitive in the sense that they lead to a non-empty result, and they are ranked by their relevance to the part of the query already typed. The suggestions can be narrowed down by prefix search on the names and aliases of the desired subject, predicate, or object. All suggestions are themselves obtained via SPARQL queries, which we call autocompletion queries. For existing SPARQL engines, these queries are impractically slow on large knowledge bases. We present various algorithmic and engineering improvements to an existing SPARQL engine such that these autocompletion queries are executed efficiently. We provide an extensive evaluation of a variety of suggestion methods on three large knowledge bases, including Wikidata (6.9B triples). We explore the trade-off between the relevance of the suggestions and the processing time of the autocompletion queries. We compare our results with two widely used SPARQL engines, Virtuoso and Blazegraph. On Wikidata, we achieve fully sensitive suggestions with sub-second response times for over 90% of a large and diverse set of thousands of autocompletion queries. Materials for full reproducibility, an interactive evaluation web app, and a demo are available at: https://ad.informatik.uni-freiburg.de/publications .
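
To make the notion of an autocompletion query concrete, here is a minimal sketch of what such a query might look like for one object position, run here against the public Wikidata Query Service purely for illustration (the paper executes such queries on its own improved engine). The chosen context (politicians and their place of birth), the typed prefix "ber", the ranking by result count, and the endpoint are assumptions for illustration, not the paper's exact query templates.

```python
import requests

# Illustrative autocompletion query: suggest objects for the "place of birth"
# position, given that the partially typed query already restricts ?person to
# politicians. Suggestions are context-sensitive (each leads to a non-empty
# result), narrowed by the typed prefix "ber", and ranked by result count.
AUTOCOMPLETION_QUERY = """
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?entity ?name (COUNT(?person) AS ?count) WHERE {
  ?person wdt:P106 wd:Q82955 .               # context from the query typed so far
  ?person wdt:P19 ?entity .                  # position being autocompleted
  ?entity rdfs:label ?name .
  FILTER(LANG(?name) = "en")
  FILTER(STRSTARTS(LCASE(?name), "ber"))     # prefix typed by the user
}
GROUP BY ?entity ?name
ORDER BY DESC(?count)
LIMIT 20
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": AUTOCOMPLETION_QUERY, "format": "json"},
    headers={"User-Agent": "autocompletion-sketch/0.1"},
    timeout=60,
)
for row in response.json()["results"]["bindings"]:
    print(row["name"]["value"], row["count"]["value"])
```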