Filippo Menczer

Factuality Challenges in the Era of Large Language Models

Oct 10, 2023
Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy Chakraborty, Giovanni Luca Ciampaglia, David Corney, Renee DiResta, Emilio Ferrara, Scott Hale, Alon Halevy, Eduard Hovy, Heng Ji, Filippo Menczer, Ruben Miguez, Preslav Nakov, Dietram Scheufele, Shivam Sharma, Giovanni Zagni

The emergence of tools based on Large Language Models (LLMs), such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered immense public attention. These incredibly useful, natural-sounding tools mark significant advances in natural language generation, yet they exhibit a propensity to generate false, erroneous, or misleading content -- commonly referred to as "hallucinations." Moreover, LLMs can be exploited for malicious applications, such as generating false but credible-sounding content and profiles at scale. This poses a significant challenge to society in terms of the potential deception of users and the increasing dissemination of inaccurate information. In light of these risks, we explore the kinds of technological innovations, regulatory reforms, and AI literacy initiatives needed from fact-checkers, news organizations, and the broader research and policy communities. By identifying the risks, the imminent threats, and some viable solutions, we seek to shed light on navigating various aspects of veracity in the era of generative AI.

* Our article offers a comprehensive examination of the challenges and risks associated with Large Language Models (LLMs), focusing on their potential impact on the veracity of information in today's digital landscape 

Artificial intelligence is ineffective and potentially harmful for fact checking

Sep 01, 2023
Matthew R. DeVerna, Harry Yaojun Yan, Kai-Cheng Yang, Filippo Menczer

Fact checking can be an effective strategy against misinformation, but its implementation at scale is impeded by the overwhelming volume of information online. Recent artificial intelligence (AI) language models have shown impressive ability in fact-checking tasks, but how humans interact with fact-checking information provided by these models is unclear. Here we investigate the impact of fact checks generated by a popular AI model on belief in, and sharing intent of, political news in a preregistered randomized controlled experiment. Although the AI performs reasonably well in debunking false headlines, we find that it does not significantly affect participants' ability to discern headline accuracy or share accurate news. However, the AI fact-checker is harmful in specific cases: it decreases belief in true headlines that it mislabels as false and increases belief in false headlines that it is unsure about. On the positive side, the AI increases sharing intent for correctly labeled true headlines. When participants are given the option to view AI fact checks and choose to do so, they are significantly more likely to share both true and false news but only more likely to believe false news. Our findings highlight an important source of potential harm stemming from AI applications and underscore the critical need for policies to prevent or mitigate such unintended consequences.

Anatomy of an AI-powered malicious social botnet

Jul 30, 2023
Kai-Cheng Yang, Filippo Menczer

Large language models (LLMs) exhibit impressive capabilities in generating realistic text across diverse subjects. Concerns have been raised that they could be utilized to produce fake content with a deceptive intention, although evidence thus far remains anecdotal. This paper presents a case study about a Twitter botnet that appears to employ ChatGPT to generate human-like content. Through heuristics, we identify 1,140 accounts and validate them via manual annotation. These accounts form a dense cluster of fake personas that exhibit similar behaviors, including posting machine-generated content and stolen images, and engage with each other through replies and retweets. ChatGPT-generated content promotes suspicious websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots.
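
To make the kind of detection heuristic described here concrete, below is a minimal sketch of a phrase-based filter that surfaces self-revealing LLM-operated accounts. The phrase list, field names, and input file are illustrative assumptions, not the paper's actual pipeline; in practice, flagged accounts would still need manual annotation and coordination analysis, as described above.

```python
import json
from collections import defaultdict

# Hypothetical self-revealing phrases that leak from ChatGPT-style refusals;
# the heuristics used in the paper may differ.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i'm sorry, but i cannot",
]

def flag_suspect_accounts(tweets):
    """Return accounts that posted at least one tell-tale phrase.

    `tweets` is an iterable of dicts with at least 'user_id' and 'text' keys,
    e.g. parsed from a JSON-lines dump of collected tweets.
    """
    suspects = defaultdict(list)
    for tweet in tweets:
        text = tweet["text"].lower()
        if any(phrase in text for phrase in TELLTALE_PHRASES):
            suspects[tweet["user_id"]].append(tweet["text"])
    return suspects

if __name__ == "__main__":
    with open("tweets.jsonl") as fh:               # hypothetical input file
        candidates = flag_suspect_accounts(json.loads(line) for line in fh)
    for user_id, posts in candidates.items():
        print(user_id, len(posts))
```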

Large language models can rate news outlet credibility

Apr 01, 2023
Kai-Cheng Yang, Filippo Menczer

Although large language models (LLMs) have shown exceptional performance in various natural language processing tasks, they are prone to hallucinations. State-of-the-art chatbots, such as the new Bing, attempt to mitigate this issue by gathering information directly from the internet to ground their answers. In this setting, the capacity to distinguish trustworthy sources is critical for providing appropriate accuracy contexts to users. Here we assess whether ChatGPT, a prominent LLM, can evaluate the credibility of news outlets. With appropriate instructions, ChatGPT can provide ratings for a diverse set of news outlets, including those in non-English languages and satirical sources, along with contextual explanations. Our results show that these ratings correlate with those from human experts (Spearman's $\rho=0.54$, $p<0.001$). These findings suggest that LLMs could be an affordable reference for credibility ratings in fact-checking applications. Future LLMs should enhance their alignment with human expert judgments of source credibility to improve information accuracy.
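
As an illustration of the evaluation step, the sketch below computes the rank correlation between model and expert credibility ratings with `scipy.stats.spearmanr`. The domains and rating values are made-up placeholders, not data from the study; in practice the model ratings would come from prompting the LLM with each domain and the expert ratings from professional source-rating services.

```python
from scipy.stats import spearmanr

# Placeholder credibility ratings on a 0-1 scale for the same set of domains.
llm_ratings = {
    "example-news.com": 0.8, "satire-site.com": 0.2, "tabloid.net": 0.4,
    "local-blog.org": 0.5, "state-media.example": 0.3,
}
expert_ratings = {
    "example-news.com": 0.9, "satire-site.com": 0.1, "tabloid.net": 0.5,
    "local-blog.org": 0.6, "state-media.example": 0.2,
}

domains = sorted(set(llm_ratings) & set(expert_ratings))
rho, p_value = spearmanr(
    [llm_ratings[d] for d in domains],
    [expert_ratings[d] for d in domains],
)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3g})")
```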

* 10 pages, 3 figures 

Detection of Novel Social Bots by Ensembles of Specialized Classifiers

Jun 11, 2020
Mohsen Sayyadiharikandeh, Onur Varol, Kai-Cheng Yang, Alessandro Flammini, Filippo Menczer

Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion. While researchers have developed sophisticated methods to detect abuse, novel bots with diverse behaviors evade detection. We show that different types of bots are characterized by different behavioral features. As a result, commonly used supervised learning techniques suffer severe performance deterioration when attempting to detect behaviors not observed in the training data. Moreover, tuning these models to recognize novel bots requires retraining with a significant amount of new annotations, which are expensive to obtain. To address these issues, we propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule. The ensemble of specialized classifiers (ESC) can better generalize, leading to an average improvement of 56% in F1 score for unseen accounts across datasets. Furthermore, novel bot behaviors are learned with fewer labeled examples during retraining. We are deploying ESC in the newest version of Botometer, a popular tool to detect social bots in the wild.
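
A minimal sketch of the ensemble-of-specialized-classifiers idea follows: one binary detector per bot class, trained against human accounts, with the maximum rule taken over their probabilities at prediction time. The random forest base learner and class labels are illustrative assumptions, not the exact configuration deployed in Botometer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class SpecializedEnsemble:
    """One binary classifier per bot class (that bot class vs. human accounts),
    combined with the maximum rule at prediction time."""

    def __init__(self):
        self.detectors = {}

    def fit(self, X, y):
        # y holds per-account labels, e.g. "human", "spammer", "fake_follower".
        X, y = np.asarray(X), np.asarray(y)
        for bot_class in set(y) - {"human"}:
            mask = (y == bot_class) | (y == "human")
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[mask], (y[mask] == bot_class).astype(int))
            self.detectors[bot_class] = clf
        return self

    def bot_score(self, X):
        # Maximum rule: an account's score is the highest bot probability
        # assigned by any of the specialized detectors.
        scores = np.column_stack(
            [clf.predict_proba(X)[:, 1] for clf in self.detectors.values()]
        )
        return scores.max(axis=1)
```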

* 8 pages, 9 figures 

Recency predicts bursts in the evolution of author citations

Nov 27, 2019
Filipi Nascimento Silva, Aditya Tandon, Diego Raphael Amancio, Alessandro Flammini, Filippo Menczer, Staša Milojević, Santo Fortunato

The citation process for scientific papers has been studied extensively. But while the citations accrued by authors are the sum of the citations of their papers, translating the dynamics of citation accumulation from the paper to the author level is not trivial. Here we conduct a systematic study of the evolution of author citations, and in particular their bursty dynamics. We find empirical evidence of a correlation between the number of citations most recently accrued by an author and the number of citations they receive in the future. Using a simple model where the probability for an author to receive new citations depends only on the number of citations collected in the previous 12-24 months, we are able to reproduce both the citation and burst size distributions of authors across multiple decades.
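
The recency mechanism can be illustrated with a toy simulation in which each author's chance of attracting new citations is proportional to the citations accrued within a trailing window. The window length, baseline term, and other parameters below are illustrative assumptions, not the fitted values from the paper.

```python
import random
from collections import deque

def simulate_author_citations(n_authors=1000, n_months=240, window=24, seed=1):
    """Toy recency model: each month, new citations go to authors with
    probability proportional to the citations they accrued in the trailing
    `window` months (plus a small constant so uncited authors can start)."""
    rng = random.Random(seed)
    recent = [deque([0] * window, maxlen=window) for _ in range(n_authors)]
    totals = [0] * n_authors

    for _ in range(n_months):
        weights = [sum(h) + 1 for h in recent]        # +1 = baseline attractiveness
        cited = rng.choices(range(n_authors), weights=weights, k=n_authors)
        gained = [0] * n_authors
        for author in cited:
            gained[author] += 1
            totals[author] += 1
        for h, g in zip(recent, gained):
            h.append(g)                               # slide the recency window forward
    return totals

totals = simulate_author_citations()
print("most-cited author:", max(totals), "| median author:", sorted(totals)[len(totals) // 2])
```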

* 12 pages, 7 figures 

Scalable and Generalizable Social Bot Detection through Data Selection

Nov 20, 2019
Kai-Cheng Yang, Onur Varol, Pik-Mai Hui, Filippo Menczer

Efficient and reliable social bot classification is crucial for detecting information manipulation on social media. Despite rapid development, state-of-the-art bot detection models still face generalization and scalability challenges, which greatly limit their applications. In this paper we propose a framework that uses minimal account metadata, enabling efficient analysis that scales up to handle the full stream of public tweets on Twitter in real time. To ensure model accuracy, we build a rich collection of labeled datasets for training and validation. We deploy a strict validation system so that model performance on unseen datasets is also optimized, in addition to traditional cross-validation. We find that strategically selecting a subset of training data yields better model accuracy and generalization than exhaustively training on all available data. Thanks to the simplicity of the proposed model, its logic can be interpreted to provide insights into social bot characteristics.
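
To give a sense of what "minimal account metadata" can look like, the sketch below derives a handful of lightweight features from the `user` object embedded in every tweet. The specific feature list is an illustrative assumption rather than the exact set used in the paper; the resulting vectors could feed any off-the-shelf supervised classifier trained on labeled datasets like those described above.

```python
from datetime import datetime, timezone

def metadata_features(user):
    """Lightweight features from the `user` object embedded in a tweet.
    Illustrative only; the paper's exact feature set may differ."""
    created = datetime.strptime(user["created_at"], "%a %b %d %H:%M:%S %z %Y")
    age_days = max((datetime.now(timezone.utc) - created).days, 1)
    return [
        user["followers_count"],
        user["friends_count"],
        user["followers_count"] / (user["friends_count"] + 1),   # follower/friend ratio
        user["statuses_count"] / age_days,                       # tweets per day of account age
        len(user["screen_name"]),
        int(user.get("default_profile", False)),
        int(user.get("verified", False)),
    ]

example_user = {
    "created_at": "Wed Oct 10 20:19:24 +0000 2018",
    "followers_count": 12, "friends_count": 3450,
    "statuses_count": 98000, "screen_name": "a1b2c3d4e5",
    "default_profile": True, "verified": False,
}
print(metadata_features(example_user))
```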

* AAAI 2020 

Finding Streams in Knowledge Graphs to Support Fact Checking

Aug 24, 2017
Prashant Shiralkar, Alessandro Flammini, Filippo Menczer, Giovanni Luca Ciampaglia

The volume and velocity of information generated online far exceed the capacity of current journalistic practices to fact-check claims at a comparable rate. Computational approaches for fact checking may be the key to help mitigate the risks of massive misinformation spread. Such approaches can be designed to not only be scalable and effective at assessing veracity of dubious claims, but also to boost a human fact checker's productivity by surfacing relevant facts and patterns to aid their analysis. To this end, we present a novel, unsupervised network-flow based approach to determine the truthfulness of a statement of fact expressed in the form of a (subject, predicate, object) triple. We view a knowledge graph of background information about real-world entities as a flow network, and knowledge as a fluid, abstract commodity. We show that computational fact checking of such a triple then amounts to finding a "knowledge stream" that emanates from the subject node and flows toward the object node through paths connecting them. Evaluation on a range of real-world and hand-crafted datasets of facts related to entertainment, business, sports, geography and more reveals that this network-flow model can be very effective in discerning true statements from false ones, outperforming existing algorithms on many test cases. Moreover, the model is expressive in its ability to automatically discover several useful path patterns and surface relevant facts that may help a human fact checker corroborate or refute a claim.
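
A highly simplified sketch of the flow intuition, using `networkx`: build a graph from (subject, predicate, object) triples, discount edges that pass through high-degree hub nodes, and measure the maximum flow from the claim's subject to its object. The capacity function and example triples below are illustrative assumptions; the paper's knowledge-stream formulation is considerably more elaborate.

```python
import math
import networkx as nx

def knowledge_stream_score(triples, subject, obj):
    """Simplified flow-based plausibility score for the claim (subject, ?, obj).
    Edge capacities discount hops through high-degree hub nodes, so flow that
    must pass through very generic entities counts for less."""
    G = nx.DiGraph()
    for s, p, o in triples:
        G.add_edge(s, o, predicate=p)
        G.add_edge(o, s, predicate=p)          # treat relations as traversable both ways
    for u, v in G.edges():
        G[u][v]["capacity"] = 1.0 / math.log(G.degree(u) * G.degree(v) + 2)
    if subject not in G or obj not in G:
        return 0.0
    flow_value, _ = nx.maximum_flow(G, subject, obj, capacity="capacity")
    return flow_value

triples = [
    ("Barack Obama", "bornIn", "Honolulu"),
    ("Honolulu", "locatedIn", "Hawaii"),
    ("Hawaii", "partOf", "United States"),
    ("Barack Obama", "graduatedFrom", "Harvard University"),
]
print(knowledge_stream_score(triples, "Barack Obama", "United States"))
```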

* Extended version of the paper in proceedings of ICDM 2017 

Ultra High-Dimensional Nonlinear Feature Selection for Big Biological Data

Aug 14, 2016
Makoto Yamada, Jiliang Tang, Jose Lugo-Martinez, Ermin Hodzic, Raunak Shrestha, Avishek Saha, Hua Ouyang, Dawei Yin, Hiroshi Mamitsuka, Cenk Sahinalp, Predrag Radivojac, Filippo Menczer, Yi Chang

Machine learning methods are used to discover complex nonlinear relationships in biological and medical data. However, sophisticated learning models are computationally infeasible for data with millions of features. Here we introduce the first feature selection method for nonlinear learning problems that can scale up to large, ultra-high dimensional biological data. More specifically, we scale up the novel Hilbert-Schmidt Independence Criterion Lasso (HSIC Lasso) to handle millions of features with tens of thousands of samples. The proposed method is guaranteed to find an optimal subset of maximally predictive features with minimal redundancy, yielding higher predictive power and improved interpretability. Its effectiveness is demonstrated through applications to classify phenotypes based on module expression in human prostate cancer patients and to detect enzymes among protein structures. We achieve high accuracy with as few as 20 out of one million features, a dimensionality reduction of 99.998%. Our algorithm can be implemented on commodity cloud computing platforms. The dramatic reduction of features may lead to the ubiquitous deployment of sophisticated prediction models in mobile health care applications.
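
For intuition, here is a tiny dense version of the HSIC Lasso objective: regress the output's centered Gram matrix on the per-feature centered Gram matrices with a non-negative L1 penalty. The kernel bandwidth, regularization value, and synthetic data are illustrative assumptions; the paper's contribution is scaling this computation to millions of features, which this sketch does not attempt.

```python
import numpy as np
from sklearn.linear_model import Lasso

def _centered_gram(x, sigma=1.0):
    """Centered, Frobenius-normalized Gaussian-kernel Gram matrix of one variable."""
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2 * sigma**2))
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K = H @ K @ H
    return K / (np.linalg.norm(K) + 1e-12)

def hsic_lasso(X, y, lam=0.05):
    """Dense HSIC Lasso: non-negative L1 regression of the output's Gram matrix
    on the per-feature Gram matrices. O(n^2 d) memory, so only for tiny data."""
    n, d = X.shape
    L = _centered_gram(y).ravel()
    K = np.column_stack([_centered_gram(X[:, j]).ravel() for j in range(d)])
    # Scale alpha so the soft threshold acts directly on kernel-alignment values.
    model = Lasso(alpha=lam / L.size, positive=True, fit_intercept=False, max_iter=50_000)
    model.fit(K, L)
    return model.coef_

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = np.sin(2 * X[:, 3]) + 0.1 * rng.normal(size=100)   # only feature 3 is informative
print(np.round(hsic_lasso(X, y), 3))                    # feature 3 should get the largest weight
```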

* Substantially improved version of arXiv:1411.2331 