Philip Feldman

Trapping LLM Hallucinations Using Tagged Context Prompts

Jun 09, 2023
Philip Feldman, James R. Foulds, Shimei Pan

Down the Rabbit Hole: Detecting Online Extremism, Radicalisation, and Politicised Hate Speech

Jan 27, 2023
Jarod Govers, Philip Feldman, Aaron Dant, Panos Patros

Polling Latent Opinions: A Method for Computational Sociolinguistics Using Transformer Language Models

Apr 19, 2022
Philip Feldman, Aaron Dant, James R. Foulds, Shimei Pan

Ethics, Rules of Engagement, and AI: Neural Narrative Mapping Using Large Transformer Language Models

Feb 05, 2022
Philip Feldman, Aaron Dant, David Rosenbluth

Analyzing COVID-19 Tweets with Transformer-based Language Models

May 06, 2021
Philip Feldman, Sim Tiwari, Charissa S. L. Cheah, James R. Foulds, Shimei Pan

Training robust anomaly detection using ML-Enhanced simulations

Aug 27, 2020
Philip Feldman

Navigating Language Models with Synthetic Agents

Aug 24, 2020
Philip Feldman
