
David Noever

Hallucinating AI Hijacking Attack: Large Language Models and Malicious Code Recommenders
David Noever, Michael Pokorny
Oct 09, 2024

Exploiting Alpha Transparency In Language And Vision-Based AI Systems
Feb 15, 2024

Transparency Attacks: How Imperceptible Image Layers Can Fool AI Perception
Jan 29, 2024

Satellite Captioning: Large Language Models to Augment Labeling
Dec 18, 2023

Evaluating AI Vocational Skills Through Professional Testing
Dec 17, 2023

Acoustic Cybersecurity: Exploiting Voice-Activated Systems
Nov 23, 2023

Can Large Language Models Find And Fix Vulnerable Software?
Aug 20, 2023

AI Text-to-Behavior: A Study In Steerability
Aug 07, 2023

Adversarial Agents For Attacking Inaudible Voice Activated Devices
Jul 25, 2023

Professional Certification Benchmark Dataset: The First 500 Jobs For Large Language Models
May 07, 2023