Jin-Hee Cho

Virginia Tech

DASH: Deception-Augmented Shared Mental Model for a Human-Machine Teaming System

Dec 21, 2025

MURIM: Multidimensional Reputation-based Incentive Mechanism for Federated Learning

Dec 15, 2025

PRIVEE: Privacy-Preserving Vertical Federated Learning Against Feature Inference Attacks

Dec 14, 2025

Sustainable Smart Farm Networks: Enhancing Resilience and Efficiency with Decision Theory-Guided Deep Reinforcement Learning

May 06, 2025

OPUS-VFL: Incentivizing Optimal Privacy-Utility Tradeoffs in Vertical Federated Learning

Apr 22, 2025

LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models

Apr 14, 2025

Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated Learning: A Step Towards Responsible AI

Mar 20, 2025

RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning by Balancing Privacy, Fairness and Utility in Autonomous Vehicles

Mar 20, 2025

Advancing Human-Machine Teaming: Concepts, Challenges, and Applications

Mar 16, 2025

Exposing LLM Vulnerabilities: Adversarial Scam Detection and Performance

Dec 01, 2024