
Zizhan Zheng

Online Learning with Probing for Sequential User-Centric Selection

Jul 27, 2025

Fair Algorithms with Probing for Multi-Agent Multi-Armed Bandits

Jun 17, 2025

Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks

Oct 22, 2024

Belief-Enriched Pessimistic Q-Learning against Adversarial State Perturbations

Mar 06, 2024

Enhancing LLM Safety via Constrained Direct Preference Optimization

Mar 04, 2024

A First Order Meta Stackelberg Method for Robust Federated Learning

Jul 16, 2023

Learning to Backdoor Federated Learning

Mar 06, 2023

Online Learning for Adaptive Probing and Scheduling in Dense WLANs

Dec 27, 2022

Pandering in a Flexible Representative Democracy

Nov 18, 2022

Joint AP Probing and Scheduling: A Contextual Bandit Approach

Aug 13, 2021