Ziqi Yang

Enhancing Security in Multi-Robot Systems through Co-Observation Planning, Reachability Analysis, and Network Flow

Mar 20, 2024

Towards Fair Graph Federated Learning via Incentive Mechanisms

Dec 20, 2023

Talk2Care: Facilitating Asynchronous Patient-Provider Communication with Large-Language-Model

Sep 22, 2023

Asymmetric Co-Training with Explainable Cell Graph Ensembling for Histopathological Image Classification

Aug 24, 2023

FLGo: A Fully Customizable Federated Learning Platform

Jun 21, 2023

Purifier: Defending Data Inference Attacks via Transforming Confidence Scores

Dec 01, 2022

Defending Model Inversion and Membership Inference Attacks via Prediction Purification

May 08, 2020

Statistical Outlier Identification in Multi-robot Visual SLAM using Expectation Maximization

Feb 07, 2020

Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking

Jun 14, 2019

Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment

Feb 22, 2019