Anastasios Kyrillidis

On the Error-Propagation of Inexact Deflation for Principal Component Analysis

Oct 06, 2023
Fangshuo Liao, Junhyung Lyle Kim, Cruz Barnum, Anastasios Kyrillidis

Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation

Oct 05, 2023
Chen Dun, Mirian Hipolito Garcia, Guoqing Zheng, Ahmed Hassan Awadallah, Anastasios Kyrillidis, Robert Sim

CrysFormer: Protein Structure Prediction via 3d Patterson Maps and Partial Structure Attention

Oct 05, 2023
Chen Dun, Qiutai Pan, Shikai Jin, Ria Stevens, Mitchell D. Miller, George N. Phillips, Jr., Anastasios Kyrillidis

Stochastic Implicit Neural Signed Distance Functions for Safe Motion Planning under Sensing Uncertainty

Sep 28, 2023
Carlos Quintero-Peña, Wil Thomason, Zachary Kingston, Anastasios Kyrillidis, Lydia E. Kavraki

Fast FixMatch: Faster Semi-Supervised Learning with Curriculum Batch Size

Sep 07, 2023
John Chen, Chen Dun, Anastasios Kyrillidis

Federated Learning Over Images: Vertical Decompositions and Pre-Trained Backbones Are Difficult to Beat

Sep 06, 2023
Erdong Hu, Yuxin Tang, Anastasios Kyrillidis, Chris Jermaine

Adaptive Federated Learning with Auto-Tuned Clients

Jun 19, 2023
Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis

Fed-ZERO: Efficient Zero-shot Personalization with Federated Mixture of Experts

Jun 14, 2023
Chen Dun, Mirian Hipolito Garcia, Guoqing Zheng, Ahmed Hassan Awadallah, Robert Sim, Anastasios Kyrillidis, Dimitrios Dimitriadis

Accelerated Convergence of Nesterov's Momentum for Deep Neural Networks under Partial Strong Convexity

Jun 13, 2023
Fangshuo Liao, Anastasios Kyrillidis

Scissorhands: Exploiting the Persistence of Importance Hypothesis for LLM KV Cache Compression at Test Time

May 26, 2023
Zichang Liu, Aditya Desai, Fangshuo Liao, Weitao Wang, Victor Xie, Zhaozhuo Xu, Anastasios Kyrillidis, Anshumali Shrivastava
