Rajkumar Buyya

Agentic Artificial Intelligence (AI): Architectures, Taxonomies, and Evaluation of Large Language Model Agents

Jan 18, 2026

Evidential Trust-Aware Model Personalization in Decentralized Federated Learning for Wearable IoT

Dec 22, 2025

Quantum Machine Learning for Cybersecurity: A Taxonomy and Future Directions

Dec 17, 2025

Quantum Artificial Intelligence (QAI): Foundations, Architectural Elements, and Future Directions

Nov 13, 2025

AirFed: Federated Graph-Enhanced Multi-Agent Reinforcement Learning for Multi-UAV Cooperative Mobile Edge Computing

Oct 27, 2025

Incentive-Based Federated Learning

Oct 16, 2025

SketchGuard: Scaling Byzantine-Robust Decentralized Federated Learning via Sketch-Based Screening

Oct 09, 2025

Network Structures as an Attack Surface: Topology-Based Privacy Leakage in Federated Learning

Jun 24, 2025

Input-Based Ensemble-Learning Method for Dynamic Memory Configuration of Serverless Computing Functions

Nov 12, 2024

Hermes: Memory-Efficient Pipeline Inference for Large Models on Edge Devices

Sep 09, 2024