
Iraklis Anagnostopoulos

Sponge Attacks on Sensing AI: Energy-Latency Vulnerabilities and Defense via Model Pruning
May 09, 2025

CarbonCall: Sustainability-Aware Function Calling for Large Language Models on Edge Devices
Apr 29, 2025

Carbon-Efficient 3D DNN Acceleration: Optimizing Performance and Sustainability
Apr 14, 2025

Ecomap: Sustainability-Driven Optimization of Multi-Tenant DNN Execution on Edge Servers
Mar 06, 2025

Multi-Agent Geospatial Copilots for Remote Sensing Workflows
Jan 27, 2025

Leveraging Highly Approximated Multipliers in DNN Inference
Dec 21, 2024

RankMap: Priority-Aware Multi-DNN Manager for Heterogeneous Embedded Devices
Nov 26, 2024

Less is More: Optimizing Function Calling for LLM Execution on Edge Devices
Nov 23, 2024

LLM-dCache: Improving Tool-Augmented LLMs with GPT-Driven Localized Data Caching
Jun 10, 2024

Hardware-Aware DNN Compression via Diverse Pruning and Mixed-Precision Quantization
Dec 23, 2023