
Giovanni Iacca

Frequency Matters: Fast Model-Agnostic Data Curation for Pruning and Quantization

Mar 17, 2026

Generative AI collective behavior needs an interactionist paradigm

Jan 15, 2026

Neural Brain: A Neuroscience-inspired Framework for Embodied Agents

May 14, 2025

Nature's Insight: A Novel Framework and Comprehensive Analysis of Agentic Reasoning Through the Lens of Neuroscience

May 07, 2025

Rethinking Few-Shot Adaptation of Vision-Language Models in Two Stages

Mar 14, 2025

2SSP: A Two-Stage Framework for Structured Pruning of LLMs

Add code
Jan 29, 2025
Figure 1 for 2SSP: A Two-Stage Framework for Structured Pruning of LLMs
Figure 2 for 2SSP: A Two-Stage Framework for Structured Pruning of LLMs
Figure 3 for 2SSP: A Two-Stage Framework for Structured Pruning of LLMs
Figure 4 for 2SSP: A Two-Stage Framework for Structured Pruning of LLMs
Viaarxiv icon

SMOSE: Sparse Mixture of Shallow Experts for Interpretable Reinforcement Learning in Continuous Control Tasks

Dec 17, 2024

Zeroth-Order Adaptive Neuron Alignment Based Pruning without Re-Training

Nov 11, 2024

An investigation on the use of Large Language Models for hyperparameter tuning in Evolutionary Algorithms

Aug 05, 2024

The Effect of Training Schedules on Morphological Robustness and Generalization

Jul 19, 2024