Mehul Motani

Teaching the Teacher: Improving Neural Network Distillability for Symbolic Regression via Jacobian Regularization

Jul 30, 2025

TaskGen: A Task-Based, Memory-Infused Agentic Framework using StrictJSON

Jul 22, 2024

Large Language Model as a System of Multiple Expert Agents: An Approach to solve the Abstraction and Reasoning Corpus Challenge

Oct 08, 2023

Local Intrinsic Dimensional Entropy

Apr 06, 2023

Learning, Fast and Slow: A Goal-Directed Memory-Based Approach for Dynamic Environments

Feb 01, 2023

Improving Mutual Information based Feature Selection by Boosting Unique Relevance

Dec 17, 2022

Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks

Dec 09, 2022

Towards Better Long-range Time Series Forecasting using Generative Forecasting

Dec 09, 2022

AP: Selective Activation for De-sparsifying Pruned Neural Networks

Dec 09, 2022

DropNet: Reducing Neural Network Complexity via Iterative Pruning

Jul 14, 2022