Mehul Motani

Teaching the Teacher: Improving Neural Network Distillability for Symbolic Regression via Jacobian Regularization

Jul 30, 2025

TaskGen: A Task-Based, Memory-Infused Agentic Framework using StrictJSON

Jul 22, 2024

Large Language Model as a System of Multiple Expert Agents: An Approach to solve the Abstraction and Reasoning Corpus Challenge

Oct 08, 2023

Local Intrinsic Dimensional Entropy

Apr 06, 2023

Learning, Fast and Slow: A Goal-Directed Memory-Based Approach for Dynamic Environments

Feb 01, 2023

Improving Mutual Information based Feature Selection by Boosting Unique Relevance

Dec 17, 2022

AP: Selective Activation for De-sparsifying Pruned Neural Networks

Dec 09, 2022

Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks

Dec 09, 2022

Towards Better Long-range Time Series Forecasting using Generative Forecasting

Dec 09, 2022

DropNet: Reducing Neural Network Complexity via Iterative Pruning

Jul 14, 2022