Mehul Motani
Large Language Model (LLM) as a System of Multiple Expert Agents: An Approach to solve the Abstraction and Reasoning Corpus (ARC) Challenge

Oct 08, 2023
John Chong Min Tan, Mehul Motani

Local Intrinsic Dimensional Entropy

Apr 06, 2023
Rohan Ghosh, Mehul Motani

Learning, Fast and Slow: A Goal-Directed Memory-Based Approach for Dynamic Environments

Feb 01, 2023
John Chong Min Tan, Mehul Motani

Improving Mutual Information based Feature Selection by Boosting Unique Relevance

Dec 17, 2022
Shiyu Liu, Mehul Motani

AP: Selective Activation for De-sparsifying Pruned Neural Networks

Dec 09, 2022
Shiyu Liu, Rohan Ghosh, Dylan Tan, Mehul Motani

Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks

Dec 09, 2022
Shiyu Liu, Rohan Ghosh, John Tan Chong Min, Mehul Motani

Towards Better Long-range Time Series Forecasting using Generative Forecasting

Dec 09, 2022
Shiyu Liu, Rohan Ghosh, Mehul Motani

DropNet: Reducing Neural Network Complexity via Iterative Pruning

Jul 14, 2022
John Tan Chong Min, Mehul Motani
