Aya Abdelsalam Ismail

Interpretable Mixture of Experts for Structured Data

Jun 05, 2022
Aya Abdelsalam Ismail, Sercan Ö. Arik, Jinsung Yoon, Ankur Taly, Soheil Feizi, Tomas Pfister

Improving Deep Learning Interpretability by Saliency Guided Training

Nov 29, 2021
Aya Abdelsalam Ismail, Héctor Corrada Bravo, Soheil Feizi

Improving Multimodal Accuracy Through Modality Pre-training and Attention

Nov 11, 2020
Aya Abdelsalam Ismail, Mahmudul Hasan, Faisal Ishtiaq

Benchmarking Deep Learning Interpretability in Time Series Predictions

Oct 26, 2020
Aya Abdelsalam Ismail, Mohamed Gunady, Héctor Corrada Bravo, Soheil Feizi

Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks

Oct 27, 2019
Aya Abdelsalam Ismail, Mohamed Gunady, Luiz Pessoa, Héctor Corrada Bravo, Soheil Feizi

Improving Long-Horizon Forecasts with Expectation-Biased LSTM Networks

Apr 18, 2018
Aya Abdelsalam Ismail, Timothy Wood, Héctor Corrada Bravo
