Francesco Ventura

Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features

Jun 12, 2021
Francesco Ventura, Salvatore Greco, Daniele Apiletti, Tania Cerquitelli

What's in the box? Explaining the black-box model through an evaluation of its interpretable features

Jul 31, 2019
Francesco Ventura, Tania Cerquitelli

Automating concept-drift detection by self-evaluating predictive model degradation

Jul 18, 2019
Tania Cerquitelli, Stefano Proto, Francesco Ventura, Daniele Apiletti, Elena Baralis
