Vinitra Swamy

InterpretCC: Conditional Computation for Inherently Interpretable Neural Networks

Feb 05, 2024
Vinitra Swamy, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja Käser

MEDITRON-70B: Scaling Medical Pretraining for Large Language Models

Nov 27, 2023
Zeming Chen, Alejandro Hernández Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas Köpf, Amirkeivan Mohtashami, Alexandre Sallinen, Alireza Sakhaeirad, Vinitra Swamy, Igor Krawczuk, Deniz Bayazit, Axel Marmet, Syrielle Montariol, Mary-Anne Hartley, Martin Jaggi, Antoine Bosselut

Unraveling Downstream Gender Bias from Large Language Models: A Study on AI Educational Writing Assistance

Nov 06, 2023
Thiemo Wambsganss, Xiaotian Su, Vinitra Swamy, Seyed Parsa Neshaei, Roman Rietsche, Tanja Käser

MultiModN- Multimodal, Multi-Task, Interpretable Modular Networks

Sep 25, 2023
Vinitra Swamy, Malika Satayeva, Jibril Frej, Thierry Bossy, Thijs Vogels, Martin Jaggi, Tanja Käser, Mary-Anne Hartley


The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations

Jul 01, 2023
Vinitra Swamy, Jibril Frej, Tanja Käser


Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

Dec 26, 2022
Vinitra Swamy, Sijia Du, Mirko Marras, Tanja Käser


Ripple: Concept-Based Interpretation for Raw Time Series Models in Education

Dec 13, 2022
Mohammad Asadi, Vinitra Swamy, Jibril Frej, Julien Vignoud, Mirko Marras, Tanja Käser
