Martin Tutek

Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs

Jan 18, 2024
Haritz Puerto, Martin Tutek, Somak Aditya, Xiaodan Zhu, Iryna Gurevych

Out-of-Distribution Detection by Leveraging Between-Layer Transformation Smoothness

Oct 04, 2023
Fran Jelenić, Josip Jukić, Martin Tutek, Mate Puljiz, Jan Šnajder

CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration

Sep 15, 2023
Rachneet Sachdeva, Martin Tutek, Iryna Gurevych

Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods

Nov 15, 2022
Josip Jukić, Martin Tutek, Jan Šnajder

Staying True to Your Word: (How) Can Attention Become Explanation?

May 19, 2020
Martin Tutek, Jan Šnajder

Iterative Recursive Attention Model for Interpretable Sequence Classification

Aug 30, 2018
Martin Tutek, Jan Šnajder
