Leonard Salewski

Zero-shot audio captioning with audio-language model guidance and audio context keywords
Nov 14, 2023
Leonard Salewski, Stefan Fauth, A. Sophia Koepke, Zeynep Akata

Zero-shot Translation of Attention Patterns in VQA Models to Natural Language
Nov 08, 2023
Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch, Zeynep Akata

In-Context Impersonation Reveals Large Language Models' Strengths and Biases
May 24, 2023
Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, Zeynep Akata

Diverse Video Captioning by Adaptive Spatio-temporal Attention
Aug 19, 2022
Zohreh Ghaderi, Leonard Salewski, Hendrik P. A. Lensch

CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations
Apr 05, 2022
Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch, Zeynep Akata

e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
May 08, 2021
Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, Thomas Lukasiewicz

Relational Generalized Few-Shot Learning
Jul 22, 2019
Xiahan Shi, Leonard Salewski, Martin Schiegg, Zeynep Akata, Max Welling