
Manuel Tran

B-Cos Aligned Transformers Learn Human-Interpretable Features

Jan 18, 2024
Manuel Tran, Amal Lahiani, Yashin Dicente Cid, Melanie Boxberg, Peter Lienemann, Christian Matek, Sophia J. Wagner, Fabian J. Theis, Eldad Klaiman, Tingying Peng

Training Transitive and Commutative Multimodal Transformers with LoReTTa

May 23, 2023
Manuel Tran, Amal Lahiani, Yashin Dicente Cid, Fabian J. Theis, Tingying Peng, Eldad Klaiman


S5CL: Unifying Fully-Supervised, Self-Supervised, and Semi-Supervised Learning Through Hierarchical Contrastive Learning

Mar 14, 2022
Manuel Tran, Sophia J. Wagner, Melanie Boxberg, Tingying Peng
