Juyeon Heo

Do Concept Bottleneck Models Obey Locality?
Jan 02, 2024
Naveen Raman, Mateo Espinosa Zarlenga, Juyeon Heo, Mateja Jamnik

Estimation of Concept Explanations Should be Uncertainty Aware
Dec 13, 2023
Vihari Piratla, Juyeon Heo, Sukriti Singh, Adrian Weller

Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Nov 10, 2023
Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf

Leveraging Task Structures for Improved Identifiability in Neural Network Representations
Jun 26, 2023
Wenlin Chen, Julien Horwood, Juyeon Heo, José Miguel Hernández-Lobato

Robust Learning from Explanations
Mar 11, 2023
Juyeon Heo, Vihari Piratla, Matthew Wicker, Adrian Weller

Robust Explanation Constraints for Neural Networks
Dec 16, 2022
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller

Towards More Robust Interpretation via Local Gradient Alignment
Dec 07, 2022
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon

Fooling Neural Network Interpretations via Adversarial Model Manipulation
Feb 06, 2019
Juyeon Heo, Sunghwan Joo, Taesup Moon
