
Levent Sagun

Networked Inequality: Preferential Attachment Bias in Graph Neural Network Link Prediction
Sep 29, 2023

Weisfeiler and Lehman Go Measurement Modeling: Probing the Validity of the WL Test
Jul 11, 2023

Simplicity Bias Leads to Amplified Performance Disparities
Dec 13, 2022

Measuring and signing fairness as performance under multiple stakeholder distributions
Jul 20, 2022

Understanding out-of-distribution accuracies through quantifying difficulty of test samples
Mar 28, 2022

Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision
Feb 22, 2022

Fairness Indicators for Systematic Assessments of Visual Feature Extractors
Feb 15, 2022

Transformed CNNs: recasting pre-trained convolutional layers with self-attention
Jun 10, 2021

ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases
Mar 19, 2021

More data or more parameters? Investigating the effect of data structure on generalization
Mar 09, 2021