Chris Bryan

InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation

Jun 25, 2024

ASAP: Interpretable Analysis and Summarization of AI-generated Image Patterns at Scale

Apr 03, 2024

InterVLS: Interactive Model Understanding and Improvement with Vision-Language Surrogates

Nov 06, 2023

LINGO: Visually Debiasing Natural Language Instructions to Support Task Diversity

Apr 12, 2023

Real-Time Visual Feedback to Guide Benchmark Creation: A Human-and-Metric-in-the-Loop Workflow

Feb 09, 2023

A Survey of Parameters Associated with the Quality of Benchmarks in NLP

Oct 14, 2022

Hardness of Samples Need to be Quantified for a Reliable Evaluation System: Exploring Potential Opportunities with a New Task

Oct 14, 2022

DQI: A Guide to Benchmark Evaluation

Aug 10, 2020

Our Evaluation Metric Needs an Update to Encourage Generalization

Jul 14, 2020

DQI: Measuring Data Quality in NLP

May 02, 2020