
Johannes Welbl

UCL

Consensus, dissensus and synergy between clinicians and specialist foundation models in radiology report generation

Dec 06, 2023

Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models

Jun 16, 2022

Training Compute-Optimal Large Language Models

Mar 29, 2022

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

Dec 08, 2021

Challenges in Detoxifying Language Models

Sep 15, 2021

Evaluating the Apperception Engine

Jul 09, 2020

Undersensitivity in Neural Reading Comprehension

Feb 15, 2020

Beat the AI: Investigating Adversarial Human Annotations for Reading Comprehension

Feb 02, 2020

Reducing Sentiment Bias in Language Models via Counterfactual Evaluation

Nov 08, 2019

Making sense of sensory input

Oct 05, 2019