Kevin Klyman

CoPE: A Small Language Model for Steerable and Scalable Content Labeling

Dec 19, 2025

The 2025 Foundation Model Transparency Index

Dec 11, 2025

Who Evaluates AI's Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations

Nov 06, 2025

Do AI Companies Make Good on Voluntary Commitments to the White House?

Aug 11, 2025

New Tools are Needed for Tracking Adherence to AI Model Behavioral Use Clauses

May 28, 2025

Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers

Apr 25, 2025

Bridging the Data Provenance Gap Across Text, Speech and Video

Dec 19, 2024

Language model developers should report train-test overlap

Oct 10, 2024

Consent in Crisis: The Rapid Decline of the AI Data Commons

Jul 24, 2024

The Foundation Model Transparency Index v1.1: May 2024

Jul 17, 2024