
Hoda Heidari

ETH Zurich

On The Stability of Moral Preferences: A Problem with Computational Elicitation Methods

Aug 05, 2024

On the Pros and Cons of Active Learning for Moral Preference Elicitation

Jul 26, 2024

Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use

May 21, 2024

Red-Teaming for Generative AI: Silver Bullet or Security Theater?

Jan 29, 2024

Assessing AI Impact Assessments: A Classroom Study

Nov 19, 2023

RELand: Risk Estimation of Landmines via Interpretable Invariant Risk Minimization

Nov 06, 2023

The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements

Oct 10, 2023

Toward Operationalizing Pipeline-aware ML Fairness: A Research Agenda for Developing Practical Guidelines and Tools

Sep 29, 2023

Fine-Tuning Games: Bargaining and Adaptation for General-Purpose Models

Aug 11, 2023

Moral Machine or Tyranny of the Majority?

May 27, 2023