
Mark Lee

University of Sheffield

Robust Bias Evaluation with FilBBQ: A Filipino Bias Benchmark for Question-Answering Language Models

Feb 16, 2026

Bias Attribution in Filipino Language Models: Extending a Bias Interpretability Metric for Application on Agglutinative Languages

Jun 08, 2025

Delta Decompression for MoE-based LLMs Compression

Feb 24, 2025

Filipino Benchmarks for Measuring Sexist and Homophobic Bias in Multilingual Language Models from Southeast Asia

Dec 10, 2024

A Novel Interpretability Metric for Explaining Bias in Language Models: Applications on Multilingual Models from Southeast Asia

Oct 20, 2024

Apple Intelligence Foundation Language Models

Jul 29, 2024

Revisiting MoE and Dense Speed-Accuracy Comparisons for LLM Training

May 23, 2024

MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training

Mar 22, 2024

Code-Mixed Probes Show How Pre-Trained Models Generalise On Code-Switched Text

Mar 07, 2024

The channel-spatial attention-based vision transformer network for automated, accurate prediction of crop nitrogen status from UAV imagery

Nov 12, 2021