
Carole-Jean Wu

Beyond Efficiency: Scaling AI Sustainably

Jun 08, 2024

Is Flash Attention Stable?

May 05, 2024

LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding

Apr 29, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons

Apr 18, 2024

Croissant: A Metadata Format for ML-Ready Datasets

Mar 28, 2024

CHAI: Clustered Head Attention for Efficient LLM Inference

Mar 12, 2024

HeteroSwitch: Characterizing and Taming System-Induced Data Heterogeneity in Federated Learning

Mar 07, 2024

Generative AI Beyond LLMs: System Implications of Multi-Modal Generation

Dec 22, 2023

Decoding Data Quality via Synthetic Corruptions: Embedding-guided Pruning of Code Data

Dec 05, 2023

Data Acquisition: A New Frontier in Data-centric AI

Nov 22, 2023