Siddharth Samsi

Sustainable Supercomputing for AI: GPU Power Capping at HPC Scale

Feb 25, 2024
Dan Zhao, Siddharth Samsi, Joseph McDonald, Baolin Li, David Bestor, Michael Jones, Devesh Tiwari, Vijay Gadepally

A Benchmark Dataset for Tornado Detection and Prediction using Full-Resolution Polarimetric Weather Radar Data
Jan 26, 2024
Mark S. Veillette, James M. Kurdzo, Phillip M. Stepanian, John Y. N. Cho, Siddharth Samsi, Joseph McDonald

Lincoln AI Computing Survey (LAICS) Update
Oct 13, 2023
Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner

From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference
Oct 04, 2023
Siddharth Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas, Michael Jones, William Bergeron, Jeremy Kepner, Devesh Tiwari, Vijay Gadepally

A Green(er) World for A.I.
Jan 27, 2023
Dan Zhao, Nathan C. Frey, Joseph McDonald, Matthew Hubbell, David Bestor, Michael Jones, Andrew Prout, Vijay Gadepally, Siddharth Samsi

Building Heterogeneous Cloud System for Machine Learning Inference
Oct 15, 2022
Baolin Li, Siddharth Samsi, Vijay Gadepally, Devesh Tiwari

An Evaluation of Low Overhead Time Series Preprocessing Techniques for Downstream Machine Learning
Sep 12, 2022
Matthew L. Weiss, Joseph McDonald, David Bestor, Charles Yee, Daniel Edelman, Michael Jones, Andrew Prout, Andrew Bowne, Lindsey McEvoy, Vijay Gadepally, Siddharth Samsi

Developing a Series of AI Challenges for the United States Department of the Air Force
Jul 14, 2022
Vijay Gadepally, Gregory Angelides, Andrei Barbu, Andrew Bowne, Laura J. Brattain, Tamara Broderick, Armando Cabrera, Glenn Carl, Ronisha Carter, Miriam Cha, Emilie Cowen, Jesse Cummings, Bill Freeman, James Glass, Sam Goldberg, Mark Hamilton, Thomas Heldt, Kuan Wei Huang, Phillip Isola, Boris Katz, Jamie Koerner, Yen-Chen Lin, David Mayo, Kyle McAlpin, Taylor Perron, Jean Piou, Hrishikesh M. Rao, Hayley Reynolds, Kaira Samuel, Siddharth Samsi, Morgan Schmidt, Leslie Shing, Olga Simek, Brandon Swenson, Vivienne Sze, Jonathan Taylor, Paul Tylkin, Mark Veillette, Matthew L Weiss, Allan Wollaber, Sophia Yuditskaya, Jeremy Kepner

Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models
May 19, 2022
Joseph McDonald, Baolin Li, Nathan Frey, Devesh Tiwari, Vijay Gadepally, Siddharth Samsi
