Ehsan K. Ardestani

MTrainS: Improving DLRM training efficiency using heterogeneous memories

Apr 19, 2023
Hiwot Tadese Kassa, Paul Johnson, Jason Akers, Mrinmoy Ghosh, Andrew Tulloch, Dheevatsa Mudigere, Jongsoo Park, Xing Liu, Ronald Dreslinski, Ehsan K. Ardestani

Building a Performance Model for Deep Learning Recommendation Model Training on GPUs

Jan 19, 2022
Zhongyi Lin, Louis Feng, Ehsan K. Ardestani, Jaewon Lee, John Lundell, Changkyu Kim, Arun Kejariwal, John D. Owens

Supporting Massive DLRM Inference Through Software Defined Memory

Nov 08, 2021
Ehsan K. Ardestani, Changkyu Kim, Seung Jae Lee, Luoshang Pan, Valmiki Rampersad, Jens Axboe, Banit Agrawal, Fuxun Yu, Ansha Yu, Trung Le, Hector Yuen, Shishir Juluri, Akshat Nanda, Manoj Wodekar, Dheevatsa Mudigere, Krishnakumar Nair, Maxim Naumov, Chris Peterson, Mikhail Smelyanskiy, Vijay Rao

High-performance, Distributed Training of Large-scale Deep Learning Recommendation Models

Apr 15, 2021
Dheevatsa Mudigere, Yuchen Hao, Jianyu Huang, Andrew Tulloch, Srinivas Sridharan, Xing Liu, Mustafa Ozdal, Jade Nie, Jongsoo Park, Liang Luo, Jie Amy Yang, Leon Gao, Dmytro Ivchenko, Aarti Basant, Yuxi Hu, Jiyan Yang, Ehsan K. Ardestani, Xiaodong Wang, Rakesh Komuravelli, Ching-Hsiang Chu, Serhat Yilmaz, Huayu Li, Jiyuan Qian, Zhuobo Feng, Yinbin Ma, Junjie Yang, Ellie Wen, Hong Li, Lin Yang, Chonglin Sun, Whitney Zhao, Dimitry Melts, Krishna Dhulipala, KR Kishore, Tyler Graf, Assaf Eisenman, Kiran Kumar Matam, Adi Gangidi, Guoqiang Jerry Chen, Manoj Krishnan, Avinash Nayak, Krishnakumar Nair, Bharath Muthiah, Mahmoud Khorashadi, Pallab Bhattacharya, Petr Lapukhov, Maxim Naumov, Lin Qiao, Mikhail Smelyanskiy, Bill Jia, Vijay Rao
