Zhifeng Lin

Learning Granularity-Unified Representations for Text-to-Image Person Re-identification

Jul 16, 2022
Zhiyin Shao, Xinyu Zhang, Meng Fang, Zhifeng Lin, Jian Wang, Changxing Ding

Privacy-Preserving Inference in Machine Learning Services Using Trusted Execution Environments

Dec 07, 2019
Krishna Giri Narra, Zhifeng Lin, Yongqin Wang, Keshav Balasubramaniam, Murali Annavaram

Train Where the Data is: A Case for Bandwidth Efficient Coded Training

Oct 22, 2019
Zhifeng Lin, Krishna Giri Narra, Mingchao Yu, Salman Avestimehr, Murali Annavaram

Collage Inference: Achieving low tail latency during distributed image classification using coded redundancy models

Jun 05, 2019
Krishna Narra, Zhifeng Lin, Ganesh Ananthanarayanan, Salman Avestimehr, Murali Annavaram

Collage Inference: Tolerating Stragglers in Distributed Neural Network Inference using Coding

Apr 27, 2019
Krishna Giri Narra, Zhifeng Lin, Ganesh Ananthanarayanan, Salman Avestimehr, Murali Annavaram

GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training

Nov 08, 2018
Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr
