Qifan Xu

An Efficient and Robust Method for Chest X-Ray Rib Suppression that Improves Pulmonary Abnormality Diagnosis

Feb 19, 2023
Di Xu, Qifan Xu, Kevin Nhieu, Dan Ruan, Ke Sheng

To what extent can Plug-and-Play methods outperform neural networks alone in low-dose CT reconstruction

Feb 15, 2022
Qifan Xu, Qihui Lyu, Dan Ruan, Ke Sheng

2.5-dimensional distributed model training

May 30, 2021
Boxiang Wang, Qifan Xu, Zhengda Bian, Yang You

Maximizing Parallelism in Distributed Training for Huge Neural Networks

May 30, 2021
Zhengda Bian, Qifan Xu, Boxiang Wang, Yang You

An Efficient 2D Method for Training Super-Large Deep Learning Models

Apr 12, 2021
Qifan Xu, Shenggui Li, Chaoyu Gong, Yang You