Chaofan Tao

Electrocardiogram Instruction Tuning for Report Generation

Mar 13, 2024
Zhongwei Wan, Che Liu, Xin Wang, Chaofan Tao, Hui Shen, Zhenwu Peng, Jie Fu, Rossella Arcucci, Huaxiu Yao, Mi Zhang

RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis

Feb 25, 2024
Yao Mu, Junting Chen, Qinglong Zhang, Shoufa Chen, Qiaojun Yu, Chongjian Ge, Runjian Chen, Zhixuan Liang, Mengkang Hu, Chaofan Tao, Peize Sun, Haibao Yu, Chao Yang, Wenqi Shao, Wenhai Wang, Jifeng Dai, Yu Qiao, Mingyu Ding, Ping Luo

A Spectral Perspective towards Understanding and Improving Adversarial Robustness

Jun 25, 2023
Binxiao Huang, Rui Lin, Chaofan Tao, Ngai Wong

CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers

May 27, 2023
Dachuan Shi, Chaofan Tao, Anyi Rao, Zhendong Yang, Chun Yuan, Jiaqi Wang

DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference

Feb 24, 2023
Jiajun Zhou, Jiajun Wu, Yizhao Gao, Yuhao Ding, Chaofan Tao, Boyu Li, Fengbin Tu, Kwang-Ting Cheng, Hayden Kwok-Hay So, Ngai Wong

UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers

Jan 31, 2023
Dachuan Shi, Chaofan Tao, Ying Jin, Zhendong Yang, Chun Yuan, Jiaqi Wang

Frequency Regularization for Improving Adversarial Robustness

Dec 24, 2022
Binxiao Huang, Chaofan Tao, Rui Lin, Ngai Wong

ODG-Q: Robust Quantization via Online Domain Generalization

Oct 17, 2022
Chaofan Tao, Ngai Wong

Compression of Generative Pre-trained Language Models via Quantization

Mar 21, 2022
Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong
