
Rongyao Fang


FouriScale: A Frequency Perspective on Training-Free High-Resolution Image Synthesis

Mar 19, 2024
Linjiang Huang, Rongyao Fang, Aiping Zhang, Guanglu Song, Si Liu, Yu Liu, Hongsheng Li


InstructSeq: Unifying Vision Tasks with Instruction-conditioned Multi-modal Sequence Generation

Nov 30, 2023
Rongyao Fang, Shilin Yan, Zhaoyang Huang, Jingqiu Zhou, Hao Tian, Jifeng Dai, Hongsheng Li


Mimic before Reconstruct: Enhancing Masked Autoencoders with Feature Mimicking

Mar 09, 2023
Peng Gao, Renrui Zhang, Rongyao Fang, Ziyi Lin, Hongyang Li, Hongsheng Li, Qiao Yu


FeatAug-DETR: Enriching One-to-Many Matching for DETRs with Feature Augmentation

Mar 02, 2023
Rongyao Fang, Peng Gao, Aojun Zhou, Yingjie Cai, Si Liu, Jifeng Dai, Hongsheng Li


Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification

Jul 19, 2022
Renrui Zhang, Wei Zhang, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li


Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training

May 28, 2022
Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, Hongsheng Li


RBGNet: Ray-based Grouping for 3D Object Detection

Apr 05, 2022
Haiyang Wang, Shaoshuai Shi, Ze Yang, Rongyao Fang, Qi Qian, Hongsheng Li, Bernt Schiele, Liwei Wang


Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling

Nov 15, 2021
Renrui Zhang, Rongyao Fang, Wei Zhang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li


CLIP-Adapter: Better Vision-Language Models with Feature Adapters

Oct 09, 2021
Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, Yu Qiao
