Stephen Lin
Image to Pseudo-Episode: Boosting Few-Shot Segmentation by Unlabeled Data

May 14, 2024
Jie Zhang, Yuhan Li, Yude Wang, Stephen Lin, Shiguang Shan


Unifying Feature and Cost Aggregation with Transformers for Semantic and Visual Correspondence

Mar 17, 2024
Sunghwan Hong, Seokju Cho, Seungryong Kim, Stephen Lin


Collaboratively Self-supervised Video Representation Learning for Action Recognition

Jan 15, 2024
Jie Zhang, Zhifan Wan, Lanqing Hu, Stephen Lin, Shuzhe Wu, Shiguang Shan


Exploring Transferability for Randomized Smoothing

Dec 14, 2023
Kai Qiu, Huishuai Zhang, Zhirong Wu, Stephen Lin


NuTime: Numerically Multi-Scaled Embedding for Large-Scale Time Series Pretraining

Oct 12, 2023
Chenguo Lin, Xumeng Wen, Wei Cao, Congrui Huang, Jiang Bian, Stephen Lin, Zhirong Wu


Associative Transformer Is A Sparse Representation Learner

Sep 22, 2023
Yuwei Sun, Hideya Ochiai, Zhirong Wu, Stephen Lin, Ryota Kanai


Randomized Quantization for Data Agnostic Representation Learning

Dec 19, 2022
Huimin Wu, Chenyang Lei, Xiao Sun, Peng-Shuai Wang, Qifeng Chen, Kwang-Ting Cheng, Stephen Lin, Zhirong Wu


ClipCrop: Conditioned Cropping Driven by Vision-Language Model

Nov 21, 2022
Zhihang Zhong, Mingxi Cheng, Zhirong Wu, Yuhui Yuan, Yinqiang Zheng, Ji Li, Han Hu, Stephen Lin, Yoichi Sato, Imari Sato


Local Magnification for Data and Feature Augmentation

Nov 15, 2022
Kun He, Chang Liu, Stephen Lin, John E. Hopcroft


Could Giant Pretrained Image Models Extract Universal Representations?

Nov 03, 2022
Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao
